{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4989","id":1376832233,"node_id":"I_kwDODunzps5SEMrp","number":4989,"title":"Running add_column() seems to corrupt existing sequence-type column info","user":{"login":"derek-rocheleau","id":93728165,"node_id":"U_kgDOBZYtpQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93728165?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/derek-rocheleau","html_url":"https:\/\/github.com\/derek-rocheleau","followers_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/followers","following_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/orgs","repos_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/repos","events_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/derek-rocheleau\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1663436525000,"updated_at":1663436525000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"I have a dataset that contains a column (\"foo\") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:\r\n\r\nds = load_dataset(...)\r\ndf = ds.to_pandas()\r\n\r\ndf:\r\nfoo_0 | foo_1 | foo_2 | foo_3\r\n0.0 | 1.0 | 2.0 | 3.0\r\n\r\nIf I run .add_column(\"new_col\", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. 
Dataframe 1st row would be:\r\n\r\nds = load_dataset(...)\r\nnew_ds = ds.add_column(\"new_col\", data)\r\ndf = new_ds.to_pandas()\r\n\r\ndf:\r\nfoo | new_col\r\n[0.0, 1.0, 2.0, 3.0] | new_val\r\n\r\nI've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4989\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4988","id":1376096584,"node_id":"I_kwDODunzps5SBZFI","number":4988,"title":"Add `IterableDataset.from_generator` to the API","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"open","locked":false,"assignee":{"login":"hamid-vakilzadeh","id":56002455,"node_id":"MDQ6VXNlcjU2MDAyNDU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56002455?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh","html_url":"https:\/\/github.com\/hamid-vakilzadeh","followers_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/followers","following_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/orgs","repos_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/repos","events_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/received_events","type":"User","site_admin":false},"assignees":[{"login":"hamid-vakilzadeh","id":56002455,"node_id":"MDQ6VXNlcjU2MDAyNDU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56002455?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh","html_url":"https:\/\/github.com\/hamid-vakilzadeh","followers_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/followers","following_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/orgs","repos_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/repos","events_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hamid-vakilzadeh\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#take"],"created_at":1663341581000,"updated_at":1663434419000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"We've just added `Dataset.from_generator` to the API. 
It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.\r\n\r\ncc @lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4988\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987","id":1376006477,"node_id":"PR_kwDODunzps4_GlIu","number":4987,"title":"Embed image\/audio data in dl_and_prepare parquet","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663337367000,"updated_at":1663345487000,"closed_at":1663345355000,"author_association":"MEMBER","active_lock_reason":null,"body":"Embed the bytes of the image or audio files in the Parquet files directly, instead of having a \"path\" that points to a local file.\r\n\r\nIndeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4987","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987.patch","merged_at":1663345355000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986","id":1375895035,"node_id":"PR_kwDODunzps4_GNSd","number":4986,"title":"[doc] Fix broken snippet that had too many quotes","user":{"login":"tomaarsen","id":37621491,"node_id":"MDQ6VXNlcjM3NjIxNDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37621491?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tomaarsen","html_url":"https:\/\/github.com\/tomaarsen","followers_url":"https:\/\/api.github.com\/users\/tomaarsen\/followers","following_url":"https:\/\/api.github.com\/users\/tomaarsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tomaarsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tomaarsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tomaarsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tomaarsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/tomaarsen\/repos","events_url":"https:\/\/api.github.com\/users\/tomaarsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tomaarsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4986\/en\/process#map):\r\n![image](https:\/\/user-images.githubusercontent.com\/37621491\/190646405-6afa06fa-9eac-48f6-ab30-2677944fb7b6.png)\r\n"],"created_at":1663332067000,"updated_at":1663366341000,"closed_at":1663349534000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hello!\r\n\r\n### Pull request overview\r\n* Fix broken snippet in https:\/\/huggingface.co\/docs\/datasets\/main\/en\/process that has too many quotes\r\n\r\n### Details\r\nThe snippet in question can be found here: https:\/\/huggingface.co\/docs\/datasets\/main\/en\/process#map\r\nThis screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly:\r\n![image](https:\/\/user-images.githubusercontent.com\/37621491\/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png)\r\n\r\nThe change speaks for itself.\r\n\r\nThank you for the detailed documentation, by the way. 
\r\n\r\n- Tom Aarsen\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4986","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986.patch","merged_at":1663349534000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985","id":1375807768,"node_id":"PR_kwDODunzps4_F6kU","number":4985,"title":"[WIP] Prefer split patterns from directories over split patterns from filenames","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4985). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1663327240000,"updated_at":1663334541000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"related to https:\/\/github.com\/huggingface\/datasets\/issues\/4895\r\n\r\ntodo:\r\n\r\n- [ ] test","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4985","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984","id":1375690330,"node_id":"PR_kwDODunzps4_FhTm","number":4984,"title":"docs: \u270f\ufe0f add links to the Datasets API","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https:\/\/github.com\/huggingface\/datasets-server\/issues\/568"],"created_at":1663320852000,"updated_at":1663333814000,"closed_at":1663333653000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I added some links to the Datasets API in the docs. See https:\/\/github.com\/huggingface\/datasets-server\/pull\/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.\r\n\r\nI'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. 
cc @lhoestq @julien-c @albertvillanova @stevhliu.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4984","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4983","id":1375667654,"node_id":"I_kwDODunzps5R_wXG","number":4983,"title":"How to convert torch.utils.data.Dataset to huggingface dataset?","user":{"login":"DEROOCE","id":77595952,"node_id":"MDQ6VXNlcjc3NTk1OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77595952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DEROOCE","html_url":"https:\/\/github.com\/DEROOCE","followers_url":"https:\/\/api.github.com\/users\/DEROOCE\/followers","following_url":"https:\/\/api.github.com\/users\/DEROOCE\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DEROOCE\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DEROOCE\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DEROOCE\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DEROOCE\/orgs","repos_url":"https:\/\/api.github.com\/users\/DEROOCE\/repos","events_url":"https:\/\/api.github.com\/users\/DEROOCE\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DEROOCE\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```"],"created_at":1663319710000,"updated_at":1663342106000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. 
However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n```python\r\nfrom datasets import Dataset\r\ndata = [[1, 2],[3, 4]]\r\nds = Dataset.from_dict({\"data\": data})\r\nds = ds.with_format(\"torch\")\r\nds[0]\r\nds[:2]\r\n```\r\nSo is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert?\r\nThanks.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4982","id":1375604693,"node_id":"I_kwDODunzps5R_g_V","number":4982,"title":"Create dataset_infos.json with VALIDATION and TEST splits","user":{"login":"skalinin","id":26695348,"node_id":"MDQ6VXNlcjI2Njk1MzQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26695348?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skalinin","html_url":"https:\/\/github.com\/skalinin","followers_url":"https:\/\/api.github.com\/users\/skalinin\/followers","following_url":"https:\/\/api.github.com\/users\/skalinin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skalinin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skalinin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skalinin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skalinin\/orgs","repos_url":"https:\/\/api.github.com\/users\/skalinin\/repos","events_url":"https:\/\/api.github.com\/users\/skalinin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skalinin\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1663316479000,"updated_at":1663323163000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"The problem is described in that [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4895#issuecomment-1247975569). \r\n\r\n> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:\r\n> ValueError: Unknown split \"test\". Should be one of ['train'].\r\n> \r\n> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN\r\n> \r\n> You can find the code here: https:\/\/huggingface.co\/datasets\/sberbank-ai\/Peter\/tree\/add_splits (add_splits branch)\r\n\r\nI tried to clear the cache folder, than I got an another error. 
I run:\r\n\r\n```\r\nrm -r ~\/.cache\/huggingface \r\ndatasets-cli test Peter.py --save_infos --all_configs\r\n```\r\n\r\nThe error message:\r\n```\r\nUsing custom data configuration default\r\nTesting builder 'default' (1\/1)\r\nDownloading and preparing dataset peter\/default to \/Users\/kalinin\/.cache\/huggingface\/datasets\/peter\/default\/0.0.0\/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4\/4 [00:00<00:00, 5160.63it\/s]\r\nExtracting data files: 0%| | 0\/4 [00:00\r\n sys.exit(main())\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/commands\/datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/commands\/test.py\", line 137, in run\r\n builder.download_and_prepare(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1227, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 771, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/Users\/kalinin\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/Peter\/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d\/Peter.py\", line 23, in _split_generators\r\n data_files = dl_manager.download_and_extract(_URLS)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py\", line 431, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py\", line 403, in extract\r\n extracted_paths = map_nested(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 393, in map_nested\r\n mapped = [\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 394, in \r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 330, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 213, in cached_path\r\n output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/extract.py\", line 46, in extract\r\n self.extractor.extract(input_path, output_path, extractor_format)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/extract.py\", line 263, in extract\r\n with FileLock(lock_path):\r\n File 
\"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 399, in __init__\r\n max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 328, in __del__\r\n self.release(force=True)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 303, in release\r\n with self._thread_lock:\r\nAttributeError: 'UnixFileLock' object has no attribute '_thread_lock'\r\nExtracting data files: 0%| | 0\/4 [00:00 1 Dataset.from_dict({\"x\": [1.0, 2.0, 3.0]}, features=Features(x=Value(\"float16\")))\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)\r\n 865 mapping = features.encode_batch(mapping)\r\n 866 mapping = {\r\n 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)\r\n 868 for col, data in mapping.items()\r\n 869 }\r\n--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n 871 if info.features is None:\r\n 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)\r\n 734 @classmethod\r\n 735 def from_pydict(cls, *args, **kwargs):\r\n 736 \"\"\"\r\n 737 Construct a Table from Arrow arrays or columns\r\n 738 \r\n (...)\r\n 748 :class:`datasets.table.Table`:\r\n 749 \"\"\"\r\n--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/table.pxi:3648, in pyarrow.lib.Table.from_pydict()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/table.pxi:5174, in pyarrow.lib._from_pydict()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:343, in pyarrow.lib.asarray()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:231, in pyarrow.lib.array()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)\r\n 192 # otherwise we can finally use the user's type\r\n 193 elif type is not None:\r\n 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image\r\n 195 # Also, when trying type \"string\", we don't want to convert integers or floats to \"string\".\r\n 196 # We only do it if trying_type is False - since this is what the user asks for.\r\n--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n 198 return out\r\n 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1683, in _wrap_for_chunked_arrays..wrapper(array, *args, **kwargs)\r\n 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1682 else:\r\n-> 1683 return func(array, *args, **kwargs)\r\n\r\nFile 
~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)\r\n 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)\r\n 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):\r\n-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n 1854 raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1683, in _wrap_for_chunked_arrays..wrapper(array, *args, **kwargs)\r\n 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1682 else:\r\n-> 1683 return func(array, *args, **kwargs)\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)\r\n 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\r\n 1761 raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\n-> 1762 return array.cast(pa_type)\r\n 1763 raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{pa_type}\")\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:919, in pyarrow.lib.Array.cast()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/compute.py:389, in cast(arr, target_type, safe, options)\r\n 387 else:\r\n 388 options = CastOptions.safe(target_type)\r\n--> 389 return call_function(\"cast\", [arr], options)\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/_compute.pyx:560, in pyarrow._compute.call_function()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/_compute.pyx:355, in pyarrow._compute.Function.call()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/error.pxi:121, in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4981\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4981\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4980","id":1374868083,"node_id":"I_kwDODunzps5R8tJz","number":4980,"title":"Make `pyarrow` 
optional","user":{"login":"KOLANICH","id":240344,"node_id":"MDQ6VXNlcjI0MDM0NA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/240344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KOLANICH","html_url":"https:\/\/github.com\/KOLANICH","followers_url":"https:\/\/api.github.com\/users\/KOLANICH\/followers","following_url":"https:\/\/api.github.com\/users\/KOLANICH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KOLANICH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KOLANICH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KOLANICH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KOLANICH\/orgs","repos_url":"https:\/\/api.github.com\/users\/KOLANICH\/repos","events_url":"https:\/\/api.github.com\/users\/KOLANICH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KOLANICH\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https:\/\/github.com\/huggingface\/datasets\/blob\/51aef08ad7053c0bfe8f9a961207b26df15850d3\/src\/datasets\/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite \/ a different library with minimal functionality (datasets-lite ?)","Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ","Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n"],"created_at":1663263483000,"updated_at":1663349027000,"closed_at":1663349027000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nIs `pyarrow` really needed for every dataset?\r\n\r\n**Describe the solution you'd like**\r\nIt is made optional.\r\n\r\n**Describe alternatives you've considered**\r\nLikely, no.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979","id":1374820758,"node_id":"PR_kwDODunzps4_CouM","number":4979,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663260663000,"updated_at":1663262062000,"closed_at":1663261929000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908\r\n- #4921\r\n- #4931","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979.patch","merged_at":1663261929000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978","id":1374271504,"node_id":"PR_kwDODunzps4_Axnh","number":4978,"title":"Update IndicGLUE download links","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663236357000,"updated_at":1663279220000,"closed_at":1663279054000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978.patch","merged_at":1663279054000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4977","id":1372962157,"node_id":"I_kwDODunzps5R1b1t","number":4977,"title":"Providing dataset size","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets\/ --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926","Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API","Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests\/additional coding) would be really useful :hugs: "],"created_at":1663160967000,"updated_at":1663257838000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nEspecially for big datasets like [LAION](https:\/\/huggingface.co\/datasets\/laion\/laion2B-en\/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).\r\n\r\n**Describe the solution you'd like**\r\nAuto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).\r\n\r\n**Describe alternatives you've considered**\r\nPeople should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: \r\n\r\n**Additional context**\r\nMentioned to @lhoestq \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4976","id":1372322382,"node_id":"I_kwDODunzps5Ry_pO","number":4976,"title":"Hope to adapt Python3.9 as soon as possible","user":{"login":"RedHeartSecretMan","id":74012141,"node_id":"MDQ6VXNlcjc0MDEyMTQx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/74012141?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RedHeartSecretMan","html_url":"https:\/\/github.com\/RedHeartSecretMan","followers_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/followers","following_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/orgs","repos_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/repos","events_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?","There is this related issue already: https:\/\/github.com\/huggingface\/datasets\/issues\/4113\r\nAnd I guess we need a CI job for 3.9 ^^"],"created_at":1663130542000,"updated_at":1663256697000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nAdd any other context about the feature request here.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975","id":1371703691,"node_id":"PR_kwDODunzps4-4NXX","number":4975,"title":"Add `fn_kwargs` param to `IterableDataset.map`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663085945000,"updated_at":1663087667000,"closed_at":1663087534000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add the `fn_kwargs` parameter to `IterableDataset.map`.\r\n\r\n(\"Resolves\" 
https:\/\/discuss.huggingface.co\/t\/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free\/22780\/3)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4975","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975.patch","merged_at":1663087534000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974","id":1371682020,"node_id":"PR_kwDODunzps4-4Iri","number":4974,"title":"[GH->HF] Part 2: Remove all dataset scripts from github","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4974). All of your documentation changes will be reflected on that endpoint.","So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? 
Maybe you guys could just redirect metrics to that library."],"created_at":1663084872000,"updated_at":1663527425000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Now that all the datasets live on the Hub we can remove the \/datasets directory that contains all the dataset scripts of this repository\r\n\r\nNeeds https:\/\/github.com\/huggingface\/datasets\/pull\/4973 to be merged first\r\nand PR to be enabled on the Hub for non-namespaced datasets","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973","id":1371600074,"node_id":"PR_kwDODunzps4-33JW","number":4973,"title":"[GH->HF] Load datasets from the Hub","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Duplicate of:\r\n- #4059"],"created_at":1663081301000,"updated_at":1663255611000,"closed_at":1663255466000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently datasets with no namespace (e.g. 
squad, glue) are loaded from github.\r\n\r\nIn this PR I changed this logic to use the Hugging Face Hub instead.\r\n\r\nThis is the first step in removing all the dataset scripts in this repository\r\n\r\nrelated to discussions in https:\/\/github.com\/huggingface\/datasets\/pull\/4059 (I should have continued from this PR actually)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4973","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972","id":1371443306,"node_id":"PR_kwDODunzps4-3VVF","number":4972,"title":"Fix map batched with torch output","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4972). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1663074994000,"updated_at":1663256568000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Reported in https:\/\/discuss.huggingface.co\/t\/typeerror-when-applying-map-after-set-format-type-torch\/23067\/2\r\n\r\nCurrently it fails if one uses batched `map` and the map function returns a torch tensor.\r\n\r\nI fixed it for torch, tf, jax and pandas series.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4972","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971","id":1370319516,"node_id":"PR_kwDODunzps4-zk3g","number":4971,"title":"Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663006104000,"updated_at":1663077068000,"closed_at":1663076925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.\r\n\r\nThis makes the behavior inconsistent with `IterableDataset.map`.\r\n \r\n(It seems this issue was introduced by mistake in https:\/\/github.com\/huggingface\/datasets\/pull\/2246) \r\n\r\nFix 
https:\/\/github.com\/huggingface\/datasets\/issues\/4858","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4971","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971.patch","merged_at":1663076924000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970","id":1369433074,"node_id":"PR_kwDODunzps4-wkY2","number":4970,"title":"Support streaming nli_tr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662968925000,"updated_at":1662972304000,"closed_at":1662972188000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming nli_tr dataset.\r\n\r\nThis PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding.\r\n\r\nFix 
#3186.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4970","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970.patch","merged_at":1662972188000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969","id":1369334740,"node_id":"PR_kwDODunzps4-wPOk","number":4969,"title":"Fix data URL and metadata of vivos dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662963154000,"updated_at":1662966975000,"closed_at":1662966859000,"author_association":"MEMBER","active_lock_reason":null,"body":"After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https:\/\/doi.org\/10.5281\/zenodo.7068130\r\n\r\nThis PR updates their data URL and some metadata (homepage, citation and license).\r\n\r\nFix 
#4936.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4969","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969.patch","merged_at":1662966859000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968","id":1369312877,"node_id":"PR_kwDODunzps4-wKkw","number":4968,"title":"Support streaming compguesswhat dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662961344000,"updated_at":1662969606000,"closed_at":1662969486000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming `compguesswhat` dataset.\r\n\r\nFix #3191.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4968","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968.patch","merged_at":1662969486000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967","id":1369092452,"node_id":"PR_kwDODunzps4-vbS-","number":4967,"title":"Strip \"\/\" in local dataset path to avoid empty dataset name error","user":{"login":"apohllo","id":40543,"node_id":"MDQ6VXNlcjQwNTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apohllo","html_url":"https:\/\/github.com\/apohllo","followers_url":"https:\/\/api.github.com\/users\/apohllo\/followers","following_url":"https:\/\/api.github.com\/users\/apohllo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apohllo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apohllo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apohllo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apohllo\/orgs","repos_url":"https:\/\/api.github.com\/users\/apohllo\/repos","events_url":"https:\/\/api.github.com\/users\/apohllo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apohllo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662937756000,"updated_at":1662996778000,"closed_at":1662996638000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4967","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967.patch","merged_at":1662996638000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4965","id":1368661002,"node_id":"I_kwDODunzps5RlBwK","number":4965,"title":"[Apple M1] MemoryError: Cannot allocate write+execute memory for 
ffi.callback()","user":{"login":"hoangtnm","id":35718590,"node_id":"MDQ6VXNlcjM1NzE4NTkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35718590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hoangtnm","html_url":"https:\/\/github.com\/hoangtnm","followers_url":"https:\/\/api.github.com\/users\/hoangtnm\/followers","following_url":"https:\/\/api.github.com\/users\/hoangtnm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hoangtnm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hoangtnm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hoangtnm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hoangtnm\/orgs","repos_url":"https:\/\/api.github.com\/users\/hoangtnm\/repos","events_url":"https:\/\/api.github.com\/users\/hoangtnm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hoangtnm\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.","Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?"],"created_at":1662825349000,"updated_at":1663426281000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI'm trying to run `cast_column(\"audio\", Audio())` on Apple M1 Pro, but it seems that it doesn't work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\n\r\ndataset = load_dataset(\"csv\", data_files=\".\/train.csv\")[\"train\"]\r\ndataset = dataset.map(lambda x: {\"audio\": str(DATA_DIR \/ \"audio\" \/ x[\"audio\"])})\r\ndataset = dataset.cast_column(\"audio\", Audio())\r\ndataset[0]\r\n```\r\n\r\n## Expected results\r\n```\r\n{'audio': {'bytes': None,\r\n 'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c\/en-US~JOINT_ACCOUNT\/602ba55abb1e6d0fbce92065.wav'},\r\n 'english_transcription': 'I would like to set up a joint account with my partner',\r\n 'intent_class': 11,\r\n 'lang_id': 4,\r\n 'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c\/en-US~JOINT_ACCOUNT\/602ba55abb1e6d0fbce92065.wav',\r\n 'transcription': 'I would like to set up a joint account with my partner'}\r\n```\r\n\r\n\r\n## Actual results\r\n````---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\nInput In [6], in ()\r\n----> 1 dataset[0]\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)\r\n 2163 def __getitem__(self, key): # noqa: F811\r\n 2164 \"\"\"Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).\"\"\"\r\n-> 2165 return self._getitem(\r\n 2166 key,\r\n 2167 )\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)\r\n 2148 formatter = 
get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)\r\n 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n-> 2150 formatted_output = format_table(\r\n 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n 2152 )\r\n 2153 return formatted_output\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)\r\n 530 python_formatter = PythonFormatter(features=None)\r\n 531 if format_columns is None:\r\n--> 532 return formatter(pa_table, query_type=query_type)\r\n 533 elif query_type == \"column\":\r\n 534 if key in format_columns:\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)\r\n 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:\r\n 280 if query_type == \"row\":\r\n--> 281 return self.format_row(pa_table)\r\n 282 elif query_type == \"column\":\r\n 283 return self.format_column(pa_table)\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:312, in PythonFormatter.format_row(self, pa_table)\r\n 310 row = self.python_arrow_extractor().extract_row(pa_table)\r\n 311 if self.decoded:\r\n--> 312 row = self.python_features_decoder.decode_row(row)\r\n 313 return row\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)\r\n 220 def decode_row(self, row: dict) -> dict:\r\n--> 221 return self.features.decode_example(row) if self.features else row\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)\r\n 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):\r\n 1635 \"\"\"Decode example with custom feature decoding.\r\n 1636 \r\n 1637 Args:\r\n (...)\r\n 1644 :obj:`dict[str, Any]`\r\n 1645 \"\"\"\r\n-> 1647 return {\r\n 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n 1649 if self._column_requires_decoding[column_name]\r\n 1650 else value\r\n 1651 for column_name, (feature, value) in zip_dict(\r\n 1652 {key: value for key, value in self.items() if key in example}, example\r\n 1653 )\r\n 1654 }\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1648, in (.0)\r\n 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):\r\n 1635 \"\"\"Decode example with custom feature decoding.\r\n 1636 \r\n 1637 Args:\r\n (...)\r\n 1644 :obj:`dict[str, Any]`\r\n 1645 \"\"\"\r\n 1647 return {\r\n-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n 1649 if self._column_requires_decoding[column_name]\r\n 1650 else value\r\n 1651 for column_name, (feature, value) in zip_dict(\r\n 1652 {key: value for key, value in self.items() if key in example}, example\r\n 1653 )\r\n 1654 }\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)\r\n 
1257 # Object with special decoding:\r\n 1258 elif isinstance(schema, (Audio, Image)):\r\n 1259 # we pass the token to read and decode files from private repositories in streaming mode\r\n-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None\r\n 1261 return obj\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)\r\n 154 array, sampling_rate = self._decode_non_mp3_file_like(file)\r\n 155 else:\r\n--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)\r\n 157 return {\"path\": path, \"array\": array, \"sampling_rate\": sampling_rate}\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)\r\n 254 use_auth_token = None\r\n 256 with xopen(path, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)\r\n 258 return array, sampling_rate\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/util\/decorators.py:88, in deprecate_positional_args.._inner_deprecate_positional_args..inner_f(*args, **kwargs)\r\n 86 extra_args = len(args) - len(all_args)\r\n 87 if extra_args <= 0:\r\n---> 88 return f(*args, **kwargs)\r\n 90 # extra_args > 0\r\n 91 args_msg = [\r\n 92 \"{}={}\".format(name, arg)\r\n 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])\r\n 94 ]\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/core\/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)\r\n 161 else:\r\n 162 # Otherwise try soundfile first, and then fall back if necessary\r\n 163 try:\r\n--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)\r\n 166 except RuntimeError as exc:\r\n 167 # If soundfile failed, try audioread instead\r\n 168 if isinstance(path, (str, pathlib.PurePath)):\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/core\/audio.py:195, in __soundfile_load(path, offset, duration, dtype)\r\n 192 context = path\r\n 193 else:\r\n 194 # Otherwise, create the soundfile object\r\n--> 195 context = sf.SoundFile(path)\r\n 197 with context as sf_desc:\r\n 198 sr_native = sf_desc.samplerate\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)\r\n 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)\r\n 1178 elif _has_virtual_io_attrs(file, mode_int):\r\n-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),\r\n 1180 mode_int, self._info, _ffi.NULL)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:1197, in 
SoundFile._init_virtual_io(self, file)\r\n 1194 def _init_virtual_io(self, file):\r\n 1195 \"\"\"Initialize callback functions for sf_open_virtual().\"\"\"\r\n 1196 @_ffi.callback(\"sf_vio_get_filelen\")\r\n-> 1197 def vio_get_filelen(user_data):\r\n 1198 curr = file.tell()\r\n 1199 file.seek(0, SEEK_END)\r\n\r\nMemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https:\/\/cffi.readthedocs.io\/en\/latest\/using.html#callbacks\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4964","id":1368617322,"node_id":"I_kwDODunzps5Rk3Fq","number":4964,"title":"Columns of arrays (2D+) are using unreasonably high memory","user":{"login":"vigsterkr","id":30353,"node_id":"MDQ6VXNlcjMwMzUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30353?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vigsterkr","html_url":"https:\/\/github.com\/vigsterkr","followers_url":"https:\/\/api.github.com\/users\/vigsterkr\/followers","following_url":"https:\/\/api.github.com\/users\/vigsterkr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vigsterkr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vigsterkr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vigsterkr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vigsterkr\/orgs","repos_url":"https:\/\/api.github.com\/users\/vigsterkr\/repos","events_url":"https:\/\/api.github.com\/users\/vigsterkr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vigsterkr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Note: I have tried the same code with `datasets` version 2.4.0; the outcome is the very same as described above."],"created_at":1662815242000,"updated_at":1662815297000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen trying to store `Array2D`, `Array3D`, etc. as column values in a dataset, accessing that column (or creating it, depending on how you create it; see the code below) will cause a more than 10-fold increase in memory usage.\r\n\r\n## Steps to 
reproduce the bug\r\n```python\r\nfrom datasets import Dataset, Features, Array2D, Array3D\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\n```\r\n\r\nThe code above will use about 10 GB of RAM while constructing the `dataset` object.\r\n\r\nThe code below will use roughly the same amount of memory (and time) when actually accessing the data of that column.\r\n```python\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data})\r\ndataset[column_name]\r\n```\r\n\r\n## Expected results\r\nSome memory overhead, but not at the current level, and certainly not the runtime overhead that is currently happening.\r\n\r\n## Actual results\r\nEnormous memory and runtime overhead.\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4963","id":1368201188,"node_id":"I_kwDODunzps5RjRfk","number":4963,"title":"Dataset without script does not support regular JSON data file","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @julien-c,\r\n\r\nOut of the box, we only support JSON Lines (NDJSON) data files, but your data file is a regular JSON file. 
The reason is that we use `pyarrow.json.read_json`, and this only supports line-delimited JSON. "],"created_at":1662749133000,"updated_at":1662971727000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/julien-c\/label-studio-my-dogs\n\n### Description\n\n[image]\r\n\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962","id":1368155365,"node_id":"PR_kwDODunzps4-sh-o","number":4962,"title":"Update setup.py","user":{"login":"DCNemesis","id":3616964,"node_id":"MDQ6VXNlcjM2MTY5NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616964?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DCNemesis","html_url":"https:\/\/github.com\/DCNemesis","followers_url":"https:\/\/api.github.com\/users\/DCNemesis\/followers","following_url":"https:\/\/api.github.com\/users\/DCNemesis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DCNemesis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DCNemesis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DCNemesis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DCNemesis\/orgs","repos_url":"https:\/\/api.github.com\/users\/DCNemesis\/repos","events_url":"https:\/\/api.github.com\/users\/DCNemesis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DCNemesis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/4961#issuecomment-1243376247","Once we know 2022.8.2 works, I'm closing this PR, as well as the corresponding issue."],"created_at":1662746276000,"updated_at":1662993184000,"closed_at":1662993184000,"author_association":"NONE","active_lock_reason":null,"body":"Exclude the broken version of fsspec. 
See the [related issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4961)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4962","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4961","id":1368124033,"node_id":"I_kwDODunzps5Ri-qB","number":4961,"title":"fsspec 2022.8.2 breaks xopen in streaming mode","user":{"login":"DCNemesis","id":3616964,"node_id":"MDQ6VXNlcjM2MTY5NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616964?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DCNemesis","html_url":"https:\/\/github.com\/DCNemesis","followers_url":"https:\/\/api.github.com\/users\/DCNemesis\/followers","following_url":"https:\/\/api.github.com\/users\/DCNemesis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DCNemesis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DCNemesis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DCNemesis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DCNemesis\/orgs","repos_url":"https:\/\/api.github.com\/users\/DCNemesis\/repos","events_url":"https:\/\/api.github.com\/users\/DCNemesis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DCNemesis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.","Opened [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/4962) to address this.","Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https:\/\/github.com\/huggingface\/transformers\/pull\/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n","@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. 
It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https:\/\/github.com\/googlecolab\/colabtools\/issues\/3055) on their end too. ","Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.","Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https:\/\/github.com\/googlecolab\/colabtools\/issues\/3055#issuecomment-1244019010"],"created_at":1662744415000,"updated_at":1663004750000,"closed_at":1662993125000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nimport datasets\r\n\r\ndata = datasets.load_dataset('MLCommons\/ml_spoken_words', 'id_wav', split='train', streaming=True)\r\n\r\n```\r\n\r\n## Expected results\r\nDataset should load as iterator.\r\n\r\n## Actual results\r\n```\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1737 # Return iterable dataset in case of streaming\r\n 1738 if streaming:\r\n-> 1739 return builder_instance.as_streaming_dataset(split=split)\r\n 1740 \r\n 1741 # Some datasets are already processed on the HF google storage\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py](https:\/\/localhost:8080\/#) in as_streaming_dataset(self, split, base_path)\r\n 1023 )\r\n 1024 self._check_manual_download(dl_manager)\r\n-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1026 # By default, return all splits\r\n 1027 if split is None:\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _split_generators(self, dl_manager)\r\n 182 name=datasets.Split.TRAIN,\r\n 183 gen_kwargs={\r\n--> 184 \"audio_archives\": [download_audio(split=\"train\", lang=lang) for lang in self.config.languages],\r\n 185 \"local_audio_archives_paths\": [download_extract_audio(split=\"train\", lang=lang) for lang in\r\n 186 self.config.languages] if not dl_manager.is_streaming else None,\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in (.0)\r\n 182 name=datasets.Split.TRAIN,\r\n 183 gen_kwargs={\r\n--> 184 \"audio_archives\": [download_audio(split=\"train\", lang=lang) for lang in self.config.languages],\r\n 185 \"local_audio_archives_paths\": [download_extract_audio(split=\"train\", lang=lang) for lang in\r\n 186 self.config.languages] if not dl_manager.is_streaming else None,\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _download_audio_archives(dl_manager, lang, format, split)\r\n 267 # for streaming case\r\n 268 def _download_audio_archives(dl_manager, lang, 
format, split):\r\n--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)\r\n 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _download_audio_archives_paths(dl_manager, lang, format, split)\r\n 251 n_files_path = dl_manager.download(n_files_url)\r\n 252 \r\n--> 253 with open(n_files_path, \"r\", encoding=\"utf-8\") as file:\r\n 254 n_files = int(file.read().strip()) # the file contains a number of archives\r\n 255 \r\n\r\nValueError: I\/O operation on closed file.\r\n```\r\n\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4960","id":1368035159,"node_id":"I_kwDODunzps5Rio9X","number":4960,"title":"BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'","user":{"login":"DSLituiev","id":8426290,"node_id":"MDQ6VXNlcjg0MjYyOTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8426290?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DSLituiev","html_url":"https:\/\/github.com\/DSLituiev","followers_url":"https:\/\/api.github.com\/users\/DSLituiev\/followers","following_url":"https:\/\/api.github.com\/users\/DSLituiev\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DSLituiev\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DSLituiev\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DSLituiev\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DSLituiev\/orgs","repos_url":"https:\/\/api.github.com\/users\/DSLituiev\/repos","events_url":"https:\/\/api.github.com\/users\/DSLituiev\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DSLituiev\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Following worked:\r\n\r\n```\r\ndata_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/\"\r\nbioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir, 
name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps\/bioasq_task_b\"`), before it would not even mention that it requires `name` argument","Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https:\/\/huggingface.co\/datasets\/aps\/bioasq_task_b\/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"],"created_at":1662739603000,"updated_at":1663059063000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to load a dataset from drive and running into an error. \r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndata_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/BioASQ-training9b\"\r\nbioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir)\r\n```\r\n\r\n## Actual results\r\n\r\n`AttributeError: 'BuilderConfig' object has no attribute 'schema'`\r\n\r\n
\r\n\r\n```\r\nUsing custom data configuration default-a1ca3e05be5abf2f\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nInput In [8], in ()\r\n 1 data_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/BioASQ-training9b\"\r\n----> 2 bioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir)\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1720 ignore_verifications = ignore_verifications or save_infos\r\n 1722 # Create a dataset builder\r\n-> 1723 builder_instance = load_dataset_builder(\r\n 1724 path=path,\r\n 1725 name=name,\r\n 1726 data_dir=data_dir,\r\n 1727 data_files=data_files,\r\n 1728 cache_dir=cache_dir,\r\n 1729 features=features,\r\n 1730 download_config=download_config,\r\n 1731 download_mode=download_mode,\r\n 1732 revision=revision,\r\n 1733 use_auth_token=use_auth_token,\r\n 1734 **config_kwargs,\r\n 1735 )\r\n 1737 # Return iterable dataset in case of streaming\r\n 1738 if streaming:\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1523 raise ValueError(error_msg)\r\n 1525 # Instantiate the dataset builder\r\n-> 1526 builder_instance: DatasetBuilder = builder_cls(\r\n 1527 cache_dir=cache_dir,\r\n 1528 config_name=config_name,\r\n 1529 data_dir=data_dir,\r\n 1530 data_files=data_files,\r\n 1531 hash=hash,\r\n 1532 features=features,\r\n 1533 use_auth_token=use_auth_token,\r\n 1534 **builder_kwargs,\r\n 1535 **config_kwargs,\r\n 1536 )\r\n 1538 return builder_instance\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)\r\n 1153 def __init__(self, *args, writer_batch_size=None, **kwargs):\r\n-> 1154 super().__init__(*args, **kwargs)\r\n 1155 # Batch size used by the ArrowWriter\r\n 1156 # It defines the number of samples that are kept in memory before writing them\r\n 1157 # and also the length of the arrow chunks\r\n 1158 # None means that the ArrowWriter will use its default value\r\n 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)\r\n 305 if info is None:\r\n 306 info = self.get_exported_dataset_info()\r\n--> 307 info.update(self._info())\r\n 308 info.builder_name = self.name\r\n 309 info.config_name = self.config.name\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/aps--bioasq_task_b\/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e\/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)\r\n 474 def _info(self):\r\n 475 \r\n 476 # BioASQ Task B source schema\r\n--> 477 if self.config.schema == \"source\":\r\n 478 features = datasets.Features(\r\n 479 {\r\n 480 \"id\": datasets.Value(\"string\"),\r\n (...)\r\n 504 }\r\n 505 )\r\n 506 # 
simplified schema for QA tasks\r\n\r\nAttributeError: 'BuilderConfig' object has no attribute 'schema'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.10.4\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959","id":1367924429,"node_id":"PR_kwDODunzps4-rx6l","number":4959,"title":"Fix data URLs of compguesswhat dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662734170000,"updated_at":1662739294000,"closed_at":1662739144000,"author_association":"MEMBER","active_lock_reason":null,"body":"After we informed the `compguesswhat` dataset authors about an error with their data URLs, they updated them:\r\n- https:\/\/github.com\/CompGuessWhat\/compguesswhat.github.io\/issues\/1\r\n\r\nThis PR updates their data URLs in our loading script.\r\n\r\nRelated to:\r\n- 
#3191","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4959","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959.patch","merged_at":1662739144000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4958","id":1367695376,"node_id":"I_kwDODunzps5RhWAQ","number":4958,"title":"ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/2.4.0\/datasets\/jsonl\/jsonl.py","user":{"login":"hasakikiki","id":66322047,"node_id":"MDQ6VXNlcjY2MzIyMDQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66322047?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hasakikiki","html_url":"https:\/\/github.com\/hasakikiki","followers_url":"https:\/\/api.github.com\/users\/hasakikiki\/followers","following_url":"https:\/\/api.github.com\/users\/hasakikiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hasakikiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hasakikiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hasakikiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hasakikiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/hasakikiki\/repos","events_url":"https:\/\/api.github.com\/users\/hasakikiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hasakikiki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have solved this problem... The extension of the file should be `.json` not `.jsonl`"],"created_at":1662722995000,"updated_at":1662723524000,"closed_at":1662723524000,"author_association":"NONE","active_lock_reason":null,"body":"Hi,\r\nWhen I use load_dataset from local jsonl files, below error happens, and I type the link into the browser prompting me `404: Not Found`. I download the other `.py` files using the same method and it works. 
It seems that the server is missing the appropriate file, or it is a problem with the code version.\r\n\r\n```\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/2.3.0\/datasets\/jsonl\/jsonl.py (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: \/huggingface\/datasets\/2.3.0\/datasets\/jsonl\/jsonl.py (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 101] Network is unreachable'))\")))\r\n\r\n```\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957","id":1366532849,"node_id":"PR_kwDODunzps4-nGIk","number":4957,"title":"Add `Dataset.from_generator`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I restarted the builder PR job just in case","_The documentation is not available anymore as the PR was closed or merged._","CI is now green. https:\/\/github.com\/huggingface\/doc-builder\/pull\/296 explains why it failed."],"created_at":1662649705000,"updated_at":1663339595000,"closed_at":1663339458000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. 
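\r\n\r\nMinimal usage sketch (illustrative data):\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n    # Yield examples one at a time so the dataset never needs to fit in RAM\r\n    for i in range(3):\r\n        yield {\"id\": i, \"text\": f\"example {i}\"}\r\n\r\nds = Dataset.from_generator(gen)\r\nprint(ds[0])  # {'id': 0, 'text': 'example 0'}\r\n```\r\n\r\n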
The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.\r\n\r\nCloses https:\/\/github.com\/huggingface\/datasets\/issues\/4417","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4957","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957.patch","merged_at":1663339458000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956","id":1366475160,"node_id":"PR_kwDODunzps4-m5NU","number":4956,"title":"Fix TF tests for 2.10","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662647950000,"updated_at":1662650211000,"closed_at":1662650084000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fixes 
#4953","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4956","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956.patch","merged_at":1662650084000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4955","id":1366382314,"node_id":"I_kwDODunzps5RcVbq","number":4955,"title":"Raise a more precise error when the URL is unreachable in streaming mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662645157000,"updated_at":1662645216000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"See for example:\r\n\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3191\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3186\r\n\r\nIt would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. 
Currently:\r\n\r\n- https:\/\/huggingface.co\/datasets\/compguesswhat\r\n\r\n [screenshot of the generic error message]\r\n\r\n- https:\/\/huggingface.co\/datasets\/nli_tr\r\n\r\n [screenshot of the generic error message]\r\n\r\n\r\ncc @albertvillanova ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954","id":1366369682,"node_id":"PR_kwDODunzps4-mhl5","number":4954,"title":"Pin TensorFlow temporarily","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662644775000,"updated_at":1662646353000,"closed_at":1662646203000,"author_association":"MEMBER","active_lock_reason":null,"body":"Temporarily pin TensorFlow until a permanent solution is found.\r\n\r\nRelated to:\r\n- #4953","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954.patch","merged_at":1662646203000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4953","id":1366356514,"node_id":"I_kwDODunzps5RcPIi","number":4953,"title":"CI test of TensorFlow is failing","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662644369000,"updated_at":1662650085000,"closed_at":1662650085000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nThe following CI test fails: https:\/\/github.com\/huggingface\/datasets\/runs\/8246722693?check_suite_focus=true\r\n```\r\nFAILED tests\/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:\r\n```\r\n\r\nDetails:\r\n```\r\n_________________________ TempSeedTest.test_tensorflow _________________________\r\n[gw0] linux -- Python 3.7.13 \/opt\/hostedtoolcache\/Python\/3.7.13\/x64\/bin\/python\r\n\r\nself = \r\n\r\n @require_tf\r\n def test_tensorflow(self):\r\n import tensorflow as tf\r\n from tensorflow.keras import layers\r\n \r\n def gen_random_output():\r\n model = layers.Dense(2)\r\n x = tf.random.uniform((1, 3))\r\n return model(x).numpy()\r\n \r\n with temp_seed(42, set_tensorflow=True):\r\n out1 = gen_random_output()\r\n with temp_seed(42, set_tensorflow=True):\r\n out2 = gen_random_output()\r\n out3 = gen_random_output()\r\n \r\n> np.testing.assert_equal(out1, out2)\r\nE AssertionError: \r\nE Arrays are not equal\r\nE \r\nE Mismatched elements: 2 \/ 2 (100%)\r\nE Max absolute difference: 0.84619296\r\nE Max relative difference: 16.083529\r\nE x: array([[-0.793581, 0.333286]], dtype=float32)\r\nE y: array([[0.052612, 0.539708]], dtype=float32)\r\n\r\ntests\/test_py_utils.py:149: 
AssertionError\r\n```\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952","id":1366354604,"node_id":"PR_kwDODunzps4-meM0","number":4952,"title":"Add test-datasets CI job","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Closing this one since the dataset scripts will be removed in https:\/\/github.com\/huggingface\/datasets\/pull\/4974"],"created_at":1662644310000,"updated_at":1663334882000,"closed_at":1663334748000,"author_association":"MEMBER","active_lock_reason":null,"body":"To avoid having too many conflicts in the datasets and metrics dependencies I split the CI into test and test-catalog\r\n\r\ntest does the test of the core of the `datasets` lib, while test-catalog tests the datasets scripts and metrics scripts\r\n\r\nThis also makes `pip install -e .[dev]` much smaller for developers\r\n\r\nWDYT @albertvillanova ?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951","id":1365954814,"node_id":"PR_kwDODunzps4-lDqd","number":4951,"title":"Fix license information in qasc dataset card","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662631479000,"updated_at":1662648887000,"closed_at":1662648725000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:\r\n- https:\/\/github.com\/allenai\/qasc\/issues\/5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951.patch","merged_at":1662648725000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950","id":1365458633,"node_id":"PR_kwDODunzps4-jWZ1","number":4950,"title":"Update Enwik8 broken link and 
information","user":{"login":"mtanghu","id":54819091,"node_id":"MDQ6VXNlcjU0ODE5MDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54819091?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mtanghu","html_url":"https:\/\/github.com\/mtanghu","followers_url":"https:\/\/api.github.com\/users\/mtanghu\/followers","following_url":"https:\/\/api.github.com\/users\/mtanghu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mtanghu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mtanghu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mtanghu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mtanghu\/orgs","repos_url":"https:\/\/api.github.com\/users\/mtanghu\/repos","events_url":"https:\/\/api.github.com\/users\/mtanghu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mtanghu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662606900000,"updated_at":1662648810000,"closed_at":1662648660000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The current enwik8 dataset link give a 502 bad gateway error which can be view on https:\/\/huggingface.co\/datasets\/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950.patch","merged_at":1662648660000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949","id":1365251916,"node_id":"PR_kwDODunzps4-iqzI","number":4949,"title":"Update enwik8 fixing the broken 
link","user":{"login":"mtanghu","id":54819091,"node_id":"MDQ6VXNlcjU0ODE5MDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54819091?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mtanghu","html_url":"https:\/\/github.com\/mtanghu","followers_url":"https:\/\/api.github.com\/users\/mtanghu\/followers","following_url":"https:\/\/api.github.com\/users\/mtanghu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mtanghu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mtanghu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mtanghu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mtanghu\/orgs","repos_url":"https:\/\/api.github.com\/users\/mtanghu\/repos","events_url":"https:\/\/api.github.com\/users\/mtanghu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mtanghu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing pull request to following contributing guidelines of making a new branch and will make a new pull request"],"created_at":1662589034000,"updated_at":1662606844000,"closed_at":1662606844000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The current enwik8 dataset link give a 502 bad gateway error which can be view on https:\/\/huggingface.co\/datasets\/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948","id":1364973778,"node_id":"PR_kwDODunzps4-hwsl","number":4948,"title":"Fix minor typo in error message for missing 
imports","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662571251000,"updated_at":1662649171000,"closed_at":1662649035000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948.patch","merged_at":1662649035000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947","id":1364967957,"node_id":"PR_kwDODunzps4-hvbq","number":4947,"title":"Try to fix the Windows CI after TF update 
2.10","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4947). All of your documentation changes will be reflected on that endpoint."],"created_at":1662570889000,"updated_at":1662628390000,"closed_at":1662628390000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946","id":1364692069,"node_id":"PR_kwDODunzps4-g0Hz","number":4946,"title":"Introduce regex check when pushing as 
well","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Let me take over this PR if you don't mind"],"created_at":1662558358000,"updated_at":1663064341000,"closed_at":1663064194000,"author_association":"MEMBER","active_lock_reason":null,"body":"Closes https:\/\/github.com\/huggingface\/datasets\/issues\/4945 by adding a regex check when pushing to hub.\r\n\r\nLet me know if this is helpful and if it's the fix you would have in mind for the issue and I'm happy to contribute tests.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946.patch","merged_at":1663064194000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4945","id":1364691096,"node_id":"I_kwDODunzps5RV4iY","number":4945,"title":"Push to hub can push splits that do not respect the 
regex","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662558317000,"updated_at":1663064195000,"closed_at":1663064195000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nThe `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n>>> from datasets import Dataset, DatasetDict, load_dataset\r\n\r\n>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})\r\n>>> di = DatasetDict()\r\n>>> di['identifier-with-column'] = d\r\n\r\n>>> di.push_to_hub('open-source-metrics\/test')\r\nPushing split identifier-with-column to the Hub.\r\nPushing dataset shards to the dataset hub: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:04<00:00, 4.40s\/it]\r\n```\r\n\r\nLoading it afterwards:\r\n```python\r\n>>> load_dataset('open-source-metrics\/test')\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 610\/610 [00:00<00:00, 432kB\/s]\r\nUsing custom data configuration open-source-metrics--test-28b63ec7cde80488\r\nDownloading and preparing dataset None\/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to \/home\/lysandre\/.cache\/huggingface\/datasets\/open-source-metrics___parquet\/open-source-metrics--test-28b63ec7cde80488\/0.0.0\/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 0%| | 0\/1 [00:00\", line 1, in \r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 771, in _download_and_prepare\r\n split_generators = 
self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/packaged_modules\/parquet\/parquet.py\", line 48, in _split_generators\r\n splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n File \"\", line 5, in __init__\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/splits.py\", line 599, in __post_init__\r\n NamedSplit(self.name) # check that it's a valid split name\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/splits.py\", line 346, in __init__\r\n raise ValueError(f\"Split name should match '{_split_re}' but got '{split_name}'.\")\r\nValueError: Split name should match '^\\w+(\\.\\w+)*$' but got 'identifier-with-column'.\r\n```\r\n\r\n## Expected results\r\n\r\nI would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards.\r\n\r\n## Actual results\r\n\r\nSee above\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36\r\n- Python version: 3.10.6\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4944","id":1364313569,"node_id":"I_kwDODunzps5RUcXh","number":4944,"title":"larger dataset, larger GPU memory in the training phase? 
Is that correct?","user":{"login":"debby1103","id":38886373,"node_id":"MDQ6VXNlcjM4ODg2Mzcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38886373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/debby1103","html_url":"https:\/\/github.com\/debby1103","followers_url":"https:\/\/api.github.com\/users\/debby1103\/followers","following_url":"https:\/\/api.github.com\/users\/debby1103\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/debby1103\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/debby1103\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/debby1103\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/debby1103\/orgs","repos_url":"https:\/\/api.github.com\/users\/debby1103\/repos","events_url":"https:\/\/api.github.com\/users\/debby1103\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/debby1103\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["does the trainer save it in GPU? sooo curious... how to fix it","It's my bad. didn't limit the input length"],"created_at":1662540390000,"updated_at":1662554098000,"closed_at":1662554098000,"author_association":"NONE","active_lock_reason":null,"body":" from datasets import set_caching_enabled\r\n set_caching_enabled(False)\r\n for ds_name in [\"squad\",\"newsqa\",\"nqopen\",\"narrativeqa\"]:\r\n train_ds = load_from_disk(\"..\/..\/..\/dall\/downstream\/processedproqa\/{}-train.hf\".format(ds_name))\r\n\r\n break\r\n train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1\r\n\r\n\r\n trainer = QuestionAnsweringTrainer( #huggingface trainer\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_ds,\r\n eval_dataset= None,\r\n eval_examples=None,\r\n answer_column_name=answer_column,\r\n dataset_name=\"squad\",\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics if training_args.predict_with_generate else None,\r\n )\r\n\r\nwith operation 1, the GPU memory increases from 16G to 23G","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943","id":1363967650,"node_id":"PR_kwDODunzps4-eZd_","number":4943,"title":"Add splits to MBPP 
dataset","user":{"login":"cwarny","id":2788526,"node_id":"MDQ6VXNlcjI3ODg1MjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2788526?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cwarny","html_url":"https:\/\/github.com\/cwarny","followers_url":"https:\/\/api.github.com\/users\/cwarny\/followers","following_url":"https:\/\/api.github.com\/users\/cwarny\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cwarny\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cwarny\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cwarny\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cwarny\/orgs","repos_url":"https:\/\/api.github.com\/users\/cwarny\/repos","events_url":"https:\/\/api.github.com\/users\/cwarny\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cwarny\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: \/Users\/cwarny\/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 1.12s ==================================================================================================\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_mbpp \r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: \/Users\/cwarny\/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 0.35s ==================================================================================================\r\n\r\n```","_The documentation is not available anymore as the PR was closed or merged._","Hi @cwarny ! 
Thanks for adding the correct splits :)\r\n\r\nYou can fix the CI error by running `make style` - this should reformat the dataset script","done"],"created_at":1662513511000,"updated_at":1663072159000,"closed_at":1663072041000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR addresses https:\/\/github.com\/huggingface\/datasets\/issues\/4795","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4943","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943.patch","merged_at":1663072041000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4942","id":1363869421,"node_id":"I_kwDODunzps5RSv7t","number":4942,"title":"Trec Dataset has incorrect labels","user":{"login":"wmpauli","id":6539145,"node_id":"MDQ6VXNlcjY1MzkxNDU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6539145?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wmpauli","html_url":"https:\/\/github.com\/wmpauli","followers_url":"https:\/\/api.github.com\/users\/wmpauli\/followers","following_url":"https:\/\/api.github.com\/users\/wmpauli\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wmpauli\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wmpauli\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wmpauli\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wmpauli\/orgs","repos_url":"https:\/\/api.github.com\/users\/wmpauli\/repos","events_url":"https:\/\/api.github.com\/users\/wmpauli\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wmpauli\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."],"created_at":1662502420000,"updated_at":1662635523000,"closed_at":1662635523000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nBoth coarse and fine labels seem to be out of line.\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = \"trec\"\r\nraw_datasets = load_dataset(dataset)\r\ndf = pd.DataFrame(raw_datasets[\"test\"])\r\ndf.head()\r\n```\r\n\r\n## Expected results\r\ntext (string) | coarse_label (class label) | fine_label (class label)\r\n-- | -- | --\r\nHow far is it from Denver to Aspen ? | 5 \t(NUM) | 40 \t(NUM:dist)\r\nWhat county is Modesto , California in ? | 4 \t(LOC) | 32 \t(LOC:city)\r\nWho was Galileo ? | 3 \t(HUM) | 31 \t(HUM:desc)\r\nWhat is an atom ? | 2 \t(DESC) | 24 \t(DESC:def)\r\nWhen did Hawaii become a state ? 
| 5 \t(NUM) | 39 \t(NUM:date)\r\n\r\n## Actual results\r\n index | label-coarse |label-fine | text\r\n-- |-- | -- | --\r\n0 | 4 | 40 | How far is it from Denver to Aspen ?\r\n1 | 5 | 21 | What county is Modesto , California in ?\r\n2 | 3 | 12 | Who was Galileo ?\r\n3 | 0 | 7 | What is an atom ?\r\n4 | 4 | 8 | When did Hawaii become a state ?\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27\r\n- Python version: 3.9.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941","id":1363622861,"node_id":"PR_kwDODunzps4-dQ9F","number":4941,"title":"Add Papers with Code ID to scifact dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662486397000,"updated_at":1662488897000,"closed_at":1662488761000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- adds Papers with Code ID\r\n- forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: 
https:\/\/github.com\/huggingface\/datasets\/runs\/8200223631?check_suite_focus=true","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4941","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941.patch","merged_at":1662488761000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940","id":1363513058,"node_id":"PR_kwDODunzps4-c6WY","number":4940,"title":"Fix multilinguality tag and missing sections in xquad_r dataset card","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662480335000,"updated_at":1662977467000,"closed_at":1662977328000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes issue reported on the Hub:\r\n- Label as multilingual: 
https:\/\/huggingface.co\/datasets\/xquad_r\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4940","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940.patch","merged_at":1662977328000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939","id":1363468679,"node_id":"PR_kwDODunzps4-cw4A","number":4939,"title":"Fix NonMatchingChecksumError in adv_glue dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662478276000,"updated_at":1662486130000,"closed_at":1662485956000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix issue reported on the Hub: 
https:\/\/huggingface.co\/datasets\/adv_glue\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4939","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939.patch","merged_at":1662485956000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938","id":1363429228,"node_id":"PR_kwDODunzps4-coaB","number":4938,"title":"Remove main branch rename notice","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662476585000,"updated_at":1662482771000,"closed_at":1662482633000,"author_association":"MEMBER","active_lock_reason":null,"body":"We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)\r\n\r\nI also unpinned the github issue about the branch 
renaming","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938.patch","merged_at":1662482633000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937","id":1363426946,"node_id":"PR_kwDODunzps4-cn6W","number":4937,"title":"Remove deprecated identical_ok","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662476484000,"updated_at":1662503049000,"closed_at":1662502917000,"author_association":"MEMBER","active_lock_reason":null,"body":"`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. 
It currently has no effect even when passed:\r\n\r\n```python\r\nArgs:\r\n...\r\n identical_ok (`bool`, *optional*, defaults to `True`):\r\n Deprecated: will be removed in 0.11.0.\r\n Changing this value has no effect.\r\n...\r\n```\r\n\r\nThere was only one occurrence of `identical_ok=False`, but it's maybe not worth adding a check to verify if the files were the same.\r\n\r\ncc @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4937","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937.patch","merged_at":1662502917000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4936","id":1363274907,"node_id":"I_kwDODunzps5RQeyb","number":4936,"title":"vivos (Vietnamese speech corpus) dataset not accessible","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If you need an example of a small audio dataset, I created a speech dataset a few hours ago with only 300MB of compressed audio files: https:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia. It also works with streaming (@albertvillanova helped me add this functionality) :-)","@cahya-wirawan omg this is awesome!! thank you! 
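For reference, streaming such a dataset is a one-liner; this is a sketch mirroring the snippet suggested later in this thread:

```python
from datasets import load_dataset

# stream the corpus lazily instead of downloading the full archives up front
ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train", streaming=True)
print(next(iter(ds)))  # first example, fetched on the fly
```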
","We have contacted the authors to ask them."],"created_at":1662470275000,"updated_at":1662966860000,"closed_at":1662966860000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nVIVOS data is not accessible anymore, neither of these links work (at least from France):\r\n* https:\/\/ailab.hcmus.edu.vn\/assets\/vivos.tar.gz (data)\r\n* https:\/\/ailab.hcmus.edu.vn\/vivos (dataset page) \r\n\r\nTherefore `load_dataset` doesn't work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nds = load_dataset(\"vivos\")\r\n```\r\n\r\n## Expected results\r\ndataset loaded\r\n\r\n## Actual results\r\n```\r\nConnectionError: Couldn't reach https:\/\/ailab.hcmus.edu.vn\/assets\/vivos.tar.gz (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: \/assets\/vivos.tar.gz (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -5] No address associated with hostname'))\")))\r\n```\r\n\r\nWill try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https:\/\/github.com\/huggingface\/datasets\/pull\/4872), because it's small and straightforward and uses tar archives. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4935","id":1363226736,"node_id":"I_kwDODunzps5RQTBw","number":4935,"title":"Dataset Viewer issue for 
ubuntu_dialogs_corpus","user":{"login":"CibinQuadance","id":87330568,"node_id":"MDQ6VXNlcjg3MzMwNTY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/87330568?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/CibinQuadance","html_url":"https:\/\/github.com\/CibinQuadance","followers_url":"https:\/\/api.github.com\/users\/CibinQuadance\/followers","following_url":"https:\/\/api.github.com\/users\/CibinQuadance\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/CibinQuadance\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/CibinQuadance\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/CibinQuadance\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/CibinQuadance\/orgs","repos_url":"https:\/\/api.github.com\/users\/CibinQuadance\/repos","events_url":"https:\/\/api.github.com\/users\/CibinQuadance\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/CibinQuadance\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["The dataset maintainers (https:\/\/huggingface.co\/datasets\/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/loading#manual-download), and the dataset viewer 
respects this.\r\nWe will try to improve the error display though. Thanks for reporting."],"created_at":1662468110000,"updated_at":1662468685000,"closed_at":1662468685000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4934","id":1363034253,"node_id":"I_kwDODunzps5RPkCN","number":4934,"title":"Dataset Viewer issue for indonesian-nlp\/librivox-indonesia","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova
","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["The error is not related to the dataset viewer. I'm having a look...","Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp\/librivox-indonesia\")\r\nNo config specified, defaulting to: librivox-indonesia\/all\r\nReusing dataset librivox-indonesia (\/root\/.cache\/huggingface\/datasets\/indonesian-nlp___librivox-indonesia\/all\/1.0.0\/9a934a42bfb53dc103003d191618443b8a786bea2bd7bb0bc2d9454b8494521e)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 500.87it\/s]\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['path', 'language', 'reader', 'sentence', 'audio'],\r\n num_rows: 7815\r\n })\r\n})\r\n>>> ds[\"train\"][0]\r\n{'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3\/librivox-indonesia\/sundanese\/universal-declaration-of-human-rights\/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': {'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3\/librivox-indonesia\/sundanese\/universal-declaration-of-human-rights\/human_rights_un_sun_brc_0000.mp3', 'array': array([ 0. , 0. , 0. 
, ..., -0.02419001,\r\n -0.01957154, -0.01502833], dtype=float32), 'sampling_rate': 44100}}\r\n\r\n```\r\nIt would be nice if I could also see it using the dataset viewer.","Yes, the issue arises when streaming (that is used by the viewer): your script does not support streaming, and to support it in this case there are some subtleties that we are explaining better in our docs in a work-in-progress pull request:\r\n- #4872\r\n\r\nJust note that when streaming, `local_extracted_archive` is None, and this code line generates the error:\r\n```python\r\nfilepath = local_extracted_archive + \"\/librivox-indonesia\/audio_transcription.csv\"\r\n```\r\n\r\nFor a proper implementation, you could have a look at: https:\/\/huggingface.co\/datasets\/common_voice\/blob\/main\/common_voice.py\r\n\r\nYou can test your script locally by passing `streaming=True` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"indonesian-nlp\/librivox-indonesia\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\n```","Great, I will have a look and update the script. Thanks.","Hi @albertvillanova , I just added the streaming functionality and it worked on the first try :-) Thanks a lot!","Awesome!!! :hugs: "],"created_at":1662458603000,"updated_at":1662468400000,"closed_at":1662468400000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia\n\n### Description\n\nI created a new speech dataset https:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia, but the dataset preview doesn't work, with the following error message:\r\n```\r\nServer error\r\nStatus code: 400\r\nException: TypeError\r\nMessage: unsupported operand type(s) for +: 'NoneType' and 'str'\r\n```\r\nPlease help, I am not sure what the problem here is. Thanks a lot.\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4933","id":1363013023,"node_id":"I_kwDODunzps5RPe2f","number":4933,"title":"Dataset\/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) 
being non-iterable.","user":{"login":"tianjianjiang","id":4812544,"node_id":"MDQ6VXNlcjQ4MTI1NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4812544?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tianjianjiang","html_url":"https:\/\/github.com\/tianjianjiang","followers_url":"https:\/\/api.github.com\/users\/tianjianjiang\/followers","following_url":"https:\/\/api.github.com\/users\/tianjianjiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tianjianjiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tianjianjiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tianjianjiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tianjianjiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/tianjianjiang\/repos","events_url":"https:\/\/api.github.com\/users\/tianjianjiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tianjianjiang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda batch: [timestamp[:4] == \"2020\" for timestamp in batch[\"timestamp\"]],\r\n batched=True,\r\n)\r\n```\r\n\r\nLet me know if it helps !","> Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n> [...]\r\n> Let me know if it helps !\r\n\r\nHi @lhoestq,\r\n\r\nAh, my bad, I totally forgot that part...\r\nSorry for the trouble and thank you for the kind help!"],"created_at":1662457668000,"updated_at":1662464667000,"closed_at":1662464667000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n`Dataset\/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.\r\n\r\n## Steps to reproduce the bug\r\n(In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... 
perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda example: example[\"timestamp\"][:4] == \"2020\",\r\n batched=True,\r\n)\r\n```\r\n\r\n## Expected results\r\nNo error\r\n\r\n## Actual results\r\n```python\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2779, in _map_single\r\n offset=offset,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 4946, in get_indices_from_mask_function\r\n indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]\r\nTypeError: zip argument #2 must support iteration\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTypeError Traceback (most recent call last)\r\n\/tmp\/ipykernel_51348\/2345782281.py in \r\n 7 batched=True,\r\n 8 # batch_size=10_000,\r\n----> 9 num_proc=111,\r\n 10 )\r\n 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)\r\n 878 desc=desc,\r\n 879 )\r\n--> 880 for k, dataset in self.items()\r\n 881 }\r\n 882 )\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in (.0)\r\n 878 desc=desc,\r\n 879 )\r\n--> 880 for k, dataset in self.items()\r\n 881 }\r\n 882 )\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 522 }\r\n 523 # apply actual function\r\n--> 524 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 525 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 526 # re-apply format to the output\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 478 # Call actual function\r\n 479 \r\n--> 480 out = func(self, *args, **kwargs)\r\n 481 \r\n 482 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, 
writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 2920 new_fingerprint=new_fingerprint,\r\n 2921 input_columns=input_columns,\r\n-> 2922 desc=desc,\r\n 2923 )\r\n 2924 new_dataset = copy.deepcopy(self)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 2498 \r\n 2499 for index, async_result in results.items():\r\n-> 2500 transformed_shards[index] = async_result.get()\r\n 2501 \r\n 2502 assert (\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/multiprocess\/pool.py in get(self, timeout)\r\n 655 return self._value\r\n 656 else:\r\n--> 657 raise self._value\r\n 658 \r\n 659 def _set(self, i, obj):\r\n\r\nTypeError: zip argument #2 must support iteration\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12\r\n- Python version: 3.7.12\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n\r\n(I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4932","id":1362522423,"node_id":"I_kwDODunzps5RNnE3","number":4932,"title":"Dataset Viewer issue for bigscience-biomedical\/biosses","user":{"login":"galtay","id":663051,"node_id":"MDQ6VXNlcjY2MzA1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/663051?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/galtay","html_url":"https:\/\/github.com\/galtay","followers_url":"https:\/\/api.github.com\/users\/galtay\/followers","following_url":"https:\/\/api.github.com\/users\/galtay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/galtay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/galtay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/galtay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/galtay\/orgs","repos_url":"https:\/\/api.github.com\/users\/galtay\/repos","events_url":"https:\/\/api.github.com\/users\/galtay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/galtay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Possibly not related to the dataset viewer in itself. 
cc @huggingface\/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\/blob\/main\/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical\/biosses')\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8.00k\/8.00k [00:00<00:00, 7.47MB\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical\/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```","Opened a PR here to (hopefully) fix the dataset script: https:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\/discussions\/1\/files","thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b\/c it seems to be working here \r\n\r\nhttps:\/\/huggingface.co\/datasets\/bigscience-biomedical\/scitail\/blob\/main\/scitail.py#L31\r\n\r\nI'll try this PR a see what happens. 
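A sketch of the relative-import pattern being discussed (the imported symbol is illustrative, not the exact contents of the biosses script):

```python
# biosses.py (illustrative excerpt)
# A plain `import bigbiohub` fails when the script is loaded as a dynamically
# downloaded module; importing the helper relative to the script works instead:
from .bigbiohub import BigBioConfig  # hypothetical symbol from the local helper module
```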
","closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) "],"created_at":1662417632000,"updated_at":1662474296000,"closed_at":1662474296000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\n\n### Description\n\nI've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (show where the dataset preview used to be) . \r\n```\r\nStatus code: 400\r\nException: ModuleNotFoundError\r\nMessage: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'\r\n```\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931","id":1362298764,"node_id":"PR_kwDODunzps4-Y3L6","number":4931,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662397384000,"updated_at":1662442880000,"closed_at":1662442769000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908\r\n- #4921","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931.patch","merged_at":1662442769000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930","id":1362193587,"node_id":"PR_kwDODunzps4-Yflc","number":4930,"title":"Add cc-by-nc-2.0 to list of licenses","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","this list needs to be kept in sync with the ones in moon-landing and hub-docs :)","@julien-c don't you think it might be better to have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nAlso note that the licenses we are adding were all already present in our previous `licenses.json` file: are we regenerating it, step by step? 
Why don't we use a file with ALL the licenses we previously had in the list?\r\n\r\nLicenses added:\r\n- #4887\r\n- #4930 \r\n\r\nPrevious `licenses.json` file:\r\n- https:\/\/github.com\/huggingface\/datasets\/blob\/b7612754928e0fd43b9e3c3becb906ec280ff5d4\/src\/datasets\/utils\/resources\/licenses.json\r\n- removed in this commit: https:\/\/github.com\/huggingface\/datasets\/pull\/4613\/commits\/9f7725412dac1089b3e057f9e3fcf39cc222bc26\r\n\r\nLet me know what you think and I can take care of this.","> Let me know what you think and I can take care of this.\r\n\r\nWhat I think is that we shouldn't add licenses that are just used in a couple of datasets, and just use `license_details` for this.\r\n\r\n> don't you think it might be better to a have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nYes, in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? \r\n","Feel free to delete the license list in `datasets` @albertvillanova ;)\r\n\r\nAlso FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)"],"created_at":1662392252000,"updated_at":1662482612000,"closed_at":1662397264000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https:\/\/github.com\/allenai\/scifact\/blob\/master\/LICENSE.md","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4930","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930.patch","merged_at":1662397264000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929","id":1361508366,"node_id":"PR_kwDODunzps4-WK2w","number":4929,"title":"Fixes a typo in loading 
documentation","user":{"login":"sighingnow","id":7144772,"node_id":"MDQ6VXNlcjcxNDQ3NzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7144772?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sighingnow","html_url":"https:\/\/github.com\/sighingnow","followers_url":"https:\/\/api.github.com\/users\/sighingnow\/followers","following_url":"https:\/\/api.github.com\/users\/sighingnow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sighingnow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sighingnow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sighingnow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sighingnow\/orgs","repos_url":"https:\/\/api.github.com\/users\/sighingnow\/repos","events_url":"https:\/\/api.github.com\/users\/sighingnow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sighingnow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662362334000,"updated_at":1662430263000,"closed_at":1662383198000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As show in the [documentation page](https:\/\/huggingface.co\/docs\/datasets\/loading) here the `\"tr\"in` should be `\"train`.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/7144772\/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929.patch","merged_at":1662383198000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928","id":1360941172,"node_id":"PR_kwDODunzps4-Ubi4","number":4928,"title":"Add ability to read-write to SQL 
databases.","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4928). All of your documentation changes will be reflected on that endpoint.","Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.","wow this is super cool!","@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r\n\r\n```\r\nif not self._is_valid_token(token):\r\n> raise ValueError(\"Invalid token passed!\")\r\nE ValueError: Invalid token passed!\r\n```","I just relaunched the tests, it should be fixed now","Thanks a lot for working on this!\r\n\r\nI have some concerns with the current design:\r\n* Besides SQLite, the loader should also work with the other engines supported by SQLAlchemy. (A better name for it in the current state would be `sqlite` :))\r\n* It should support arbitrary queries\/table names - only the latter currently works.\r\n* Exposing this loader as a packaged builder (`load_dataset(\"sql\", ...)`) is not a good idea for the following reasons:\r\n * Considering the scenario where a table with the same name is present in multiple files is very unlikely, the data files resolution is not needed here. And if we remove that, what the name of the default split should be? \"train\"?\r\n * `load_dataset(\"sql\", ...)` also implies that streaming should work, but that's not the case. And I don't think we can change that, considering how hard it is to make SQLite files streamable.\r\n\r\nAll this makes me think we shouldn't expose this builder as a packaged module and, instead, limit the API to `Dataset.from_sql`\/`Dataset.to_sql` (with the signatures matching the ones in pandas as much as possible; regarding this, note that SQLAlchemy connections are not hashable\/picklable, which is required for caching, but I think it's OK only to allow URI strings as connections to bypass that (Dask has the same limitation).\r\n\r\nWDYT?","Hi @mariosasko thank you for your review.\r\n\r\nI agree that `load_dataset('sql',...)` is a bit weird and I would be happy to remove it. To be honest, I only added it when I saw that it was the preferred way in `loading.mdx`. \r\n\r\nI agree that the `SELECT` should be a parameters as well. 
I'll add it.\r\n\r\nSo far, only `Dataset.to_sql` explicitly supports any SQLAlchemy Connection; I'm pretty sure that `Dataset.from_sql` would work with a Connection as well, but it would break the typing from the parent class, which is `path_or_paths: NestedDataStructureLike[PathLike]`. I would prefer not to break this API contract.\r\n\r\n\r\nI will have time to work on this over the weekend. Please let me know what you think of the following:\r\n* Remove `load_dataset('sql', ...)` and edit the documentation to use `to_sql, from_sql`.\r\n* Tentatively make `Dataset.from_sql` typing work with SQLAlchemy Connections.\r\n* Add support for custom queries (Default would be `SELECT * FROM {table_name}`).\r\n\r\nCheers!","Perhaps after we merge https:\/\/github.com\/huggingface\/datasets\/pull\/4957 (**Done!**), you can subclass `AbstractDatasetInputStream` instead of `AbstractDatasetReader` to not break the contract with the connection object. Also, let's avoid having the default value for the query\/table (you can set it to `None` in the builder and raise an error in the builder config's `__post_init__` if it's not provided). Other than that, sounds good!"],"created_at":1662232148000,"updated_at":1663512582000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixes #3094 \r\n\r\nAdd ability to read\/write to SQLite files and also read from any SQL database supported by SQLAlchemy.\r\n\r\nI didn't add SQLAlchemy as a dependency, as it is fairly big and it remains optional. \r\n\r\nI also recorded a Loom to showcase the feature.\r\n\r\nhttps:\/\/www.loom.com\/share\/f0e602c2de8a46f58bca4b43333d541f","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/reactions","total_count":8,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":4,"rocket":0,"eyes":2},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927","id":1360428139,"node_id":"PR_kwDODunzps4-S0we","number":4927,"title":"fix BLEU metric 
card","user":{"login":"antoniolanza1996","id":40452030,"node_id":"MDQ6VXNlcjQwNDUyMDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40452030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/antoniolanza1996","html_url":"https:\/\/github.com\/antoniolanza1996","followers_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/followers","following_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/orgs","repos_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/repos","events_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662138056000,"updated_at":1662740895000,"closed_at":1662740895000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I've fixed some typos in BLEU metric card.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4927","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927.patch","merged_at":1662740895000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926","id":1360384484,"node_id":"PR_kwDODunzps4-Srm1","number":4926,"title":"Dataset infos in 
yaml","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4926). All of your documentation changes will be reflected on that endpoint.","Alright this is ready for review :)\r\nI mostly would like your opinion on the YAML structure and what we can do in the docs (IMO we can add the docs about those fields in the Hub docs). Other than that let me know if the changes in info.py and features.py look good to you"],"created_at":1662135005000,"updated_at":1663004114000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. 
YAML is readable and easy to edit, and the YAML metadata of the readme already contains dataset metadata, so we would have everything in one place.\r\n\r\nTo be more specific, I moved these fields from DatasetInfo to the YAML:\r\n- config_name (if there are several configs)\r\n- download_size\r\n- dataset_size\r\n- features\r\n- splits\r\n\r\nHere is what I ended up with for `squad`:\r\n```yaml\r\ndataset_infos:\r\n  features:\r\n  - name: id\r\n    dtype: string\r\n  - name: title\r\n    dtype: string\r\n  - name: context\r\n    dtype: string\r\n  - name: question\r\n    dtype: string\r\n  - name: answers\r\n    sequence:\r\n    - name: text\r\n      dtype: string\r\n    - name: answer_start\r\n      dtype: int32\r\n  splits:\r\n  - name: train\r\n    num_bytes: 79346360\r\n    num_examples: 87599\r\n  - name: validation\r\n    num_bytes: 10473040\r\n    num_examples: 10570\r\n  download_size: 35142551\r\n  dataset_size: 89819400\r\n```\r\n\r\nand it can be a list if there are several configs\r\n\r\nI already did the change for `conll2000` and `crime_and_punish` as an example.\r\n\r\n## Implementation details\r\n\r\n### Load\/Read\r\n\r\nThis is done via `DatasetInfoDict.write_to_directory\/from_directory`\r\n\r\nI had to implement custom YAML export logic for `SplitDict`, `Version` and `Features`.\r\nThe first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (or the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of YAML data.\r\n\r\n### Other changes\r\n\r\nI had to update the DatasetModule factories to also download the README.md alongside the dataset scripts\/data files, and not just the dataset_infos.json\r\n\r\n## YAML validation\r\n\r\nI removed the old validation code that was in metadata.py; now we can just use the Hub YAML validation\r\n\r\n## Datasets-cli\r\n\r\nThe `datasets-cli test --save_infos` command now creates a README.md file with the dataset_infos in it, instead of a dataset_infos.json file\r\n\r\n## Backward compatibility\r\n\r\n`dataset_infos.json` files are still supported and loaded if they exist, for full backward compatibility.\r\nThough I removed the unnecessary keys when the value is the default (like all the `id: null` from the Value feature types) to make them easier to read.\r\n\r\n## TODO\r\n\r\n- [x] add comments\r\n- [x] tests\r\n- [ ] document the new YAML fields (to be done in the Hub docs)\r\n- [x] try to reload the new dataset_infos.json file content with an old version of `datasets`\r\n\r\n## EDITS\r\n\r\n- removed \"config_name\" when there's only one config\r\n- removed \"version\" for now (?), because it's not useful in general\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4876","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4926","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926.patch","merged_at":null},"is_pull_request":true} 
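For context on the PR above: a minimal sketch of how the `dataset_infos` YAML block shown in its description could be read back out of a README front matter. The helper name `read_dataset_infos_yaml` is hypothetical; this is not the actual implementation in `info.py`/`features.py`, just an illustration of the "everything in the README YAML" idea.

```python
# Minimal sketch (not the actual `datasets` implementation): pull the
# `dataset_infos` mapping out of the YAML front matter of a README.md.
# Assumes the metadata sits between the leading pair of "---" markers.
import yaml  # pyyaml


def read_dataset_infos_yaml(readme_path: str) -> dict:
    """Return the `dataset_infos` block of a README's YAML front matter."""
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        return {}
    # split("---", 2) -> ["", "<front matter>", "<rest of the README>"]
    front_matter = text.split("---", 2)[1]
    metadata = yaml.safe_load(front_matter) or {}
    return metadata.get("dataset_infos", {})


# For the `squad` example above, this would print 35142551 and 89819400.
infos = read_dataset_infos_yaml("README.md")
print(infos.get("download_size"), infos.get("dataset_size"))
```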
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925","id":1360007616,"node_id":"PR_kwDODunzps4-RbP5","number":4925,"title":"Add note about loading image \/ audio files to docs","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4925). All of your documentation changes will be reflected on that endpoint.","Thanks for the feedback @polinaeterna ! I've reworded the docs a bit to integrate your comments and this should be ready for another review :)","> I've just realized that there is another PR about audio documentation open: #4872\r\n> and there the more detailed description on how to use `audiofolder` is moved to another section (\"Create an audio dataset\")\r\n\r\nAh yes, let's add a comment to #4872 - that will be simpler than the alternatives :)","@polinaeterna @lhoestq What do you think about adding support for the metadata format from Kaggle (one metadata file for each split with the name equal to the split name) to ImageFolder\/AudioFolder? I also think we can relax some requirements a bit by:\r\n* allowing `filename` as the name of the main metadata column (currently, only `file_path` is allowed)\r\n* not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using `_check_if_features_can_be_aligned` + `_align_features`. The rationale is that train\/val metadata often has extra columns compared to test metadata.\r\n\r\nThese changes would allow us to load the Kaggle dataset linked in the forum thread without any \"interventions\".\r\n\r\nPS: this metadata format for ImageFolder was also proposed by @abhishekkrthakur initially.\r\n","Can you give more details about the Kaggle format ? I'm down to discuss it in a separate issue if you don't mind.\r\n\r\n> allowing filename as the name of the main metadata column (currently, only file_path is allowed)\r\n\r\n`filename` refers to the name of the file, so there's no logic about relative path or directories. 
If I recall correctly this is what we're doing right now so why not\r\n\r\n> not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using _check_if_features_can_be_aligned + _align_features. The rationale is that train\/val metadata often has extra columns compared to test metadata.\r\n\r\n+1 and we can set to None the missing features","I'm not sure if this is worth opening a new issue :).\r\n\r\nWhat I mean by the Kaggle format is the structure like this one (the name of a metadata file is equal to the directory it \"references\"):\r\n```\r\n- train\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ...\r\n- test\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ... \r\n- train.csv\r\n- test.csv\r\n```\r\n\r\n\r\n","Sounds nice !"],"created_at":1662114718000,"updated_at":1663345231000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds a small note about how to load image \/ audio datasets that have multiple splits in their dataset structure.\r\n\r\nRelated forum thread: https:\/\/discuss.huggingface.co\/t\/loading-train-and-test-splits-with-audiofolder\/22447\r\n\r\ncc @NielsRogge ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4924","id":1358611513,"node_id":"I_kwDODunzps5Q-sQ5","number":4924,"title":"Concatenate_datasets loads everything into 
RAM","user":{"login":"louisdeneve","id":39416047,"node_id":"MDQ6VXNlcjM5NDE2MDQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39416047?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/louisdeneve","html_url":"https:\/\/github.com\/louisdeneve","followers_url":"https:\/\/api.github.com\/users\/louisdeneve\/followers","following_url":"https:\/\/api.github.com\/users\/louisdeneve\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/louisdeneve\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/louisdeneve\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/louisdeneve\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/louisdeneve\/orgs","repos_url":"https:\/\/api.github.com\/users\/louisdeneve\/repos","events_url":"https:\/\/api.github.com\/users\/louisdeneve\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/louisdeneve\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662027917000,"updated_at":1662033054000,"closed_at":1662033054000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen loading the datasets seperately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ngcs = gcsfs.GCSFileSystem(project='project')\r\ndatasets = [load_from_disk(f'path\/to\/slice\/of\/data\/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]\r\n\r\ndataset = concatenate_datasets(datasets)\r\n```\r\n\r\n## Expected results\r\nA concatenated dataset which is stored on my disk.\r\n\r\n## Actual results\r\nConcatenated dataset gets loaded into RAM and overflows it which gets the process killed.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.1\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923","id":1357735287,"node_id":"PR_kwDODunzps4-Jv7C","number":4923,"title":"WIP: decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround 
","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4923). All of your documentation changes will be reflected on that endpoint.","Thanks ! Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa","@lhoestq \r\n\r\n> Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa\r\n\r\nI'm not sure here, because from the one hand, if `torchaudio` works - it works 60 times faster then `librosa`.\r\nBut from the other hand, we will get inconsistent behavior (=different results of decoding) for users of `torchaudio>=0.12`. \r\nI'd better go for using `librosa` only to avoid inconsistency then. wdyt?","It seems a bit too constraining to not allow users who have a working torchaudio 0.12 setup to not use it. \r\n\r\nIf the issue is about avoiding silent errors if the decoding changes, maybe we can log which back-end is used ? It can even be a warning with performance suggestions (\"you're using librosa but torchaudio 0.xx is recommended\").\r\n\r\nNote that users can still have a requirements.txt or whatever in their projects if they really want full reproducibility (and it's the bare minimum imo)\r\n\r\nThere are multiple possible back-ends so it's maybe not reasonable to only allow one back-end, especially since each back-end has installation constrains and there's no \"best\" back-end."],"created_at":1661972279000,"updated_at":1663267585000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"`torchaudio>0.12` fails with decoding mp3 files if `ffmpeg<4`. currently we ask users to downgrade torchaudio, but sometimes it's not possible as torchaudio version is binded to torch version. as a temporary workaround we can decode mp3 with librosa (though it 60 times slower, at least it works)\r\n\r\nanother option would be to ask users to install the required version of `ffmpeg`, but is non-trivial on colab: it's not in apt packages in ubuntu 18 and `conda` is not preinstalled (with `conda` it would be easily installable)\r\n\r\n- [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster\r\n- [ ] tests \r\n- [ ] ... 
\r\n\r\nsee https:\/\/github.com\/huggingface\/datasets\/issues\/4776 and https:\/\/github.com\/huggingface\/datasets\/issues\/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4923","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4922","id":1357684018,"node_id":"I_kwDODunzps5Q7J0y","number":4922,"title":"I\/O error on Google Colab in streaming mode","user":{"login":"jotterbach","id":5595043,"node_id":"MDQ6VXNlcjU1OTUwNDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5595043?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jotterbach","html_url":"https:\/\/github.com\/jotterbach","followers_url":"https:\/\/api.github.com\/users\/jotterbach\/followers","following_url":"https:\/\/api.github.com\/users\/jotterbach\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jotterbach\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jotterbach\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jotterbach\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jotterbach\/orgs","repos_url":"https:\/\/api.github.com\/users\/jotterbach\/repos","events_url":"https:\/\/api.github.com\/users\/jotterbach\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jotterbach\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661969306000,"updated_at":1661969748000,"closed_at":1661969748000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen trying to load a streaming dataset in Google Colab the loading fails with an I\/O error\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nfrom datasets import load_dataset\r\nhf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)\r\nlist(hf_ds.take(5))\r\n```\r\n\r\n## Expected results\r\nIt should load five data points\r\n\r\n## Actual results\r\n```\r\n---------------------------------------------------------------------------\r\nValueError 
Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in \r\n 2 from datasets import load_dataset\r\n 3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)\r\n----> 4 list(hf_ds.take(5))\r\n\r\n6 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 716 \r\n 717 def __iter__(self):\r\n--> 718 for key, example in self._iter():\r\n 719 if self.features:\r\n 720 # `IterableDataset` automatically fills missing columns with None.\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in _iter(self)\r\n 706 else:\r\n 707 ex_iterable = self._ex_iterable\r\n--> 708 yield from ex_iterable\r\n 709 \r\n 710 def _iter_shard(self, shard_idx: int):\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 582 \r\n 583 def __iter__(self):\r\n--> 584 yield from islice(self.ex_iterable, self.n)\r\n 585 \r\n 586 def shuffle_data_sources(self, generator: np.random.Generator) -> \"TakeExamplesIterable\":\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 110 \r\n 111 def __iter__(self):\r\n--> 112 yield from self.generate_examples_fn(**self.kwargs)\r\n 113 \r\n 114 def shuffle_data_sources(self, generator: np.random.Generator) -> \"ExamplesIterable\":\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)\r\n 845 raise ValueError(\"Invalid number of files: %d\" % len(files))\r\n 846 \r\n--> 847 for sub_key, ex in sub_generator(*sub_generator_args):\r\n 848 if not all(ex.values()):\r\n 849 continue\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)\r\n 923 l2_sentences, l2 = parse_file(f2_i, filename2)\r\n 924 \r\n--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):\r\n 926 key = f\"{f_id}\/{line_id}\"\r\n 927 yield key, {l1: s1, l2: s2}\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in gen()\r\n 895 \r\n 896 def gen():\r\n--> 897 with open(path, encoding=\"utf-8\") as f:\r\n 898 for line in f:\r\n 899 seg_match = re.match(seg_re, line)\r\n\r\nValueError: I\/O operation on closed file.\r\n```\r\n\r\n## Environment info\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 9.0.0. 
(the same error happened with PyArrow version 6.0.0)\r\n- Pandas version: 1.3.5\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/timeline","performed_via_github_app":null,"state_reason":"not_planned","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921","id":1357609003,"node_id":"PR_kwDODunzps4-JVFV","number":4921,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661964747000,"updated_at":1662008808000,"closed_at":1662008693000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4921","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921.patch","merged_at":1662008693000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4920","id":1357564589,"node_id":"I_kwDODunzps5Q6sqt","number":4920,"title":"Unable to load local tsv files through load_dataset method","user":{"login":"DataNoob0723","id":44038517,"node_id":"MDQ6VXNlcjQ0MDM4NTE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44038517?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DataNoob0723","html_url":"https:\/\/github.com\/DataNoob0723","followers_url":"https:\/\/api.github.com\/users\/DataNoob0723\/followers","following_url":"https:\/\/api.github.com\/users\/DataNoob0723\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DataNoob0723\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DataNoob0723\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DataNoob0723\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DataNoob0723\/orgs","repos_url":"https:\/\/api.github.com\/users\/DataNoob0723\/repos","events_url":"https:\/\/api.github.com\/users\/DataNoob0723\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DataNoob0723\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV\/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` "],"created_at":1661962419000,"updated_at":1662010290000,"closed_at":1662010290000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nUnable to load local tsv files through load_dataset method.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\ndata_files = {\r\n 'train': 'train.tsv',\r\n 'test': 'test.tsv'\r\n}\r\nraw_datasets = load_dataset('tsv', data_files=data_files)\r\n\r\n## Expected results\r\nI am pretty sure the data files exist in the current directory. 
The above code should load them as Datasets, but threw exceptions.\r\n\r\n## Actual results\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in \r\n----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')\r\n\r\n2 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1244 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1245 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n-> 1246 ) from None\r\n 1247 raise e1 from None\r\n 1248 else:\r\n\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/tsv\/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/datasets\/tsv\/tsv.py\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919","id":1357441599,"node_id":"PR_kwDODunzps4-IxDZ","number":4919,"title":"feat: improve error message on Keys mismatch. 
closes #4917","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","We are having an unrelated issue that makes several tests fail. We are working on that. Once fixed, you will be able to merge the main branch into this, so that you get the fix and the tests pass..."],"created_at":1661956896000,"updated_at":1662367561000,"closed_at":1662367413000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi @lhoestq what do you think?\r\n\r\nLet me give you a code sample:\r\n```py\r\n>>> import datasets\r\n>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})\r\n>>> foo.save_to_disk('foo')\r\n# edit foo\/dataset_info.json e.g. 
rename the 'foo' feature to 'baz'\r\n>>> datasets.load_from_disk('foo')\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in \r\n----> 1 datasets.load_from_disk('foo')\r\n\r\n~\/code\/datasets\/src\/datasets\/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1851 raise FileNotFoundError(f\"Directory {dataset_path} not found\")\r\n 1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()):\r\n-> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n 1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n\r\n~\/code\/datasets\/src\/datasets\/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1230 info=dataset_info,\r\n 1231 split=split,\r\n-> 1232 fingerprint=state[\"_fingerprint\"],\r\n 1233 )\r\n 1234 \r\n\r\n~\/code\/datasets\/src\/datasets\/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 687 self.info.features = inferred_features\r\n 688 else: # make sure the nested columns are in the right order\r\n--> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features)\r\n 690 \r\n 691 # Infer fingerprint if None\r\n\r\n~\/code\/datasets\/src\/datasets\/features\/features.py in reorder_fields_as(self, other)\r\n 1771 return source\r\n 1772 \r\n-> 1773 return Features(recursive_reorder(self, other))\r\n 1774 \r\n 1775 def flatten(self, max_depth=16) -> \"Features\":\r\n\r\n~\/code\/datasets\/src\/datasets\/features\/features.py in recursive_reorder(source, target, stack)\r\n 1760 f\"{source.keys()-target.keys()} are missing from dataset.arrow \"\r\n 1761 f\"and {target.keys()-source.keys()} are missing from dataset_info.json\"+stack_position)\r\n-> 1762 raise ValueError(message)\r\n 1763 return {key: recursive_reorder(source[key], target[key], stack + f\".{key}\") for key in target}\r\n 1764 elif isinstance(source, list):\r\n\r\nValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow).\r\n{'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4919","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919.patch","merged_at":1662367413000},"is_pull_request":true} 
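To make the improved diagnostic in the PR above concrete: a rough, self-contained sketch of the key-set comparison it adds. The function name `keys_mismatch_message` is hypothetical; the real logic lives in `Features.reorder_fields_as` / `recursive_reorder` in `features.py`.

```python
# Rough sketch of the kind of diagnostic added in this PR: given the features
# recorded in dataset_info.json and the ones inferred from dataset.arrow,
# report exactly which keys are missing on each side. Purely illustrative.
def keys_mismatch_message(from_info_json: dict, inferred: dict) -> str:
    info_keys, arrow_keys = set(from_info_json), set(inferred)
    if info_keys == arrow_keys:
        return "keys match"
    return (
        f"Keys mismatch: between {from_info_json} (dataset_info.json) "
        f"and {inferred} (inferred from dataset.arrow).\n"
        f"{info_keys - arrow_keys} are missing from dataset.arrow "
        f"and {arrow_keys - info_keys} are missing from dataset_info.json"
    )


# Reproduces the shape of the error shown in the traceback above:
# {'baz'} are missing from dataset.arrow and {'foo'} from dataset_info.json.
print(keys_mismatch_message({"baz": "int64", "bar": "int64"},
                            {"foo": "int64", "bar": "int64"}))
```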
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4918","id":1357242757,"node_id":"I_kwDODunzps5Q5eGF","number":4918,"title":"Dataset Viewer issue for pysentimiento\/spanish-targeted-sentiment-headlines","user":{"login":"finiteautomata","id":167943,"node_id":"MDQ6VXNlcjE2Nzk0Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/167943?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/finiteautomata","html_url":"https:\/\/github.com\/finiteautomata","followers_url":"https:\/\/api.github.com\/users\/finiteautomata\/followers","following_url":"https:\/\/api.github.com\/users\/finiteautomata\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/finiteautomata\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/finiteautomata\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/finiteautomata\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/finiteautomata\/orgs","repos_url":"https:\/\/api.github.com\/users\/finiteautomata\/repos","events_url":"https:\/\/api.github.com\/users\/finiteautomata\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/finiteautomata\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n\"Capture\r\n","Thanks @severo! 
"],"created_at":1661947747000,"updated_at":1662413794000,"closed_at":1662395564000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/pysentimiento\/spanish-targeted-sentiment-headlines\n\n### Description\n\nAfter moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4917","id":1357193841,"node_id":"I_kwDODunzps5Q5SJx","number":4917,"title":"Keys mismatch: make error message more informative","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7feeb5648a63b6135a8259dedc3b1e19185ee4c7\/src\/datasets\/features\/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?","Is this open to work on? 
I'd love to take on this as my first issue.","Hi @daspartho I\u2019ve opened a PR #4919 \r\nI don\u2019t think there\u2019s much left to do","ok : )"],"created_at":1661945074000,"updated_at":1662367418000,"closed_at":1662367418000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don\u2019t know when\/why\/how this happens but it deserves its own issue), you will get an error message like:\r\n`ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}`\r\n\r\nWhich is fine when you have only a few features like in the example but it gets very hard to read when you have a lot of features in your dataset.\r\n\r\n**Describe the solution you'd like**\r\nThe error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`.\r\n\r\nWilling to help :)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4916","id":1357076940,"node_id":"I_kwDODunzps5Q41nM","number":4916,"title":"Apache Beam unable to write the downloaded wikipedia dataset","user":{"login":"Shilpac20","id":71849081,"node_id":"MDQ6VXNlcjcxODQ5MDgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71849081?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shilpac20","html_url":"https:\/\/github.com\/Shilpac20","followers_url":"https:\/\/api.github.com\/users\/Shilpac20\/followers","following_url":"https:\/\/api.github.com\/users\/Shilpac20\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shilpac20\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shilpac20\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shilpac20\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shilpac20\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shilpac20\/repos","events_url":"https:\/\/api.github.com\/users\/Shilpac20\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shilpac20\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["See:\r\n- 
#4915"],"created_at":1661938765000,"updated_at":1661943199000,"closed_at":1661943199000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nHi, I am currently trying to download wikipedia dataset using\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner')\r\n```\r\n\r\n## Expected results\r\nto load the dataset\r\n\r\n## Actual results\r\nI am pasting the error trace here:\r\nDownloading builder script: 35.9kB [00:00, ?B\/s]\r\nDownloading metadata: 30.4kB [00:00, 1.94MB\/s]\r\nUsing custom data configuration 20220401.aa-date=20220401,language=aa\r\nDownloading and preparing dataset wikipedia\/20220401.aa to C:\\Users\\Shilpa.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11.1k\/11.1k [00:00<00:00, 712kB\/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:02<00:00, 2.82s\/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00 You can find the full list of languages and dates [here](https:\/\/dumps.wikimedia.org\/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is available online, as Wikimedia only keeps last X months (depending on the size of the corresponding language dump)): e.g. to see which dates \"aa\" Wikipedia is available online, see https:\/\/dumps.wikimedia.org\/aawiki\/ (as of today 2022-08-31, the available dates are from [20220401](https:\/\/dumps.wikimedia.org\/aawiki\/20220401\/) to [20220820](https:\/\/dumps.wikimedia.org\/aawiki\/20220820\/)).","Hi, the date that I have specified \"20220401\" is available for the language \"aa\". 
The error persists for any other available dates as present in https:\/\/dumps.wikimedia.org\/aawiki\/. The error is mainly due to apache beam not able to write the downloaded files. Any help on this?","I see, sorry, I misread your issue.\r\n\r\nWe are investigating this."],"created_at":1661876146000,"updated_at":1661943175000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nHi, I am currently trying to download wikipedia dataset using \r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download.\r\n\r\n\r\nEnvironment:\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner')\r\n```\r\n\r\n## Expected results\r\nto load the dataset\r\n\r\n## Actual results\r\nI am pasting the error trace here:\r\nDownloading builder script: 35.9kB [00:00, ?B\/s]\r\nDownloading metadata: 30.4kB [00:00, 1.94MB\/s]\r\nUsing custom data configuration 20220401.aa-date=20220401,language=aa\r\nDownloading and preparing dataset wikipedia\/20220401.aa to C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11.1k\/11.1k [00:00<00:00, 712kB\/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:02<00:00, 2.82s\/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00\r\n beam_runner='DirectRunner')\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\load.py\", line 1751, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\builder.py\", line 705, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\builder.py\", line 1394, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\pipeline.py\", line 574, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\direct\\direct_runner.py\", line 131, in run_pipeline\r\n return 
runner.run_pipeline(pipeline, options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 201, in run_pipeline\r\n options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 212, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 443, in run_stages\r\n runner_execution_context, bundle_context_manager, bundle_input)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 776, in _execute_bundle\r\n bundle_manager))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 1000, in _run_bundle\r\n data_input, data_output, input_timers, expected_timer_output)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 1309, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\worker_handlers.py\", line 380, in push\r\n response = self.worker.do_instruction(request)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\sdk_worker.py\", line 598, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\sdk_worker.py\", line 635, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\bundle_processor.py\", line 1004, in process_bundle\r\n element.data)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\bundle_processor.py\", line 227, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 526, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 528, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File 
\"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive\r\n 
File \"apache_beam\\runners\\worker\\operations.py\", line 905, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\iobase.py\", line 1193, in process\r\n self.writer = 
self.sink.open_writer(init_result, str(uuid.uuid4()))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\options\\value_provider.py\", line 193, in _f\r\n return fnc(self, *args, **kwargs)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 202, in open_writer\r\n return FileBasedSinkWriter(self, writer_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 419, in __init__\r\n self.temp_handle = self.sink.open(temp_shard_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\parquetio.py\", line 553, in open\r\n self._file_handle = super().open(temp_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\options\\value_provider.py\", line 193, in _f\r\n return fnc(self, *args, **kwargs)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 139, in open\r\n temp_path, self.mime_type, self.compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filesystems.py\", line 224, in create\r\n return filesystem.create(path, mime_type, compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\localfilesystem.py\", line 163, in create\r\n return self._path_open(path, 'wb', mime_type, compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\localfilesystem.py\", line 140, in _path_open\r\n raw_file = io.open(path, mode)\r\nRuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\Users\\\\Shilpa\\\\.cache\\\\huggingface\\\\datasets\\\\wikipedia\\\\20220401.aa-date=20220401,language=aa\\\\2.0.0\\\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train\/Save to parquet\/Write\/WriteImpl\/WriteBundles']\r\n\r\n## Environment info\r\nPython: 3.7.6\r\nWindows 10 Pro\r\ndatasets :2.4.0\r\napache_beam: 2.41.0\r\nmwparserfromhell: 0.6.4\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4915\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4915\/timeline","performed_via_github_app":null,"state_reason":"reopened","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914","id":1355482624,"node_id":"PR_kwDODunzps4-CFyN","number":4914,"title":"Support streaming swda 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661852788000,"updated_at":1661858193000,"closed_at":1661858056000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming swda dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4914","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914.patch","merged_at":1661858055000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913","id":1355232007,"node_id":"PR_kwDODunzps4-BP00","number":4913,"title":"Add license and citation information to cosmos_qa 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661840599000,"updated_at":1661852971000,"closed_at":1661852855000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.\r\n\r\nThis PR also updates the citation information.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4913","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913.patch","merged_at":1661852855000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4912","id":1355078864,"node_id":"I_kwDODunzps5QxNzQ","number":4912,"title":"datasets map() handles all data at a stroke and takes long 
time","user":{"login":"BruceStayHungry","id":40711748,"node_id":"MDQ6VXNlcjQwNzExNzQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40711748?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BruceStayHungry","html_url":"https:\/\/github.com\/BruceStayHungry","followers_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/followers","following_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/orgs","repos_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/repos","events_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both options are great and really depend on your case.\r\n\r\nTo choose between the two, here are IMO the main caveats of each approach:\r\n- if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n- on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\n> Why huggingface advises map() function? There should be some advantages to using map()\r\n\r\nTo get the best throughput when training a model, it is often recommended to preprocess your dataset before training. Note that preprocessing may include other steps before tokenization such as data filtering, cleaning, chunking etc. which are often done before training.","Thanks for your clear explanation @lhoestq ! \r\n> * if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n> * on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\nI really agree with you. There should be some trade-off between processing before and during the train loop.\r\nBesides, I find `map()` function can cache the results once it has been executed. Very useful!","I'm closing this issue if you don't mind, feel free to reopen if needed ;)"],"created_at":1661826356000,"updated_at":1662456215000,"closed_at":1662456215000,"author_association":"NONE","active_lock_reason":null,"body":"**1. Background**\r\n\r\nHuggingface datasets package advises using `map()` to process data in batches. In the example code on pretraining masked language model, they use `map()` to tokenize all data at a stroke before the train loop. 
\r\n\r\nThe corresponding `map()` code from that example:\r\n```\r\nwith accelerator.main_process_first():\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not args.overwrite_cache,\r\n desc=\"Running tokenizer on every text in dataset\"\r\n )\r\n```\r\n\r\n**2. The problem**\r\n\r\nThus, when I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.\r\n\r\nAlso, we can choose to tokenize data in `data-collator`. In this way, the program only tokenizes one batch in the next training step and avoids getting stuck in tokenization.\r\n\r\n**3. My question**\r\n\r\nAs described above, my questions are:\r\n* **Which is better? Process in `map()` or in `data-collator`**\r\n* **Why huggingface advises `map()` function?** There should be some advantages to using `map()`\r\n\r\n\r\nThanks for your answers!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4911","id":1354426978,"node_id":"I_kwDODunzps5Quupi","number":4911,"title":"[Tests] Ensure `datasets` supports renamed repositories","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin 
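\r\n\r\nFor reference, a rough sketch of what such an integration test could look like on hub-ci (`HfApi.move_repo` is the `huggingface_hub` wrapper around the \/api\/repos\/move endpoint; the endpoint value, repo ids and uploaded file below are placeholders, not existing test code):\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import HfApi\r\n\r\ndef test_load_dataset_after_rename(hf_token):\r\n    # placeholder hub-ci endpoint; assumes HF_ENDPOINT also points load_dataset at hub-ci\r\n    api = HfApi(endpoint=\"https:\/\/hub-ci.huggingface.co\")\r\n    api.create_repo(\"__DUMMY_USER__\/old_name\", token=hf_token, repo_type=\"dataset\")\r\n    api.upload_file(path_or_fileobj=b\"text\\nhello\", path_in_repo=\"data.csv\", repo_id=\"__DUMMY_USER__\/old_name\", repo_type=\"dataset\", token=hf_token)\r\n    api.move_repo(from_id=\"__DUMMY_USER__\/old_name\", to_id=\"__DUMMY_USER__\/new_name\", repo_type=\"dataset\", token=hf_token)\r\n    # loading with the old name should still work thanks to the redirection\r\n    ds = load_dataset(\"__DUMMY_USER__\/old_name\", split=\"train\")\r\n    assert len(ds) > 0\r\n```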
"],"created_at":1661784374000,"updated_at":1661787063000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"On https:\/\/hf.co\/datasets you can rename a dataset (or sometimes move it to another user\/org). The website handles redirections correctly and AFAIK `datasets` does as well.\r\n\r\nHowever it would be nice to have an integration test to make sure we don't break support for renamed datasets.\r\n\r\nTo implement this we can use the \/api\/repos\/move endpoint on hub-ci to rename\/move a repo (it is documented at https:\/\/huggingface.co\/docs\/hub\/api)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4910","id":1354374328,"node_id":"I_kwDODunzps5Quhy4","number":4910,"title":"Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()","user":{"login":"bablf","id":57184353,"node_id":"MDQ6VXNlcjU3MTg0MzUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57184353?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bablf","html_url":"https:\/\/github.com\/bablf","followers_url":"https:\/\/api.github.com\/users\/bablf\/followers","following_url":"https:\/\/api.github.com\/users\/bablf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bablf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bablf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bablf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bablf\/orgs","repos_url":"https:\/\/api.github.com\/users\/bablf\/repos","events_url":"https:\/\/api.github.com\/users\/bablf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bablf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"open","locked":false,"assignee":{"login":"thepurpleowl","id":21123710,"node_id":"MDQ6VXNlcjIxMTIzNzEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21123710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thepurpleowl","html_url":"https:\/\/github.com\/thepurpleowl","followers_url":"https:\/\/api.github.com\/users\/thepurpleowl\/followers","following_url":"https:\/\/api.github.com\/users\/thepurpleowl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thepurpleowl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thepurpleowl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thepurpleowl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thepurpleowl\/orgs","repos_url":"https:\/\/api.github.com\/users\/thepurpleowl\/repos","events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/received_events","type":"User","site_admin":false},"assignees":[{"login":"thepurpleowl","id":21123710,"node_id":"MDQ6VXNlcjIxMTIzNzEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21123710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thepurpleowl","html_url":"https:\/\/github.com\/thepurpleowl","followers_url":"https:\/\/api.github.com\/users\/thepurpleowl\/followers","following_url":"https:\/\/api.github.com\/users\/thepurpleowl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thepurpleowl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thepurpleowl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thepurpleowl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thepurpleowl\/orgs","repos_url":"https:\/\/api.github.com\/users\/thepurpleowl\/repos","events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https:\/\/huggingface.co\/docs\/datasets\/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0","In my case, this was happening because I defined multiple `BuilderConfig` for multiple types, but didn't had all the data files that are requierd by those configs. \r\n\r\nI think this is different than the original issue by @bablf .","Hi ! I think this can be fixed by letting the config_kwargs take over the builder kwargs here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7feeb5648a63b6135a8259dedc3b1e19185ee4c7\/src\/datasets\/load.py#L1533-L1534\r\n\r\nmaybe something like this ?\r\n```python\r\n **{**builder_kwargs, **config_kwargs}\r\n```\r\n\r\nLet me know if you'd like to contribute and fix this bug, so I can assign you :)\r\n\r\n> In my case, this was happening because I defined multiple BuilderConfig for multiple types, but didn't had all the data files that are requierd by those configs.\r\n> \r\n> I think this is different than the original issue by @bablf .\r\n\r\nFeel free to to open an new issue, I'd be happy to help\r\n","@lhoestq Yeah, I want to, please assign.","Cool thank you ! 
Let me know if you have questions or if I can help","@lhoestq On second thoughts, I think this might be expected behavior; although a better error message might help.\r\n\r\nReasoning: Given n configs, if no data file is provided for any config, then it should be an error. Then why should it not also be an error if, out of n configs, data files are provided for some but not for others? Also, I was using the `--all_configs` flag with `dataset-cli test`.","Ok I see - maybe we should check the values of builder_kwargs and raise an error if any key in config_kwargs tries to overwrite them ? The builder kwargs are determined from the builder's type and location (in some cases it forces the base_path, data_files and config name for example)"],"created_at":1661782308000,"updated_at":1663070326000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIn `load_dataset_builder()`, `builder_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError(\"type object got multiple values for keyword argument \"xyz\"). \r\n\r\nI ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be \r\n```python\r\nbuilder_cls = import_main_class(dataset_module.module_path)\r\nbuilder_kwargs = dataset_module.builder_kwargs\r\ndata_files = builder_kwargs.pop(\"data_files\", data_files)\r\nconfig_name = builder_kwargs.pop(\"config_name\", name)\r\nhash = builder_kwargs.pop(\"hash\")\r\nbase_path = builder_kwargs.pop(\"base_path\")\r\n```\r\nand then pass base_path into `builder_cls`.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"rotten_tomatoes\", base_path=\".\/sample_data\")\r\n```\r\n\r\n## Expected results\r\nThe docs state: `**config_kwargs` \u2014 Keyword arguments to be passed to the [BuilderConfig](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/builder_classes#datasets.DatasetBuilder).\r\n\r\nSo I would expect to be able to pass the base_path into `load_dataset()`. \r\n## Actual results\r\nTypeError(\"type object got multiple values for keyword argument \"base_path\"). 
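\r\n\r\nA minimal sketch of the fix suggested in the comments above (letting `config_kwargs` take precedence when the builder is instantiated in `load_dataset_builder()`; the surrounding call is paraphrased from `src\/datasets\/load.py`, not copied verbatim):\r\n```python\r\nbuilder_instance: DatasetBuilder = builder_cls(\r\n    cache_dir=cache_dir,\r\n    hash=hash,\r\n    features=features,\r\n    **{**builder_kwargs, **config_kwargs},  # config_kwargs wins on duplicate keys such as base_path\r\n)\r\n```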
\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5-arm64-arm-64bit\r\n- Python version: 3.8.9\r\n- PyArrow version: 9.0.0\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909","id":1353997788,"node_id":"PR_kwDODunzps499Fhe","number":4909,"title":"Update GLUE evaluation metadata","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661766224000,"updated_at":1661784809000,"closed_at":1661784678000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates the evaluation metadata for GLUE to:\r\n\r\n* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)\r\n* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)\r\n* Fix the `task_id` for some existing defaults\r\n\r\ncc @sashavor @douwekiela ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909.patch","merged_at":1661784678000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908","id":1353995574,"node_id":"PR_kwDODunzps499FDS","number":4908,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661766113000,"updated_at":1661789729000,"closed_at":1661789587000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4908","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908.patch","merged_at":1661789587000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4907","id":1353808348,"node_id":"I_kwDODunzps5QsXnc","number":4907,"title":"None Type error for swda datasets","user":{"login":"hannan72","id":8229163,"node_id":"MDQ6VXNlcjgyMjkxNjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8229163?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hannan72","html_url":"https:\/\/github.com\/hannan72","followers_url":"https:\/\/api.github.com\/users\/hannan72\/followers","following_url":"https:\/\/api.github.com\/users\/hannan72\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hannan72\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hannan72\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hannan72\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hannan72\/orgs","repos_url":"https:\/\/api.github.com\/users\/hannan72\/repos","events_url":"https:\/\/api.github.com\/users\/hannan72\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hannan72\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @hannan72 ! 
I couldn't reproduce the error on my side, can you share the full stack trace please ?","Thanks a lot for your response @lhoestq \r\nThe problem resolved itself today, and I don't know exactly why it happened yesterday.\r\nThe issue can be closed.","Ok, let us know if you encounter the issue again ;)"],"created_at":1661756720000,"updated_at":1661870621000,"closed_at":1661870621000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI got a `'NoneType' object is not callable` error while loading the swda dataset.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"swda\")\r\n```\r\n\r\n## Expected results\r\nRun without error\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Python version: 3.8.10\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4906","id":1353223925,"node_id":"I_kwDODunzps5QqI71","number":4906,"title":"Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)","user":{"login":"OPterminator","id":63536981,"node_id":"MDQ6VXNlcjYzNTM2OTgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/63536981?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/OPterminator","html_url":"https:\/\/github.com\/OPterminator","followers_url":"https:\/\/api.github.com\/users\/OPterminator\/followers","following_url":"https:\/\/api.github.com\/users\/OPterminator\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/OPterminator\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/OPterminator\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/OPterminator\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/OPterminator\/orgs","repos_url":"https:\/\/api.github.com\/users\/OPterminator\/repos","events_url":"https:\/\/api.github.com\/users\/OPterminator\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/OPterminator\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could 
generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date)."],"created_at":1661653404000,"updated_at":1661750596000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\nNot able to import datasets \r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nimport os\r\nos.environ[\"WANDB_API_KEY\"] = \"0\" ## to silence warning\r\nimport numpy as np\r\nimport random\r\nimport sklearn\r\nimport matplotlib.pyplot as plt\r\nimport pandas as pd\r\nimport sys\r\nimport tensorflow as tf\r\nimport plotly.express as px\r\nimport transformers\r\nimport tokenizers\r\nimport nlp as nlp\r\nimport utils\r\nimport datasets\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\nimport should work normal\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n in \r\n 13 import nlp as nlp\r\n 14 import utils\r\n---> 15 import datasets\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\__init__.py in \r\n 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 45 from .info import DatasetInfo, MetricInfo\r\n---> 46 from .inspect import (\r\n 47 get_dataset_config_info,\r\n 48 get_dataset_config_names,\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\inspect.py in \r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n 29 from .info import DatasetInfo\r\n---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory\r\n 31 from .utils.file_utils import relative_to_absolute_path\r\n 32 from .utils.logging import get_logger\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\load.py in \r\n 53 from .iterable_dataset import IterableDataset\r\n 54 from .metric import Metric\r\n---> 55 from .packaged_modules import (\r\n 56 _EXTENSION_TO_MODULE,\r\n 57 _MODULE_SUPPORTS_METADATA,\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\packaged_modules\\__init__.py in \r\n 4 from typing import List\r\n 5 \r\n----> 6 from .csv import csv\r\n 7 from .imagefolder import imagefolder\r\n 8 from .json import json\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\packaged_modules\\csv\\csv.py in \r\n 13 \r\n 14 \r\n---> 15 logger = datasets.utils.logging.get_logger(__name__)\r\n 16 \r\n 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = [\"names\", \"prefix\"]\r\n\r\nAttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)\r\n\r\n## Environment info\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Windows-10-10.0.22000-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 
1.2.4\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904","id":1353002837,"node_id":"PR_kwDODunzps4959Ad","number":4904,"title":"[LibriSpeech] Fix dev split local_extracted_archive for 'all' config","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This PR fixes a bug introduced in:\r\n- #4184"],"created_at":1661594697000,"updated_at":1661853981000,"closed_at":1661853805000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L60-L61\r\n\r\nThese keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.\r\n\r\nHowever, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L212\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L219\r\n\r\nThe consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exists in the `local_extracted_archive`.\r\n\r\nWhen defining the `audio_file` in `_generate_examples`, since 
`local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** if in non-streaming mode:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L259-L263\r\n\r\nThus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`).\r\n\r\nThis PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4904","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904.patch","merged_at":1661853805000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903","id":1352539075,"node_id":"PR_kwDODunzps494aud","number":4903,"title":"Fix CI reporting","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661534190000,"updated_at":1661536173000,"closed_at":1661536019000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary.\r\n\r\nThis PR fixes a regression introduced by:\r\n- #4845\r\n\r\nThis introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and 
error.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903.patch","merged_at":1661536019000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4902","id":1352469196,"node_id":"I_kwDODunzps5QnQrM","number":4902,"title":"Name the default config `default`","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661530582000,"updated_at":1661530598000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, if a dataset has no configuration, a default configuration is created from the dataset name.\r\n\r\nFor example, for a dataset loaded from the hub repository, such as https:\/\/huggingface.co\/datasets\/user\/dataset (repo id is `user\/dataset`), the default configuration will be `user--dataset`.\r\n\r\nIt might be easier to handle to set it to `default`, or another reserved 
word.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901","id":1352438915,"node_id":"PR_kwDODunzps494FNX","number":4901,"title":"Raise ManualDownloadError from get_dataset_config_info","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661528756000,"updated_at":1661856141000,"closed_at":1661856004000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PRs raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.\r\n\r\nRelated to:\r\n- #4898\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4901","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901.patch","merged_at":1661856004000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4900","id":1352405855,"node_id":"I_kwDODunzps5QnBNf","number":4900,"title":"Dataset Viewer issue for asaxena1990\/Dummy_dataset","user":{"login":"ankurcl","id":56627657,"node_id":"MDQ6VXNlcjU2NjI3NjU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56627657?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ankurcl","html_url":"https:\/\/github.com\/ankurcl","followers_url":"https:\/\/api.github.com\/users\/ankurcl\/followers","following_url":"https:\/\/api.github.com\/users\/ankurcl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ankurcl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ankurcl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ankurcl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ankurcl\/orgs","repos_url":"https:\/\/api.github.com\/users\/ankurcl\/repos","events_url":"https:\/\/api.github.com\/users\/ankurcl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ankurcl\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990\/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data configuration asaxena1990--Dummy_dataset-4a704ed7e5627563\r\n>>> dataset._resolve_features()\r\nFailed to read file 'https:\/\/huggingface.co\/datasets\/asaxena1990\/Dummy_dataset\/resolve\/06885879a8bdd767d2d27695484fc6c83244617a\/dummy_dataset_train.json' with error : JSON parse error: Column() changed from object to array in row 0\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 109, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow\/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 1261, in _resolve_features\r\n features = _infer_features_from_batch(self._head())\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 686, in _head\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File 
\"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 686, in \r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 708, in _iter\r\n yield from ex_iterable\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 112, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 651, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 137, in _generate_tables\r\n f\"This JSON file contain the following fields: {str(list(dataset.keys()))}. \"\r\nAttributeError: 'list' object has no attribute 'keys'\r\n```\r\n\r\nPinging @huggingface\/datasets","Hi ! JSON files containing a list of object are not supported yet, you can use JSON Lines files instead in the meantime\r\n```json\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n...\r\n```"],"created_at":1661526944000,"updated_at":1661532491000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899","id":1352031286,"node_id":"PR_kwDODunzps492uTO","number":4899,"title":"Re-add code and und language 
tags","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661507337000,"updated_at":1661509638000,"closed_at":1661509460000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes the removal of 2 language tags done by:\r\n- #4882\r\n\r\nThe tags are:\r\n- \"code\": this is not a IANA tag but needed\r\n- \"und\": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af\r\n - used in \"mc4\" and \"udhr\" datasets","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4899","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899.patch","merged_at":1661509460000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4898","id":1351851254,"node_id":"I_kwDODunzps5Qk5z2","number":4898,"title":"Dataset Viewer issue for 
timit_asr","user":{"login":"InayatUllah932","id":91126978,"node_id":"MDQ6VXNlcjkxMTI2OTc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/91126978?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/InayatUllah932","html_url":"https:\/\/github.com\/InayatUllah932","followers_url":"https:\/\/api.github.com\/users\/InayatUllah932\/followers","following_url":"https:\/\/api.github.com\/users\/InayatUllah932\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/InayatUllah932\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/InayatUllah932\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/InayatUllah932\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/InayatUllah932\/orgs","repos_url":"https:\/\/api.github.com\/users\/InayatUllah932\/repos","events_url":"https:\/\/api.github.com\/users\/InayatUllah932\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/InayatUllah932\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB\/s]\r\nTraceback (most recent call last):\r\n File 
\"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/timit_asr\/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2\/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.9.6\/lib\/python3.9\/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface\/datasets ","Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https:\/\/huggingface.co\/datasets\/timit_asr\r\n> The dataset needs to be downloaded manually from https:\/\/catalog.ldc.upenn.edu\/LDC93S1","Maybe a better error message for datasets that need manual downloading? 
@severo \r\n\r\nMaybe we can raise a specific exception as done from `load_dataset`...","Yes, ideally something like https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/builder.py#L81\r\n"],"created_at":1661497925000,"updated_at":1661526229000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4897","id":1351784727,"node_id":"I_kwDODunzps5QkpkX","number":4897,"title":"datasets generate large arrow file","user":{"login":"osayes","id":18533904,"node_id":"MDQ6VXNlcjE4NTMzOTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18533904?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osayes","html_url":"https:\/\/github.com\/osayes","followers_url":"https:\/\/api.github.com\/users\/osayes\/followers","following_url":"https:\/\/api.github.com\/users\/osayes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osayes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osayes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osayes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osayes\/orgs","repos_url":"https:\/\/api.github.com\/users\/osayes\/repos","events_url":"https:\/\/api.github.com\/users\/osayes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osayes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?","@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grows over 200 times in size. 
\r\nI think maybe it doesn't matter, it's just cache after all."],"created_at":1661493076000,"updated_at":1663477672000,"closed_at":1663477672000,"author_association":"NONE","active_lock_reason":null,"body":"Checking the large files on disk, I found this large cache file in the cifar10 data directory:\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/18533904\/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png)\r\n\r\nAs we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB in size, so there may be some problem here.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896","id":1351180409,"node_id":"PR_kwDODunzps49z4fU","number":4896,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661445703000,"updated_at":1661489070000,"closed_at":1661488908000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4896","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896.patch","merged_at":1661488908000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4895","id":1350798527,"node_id":"I_kwDODunzps5Qg4y_","number":4895,"title":"load_dataset method returns Unknown split \"validation\" even if this dir exists","user":{"login":"SamSamhuns","id":13418507,"node_id":"MDQ6VXNlcjEzNDE4NTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13418507?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SamSamhuns","html_url":"https:\/\/github.com\/SamSamhuns","followers_url":"https:\/\/api.github.com\/users\/SamSamhuns\/followers","following_url":"https:\/\/api.github.com\/users\/SamSamhuns\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SamSamhuns\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SamSamhuns\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SamSamhuns\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SamSamhuns\/orgs","repos_url":"https:\/\/api.github.com\/users\/SamSamhuns\/repos","events_url":"https:\/\/api.github.com\/users\/SamSamhuns\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SamSamhuns\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I don't know the main problem, but it looks like it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. If it doesn't work, create a directory called \"aaa\". It worked for me.\r\n","@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get split names from filenames in this case, ignoring directory names; `val` wasn't in the keywords at that time, but it was fixed recently in this PR https:\/\/github.com\/huggingface\/datasets\/pull\/4844. 
","I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https:\/\/huggingface.co\/datasets\/sberbank-ai\/Peter\/tree\/add_splits (add_splits branch)","@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~\/.cache\/huggingface\/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.","This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ","> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !","Looks like the `val\/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.","Thanks for the reply\r\n\r\nI've created a separate [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4982#issue-1375604693) for my problem.","> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! 
opened a PR: https:\/\/github.com\/huggingface\/datasets\/pull\/4985"],"created_at":1661429460000,"updated_at":1663327271000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe `datasets.load_dataset` returns a `ValueError: Unknown split \"validation\". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split=\"validation\")` even if the `validation` sub-directory exists in the local data path.\r\n\r\nThe data directories are as follows and attached to this issue:\r\n```\r\ntest_data1\r\n |_ train\r\n |_ 1012.png\r\n |_ metadata.jsonl\r\n ...\r\n |_ test\r\n ...\r\n |_ validation\r\n |_ 234.png\r\n |_ metadata.jsonl\r\n ...\r\ntest_data2\r\n |_ train\r\n |_ train_1012.png\r\n |_ metadata.jsonl\r\n ...\r\n |_ test\r\n ...\r\n |_ validation\r\n |_ val_234.png\r\n |_ metadata.jsonl\r\n ...\r\n```\r\n\r\nThey contain the same image files and `metadata.jsonl` but the images in `test_data2` have the split names prepended i.e.\r\n`train_1012.png, val_234.png` and the images in `test_data1` do not have the split names prepended to the image names i.e. `1012.png, 234.png`\r\n\r\nI actually saw in another issue `val` was not recognized as a split name but here I would expect the files to take the split from the parent directory name i.e. val should become part of the validation split?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\ndatasets.logging.set_verbosity_error()\r\nfrom datasets import load_dataset, get_dataset_split_names\r\n\r\n\r\n# the following only finds train, validation and test splits correctly\r\npath = \".\/test_data1\"\r\nprint(\"######################\", get_dataset_split_names(path), \"######################\")\r\n\r\ndataset_list = []\r\nfor spt in [\"train\", \"test\", \"validation\"]:\r\n dataset = load_dataset(path, split=spt)\r\n dataset_list.append(dataset)\r\n\r\n\r\n# the following only finds train and test splits\r\npath = \".\/test_data2\"\r\nprint(\"######################\", get_dataset_split_names(path), \"######################\")\r\n\r\ndataset_list = []\r\nfor spt in [\"train\", \"test\", \"validation\"]:\r\n dataset = load_dataset(path, split=spt)\r\n dataset_list.append(dataset)\r\n```\r\n\r\n\r\n## Expected results\r\n```\r\n###################### ['train', 'test', 'validation'] ######################\r\n###################### ['train', 'test', 'validation'] ######################\r\n```\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"test_data_loader.py\", line 11, in \r\n\r\n dataset = load_dataset(path, split=spt)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1758, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 893, in as_dataset\r\n datasets = map_nested(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 924, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 993, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 211, in 
read\r\n files = self.get_file_instructions(name, instructions, split_infos)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 184, in get_file_instructions\r\n file_instructions = make_file_instructions(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 107, in make_file_instructions\r\n absolute_instructions = instruction.to_absolute(name2len)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 616, in to_absolute\r\n return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 616, in \r\n return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 433, in _rel_to_abs_instr\r\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\r\nValueError: Unknown split \"validation\". Should be one of ['train', 'test'].\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Linux Ubuntu 18.04\r\n- Python version: 3.8.12\r\n- PyArrow version: 9.0.0\r\n\r\nData files\r\n\r\n[test_data1.zip](https:\/\/github.com\/huggingface\/datasets\/files\/9424463\/test_data1.zip)\r\n[test_data2.zip](https:\/\/github.com\/huggingface\/datasets\/files\/9424468\/test_data2.zip)\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894","id":1350667270,"node_id":"PR_kwDODunzps49yIvr","number":4894,"title":"Add citation information to makhzan 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661422600000,"updated_at":1661840514000,"closed_at":1661433581000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:\r\n- https:\/\/github.com\/zeerakahmed\/makhzan\/issues\/43","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894.patch","merged_at":1661433581000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4893","id":1350655674,"node_id":"I_kwDODunzps5QgV66","number":4893,"title":"Oversampling strategy for iterable datasets in 
`interleave_datasets`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":{"login":"ylacombe","id":52246514,"node_id":"MDQ6VXNlcjUyMjQ2NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52246514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ylacombe","html_url":"https:\/\/github.com\/ylacombe","followers_url":"https:\/\/api.github.com\/users\/ylacombe\/followers","following_url":"https:\/\/api.github.com\/users\/ylacombe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ylacombe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ylacombe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ylacombe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ylacombe\/orgs","repos_url":"https:\/\/api.github.com\/users\/ylacombe\/repos","events_url":"https:\/\/api.github.com\/users\/ylacombe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ylacombe\/received_events","type":"User","site_admin":false},"assignees":[{"login":"ylacombe","id":52246514,"node_id":"MDQ6VXNlcjUyMjQ2NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52246514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ylacombe","html_url":"https:\/\/github.com\/ylacombe","followers_url":"https:\/\/api.github.com\/users\/ylacombe\/followers","following_url":"https:\/\/api.github.com\/users\/ylacombe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ylacombe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ylacombe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ylacombe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ylacombe\/orgs","repos_url":"https:\/\/api.github.com\/users\/ylacombe\/repos","events_url":"https:\/\/api.github.com\/users\/ylacombe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ylacombe\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following 
snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n","Great @ylacombe thanks ! I'm assigning you this issue","Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)","Hi @lhoestq,\r\n\r\nI actually already wrote the code last time [on this commit](https:\/\/github.com\/ylacombe\/datasets\/commit\/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still need your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had changed the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied to an `IterableDataset` because one only knows an iterable dataset is out of samples when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right now, the behavior is not consistent between an `IterableDataset` list and a `Dataset` list when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoids having too many samples, so I would recommend keeping it as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ","Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`"],"created_at":1661422015000,"updated_at":1663069839000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"In https:\/\/github.com\/huggingface\/datasets\/pull\/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. 
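A minimal, library-agnostic sketch of the look-ahead idea from the comment thread above ("pre-load the next sample to know if the dataset is empty or not"); the names `Lookahead` and `interleave_all_exhausted` are hypothetical, and this illustrates the approach rather than the actual `datasets` implementation:

```python
class Lookahead:
    """Wrap an iterable and pre-fetch one item so emptiness is observable."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._advance()

    def _advance(self):
        try:
            self._next = next(self._it)
            self.exhausted = False
        except StopIteration:
            self._next = None
            self.exhausted = True

    def pop(self):
        item = self._next
        self._advance()
        return item


def interleave_all_exhausted(*sources):
    """Round-robin over non-empty sources, restarting finished ones, until
    every source has been fully consumed at least once."""
    lookaheads = [Lookahead(src) for src in sources]
    finished = [False] * len(sources)
    while True:
        for i, la in enumerate(lookaheads):
            if la.exhausted:
                # Restart this source (cheap here because the sources are lists).
                lookaheads[i] = la = Lookahead(sources[i])
            yield la.pop()
            if la.exhausted:  # this source just ran dry
                finished[i] = True
                if all(finished):
                    return


print(list(interleave_all_exhausted([0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24])))
# [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
```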
However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.\r\n\r\nIt would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy\r\n\r\n```python\r\n>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable\r\n>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [0, 1, 2]], {}))\r\n>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [10, 11, 12, 13]], {}))\r\n>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [20, 21, 22, 23, 24]], {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3]) # is supported\r\n>>> [x[\"a\"] for x in dataset]\r\n[0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy=\"all_exhausted\") # is not supported yet\r\n>>> [x[\"a\"] for x in dataset]\r\n[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]\r\n```\r\n\r\nThis can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py`\r\n\r\nI would be happy to share some guidance if anyone would like to give it a shot :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892","id":1350636499,"node_id":"PR_kwDODunzps49yCD3","number":4892,"title":"Add citation to ro_sts and ro_sts_parallel datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR 
live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4892). All of your documentation changes will be reflected on that endpoint."],"created_at":1661421066000,"updated_at":1661424596000,"closed_at":1661424596000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the citation information to `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information:\r\n- https:\/\/github.com\/dumitrescustefan\/RO-STS\/issues\/4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4892","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892.patch","merged_at":1661424596000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891","id":1350589813,"node_id":"PR_kwDODunzps49x382","number":4891,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661418857000,"updated_at":1661435015000,"closed_at":1661435014000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891.patch","merged_at":1661435014000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890","id":1350578029,"node_id":"PR_kwDODunzps49x1YC","number":4890,"title":"add Dataset.from_list","user":{"login":"sanderland","id":48946947,"node_id":"MDQ6VXNlcjQ4OTQ2OTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48946947?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanderland","html_url":"https:\/\/github.com\/sanderland","followers_url":"https:\/\/api.github.com\/users\/sanderland\/followers","following_url":"https:\/\/api.github.com\/users\/sanderland\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanderland\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanderland\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanderland\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanderland\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanderland\/repos","events_url":"https:\/\/api.github.com\/users\/sanderland\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanderland\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@albertvillanova it seems tests fail on pyarrow 6, perhaps from_pylist is a v7 method? How do you usually handle these version differences?\r\nAdded something that at least works"],"created_at":1661418358000,"updated_at":1662114179000,"closed_at":1662114033000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As discussed in #4885 \r\n\r\nI initially added this bit at the end, thinking filling this field was necessary as it is done in from_dict. 
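On the pyarrow 6 vs. 7 question in the comment above: `pa.Table.from_pylist` only exists from pyarrow 7 onwards, so one common way to handle the version difference is feature detection with a manual fallback. A rough sketch of that pattern (an assumption about the approach, not the fallback that was actually merged):

```python
import pyarrow as pa

def table_from_pylist(rows):
    # pa.Table.from_pylist was added in pyarrow 7; use it when available.
    if hasattr(pa.Table, "from_pylist"):
        return pa.Table.from_pylist(rows)
    # Older pyarrow: transpose the list of dicts into a dict of lists.
    keys = {key: None for row in rows for key in row}  # keeps first-seen column order
    return pa.Table.from_pydict({key: [row.get(key) for row in rows] for key in keys})

print(table_from_pylist([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]))
```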
\r\nHowever, it seems the constructor takes care of filling info when it is empty.\r\n```\r\nif info.features is None:\r\n info.features = Features(\r\n {\r\n col: generate_from_arrow_type(coldata.type)\r\n for col, coldata in zip(pa_table.column_names, pa_table.columns)\r\n }\r\n )\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4890","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890.patch","merged_at":1662114033000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4889","id":1349758525,"node_id":"I_kwDODunzps5Qc649","number":4889,"title":"torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It'd be great if you could investigate whether the error lies in datasets or in torchaudio.","torchaudio made a change in [0.12](https:\/\/github.com\/pytorch\/audio\/releases\/tag\/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. 
(https:\/\/github.com\/pytorch\/audio\/pull\/2419, https:\/\/github.com\/pytorch\/audio\/pull\/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors."],"created_at":1661360083000,"updated_at":1661361068000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https:\/\/github.com\/huggingface\/transformers\/pull\/18749\r\n\r\n## Steps to reproduce the bug\r\n\r\nIf you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.\r\n\r\n```python\r\n#!\/usr\/bin\/env python3\r\nfrom datasets import load_dataset\r\nimport datasets\r\nimport numpy as np\r\nimport torch\r\nimport torchaudio\r\nprint(\"torch version\", torch.__version__)\r\nprint(\"torchaudio version\", torchaudio.__version__)\r\n\r\nsave_audio = True\r\nload_audios = False\r\n\r\nif save_audio:\r\n ds = load_dataset(\"common_voice\", \"en\", split=\"train\", streaming=True)\r\n ds = ds.cast_column(\"audio\", datasets.Audio(sampling_rate=16_000))\r\n ds_iter = iter(ds)\r\n sample = next(ds_iter)\r\n\r\n np.save(f\"audio_sample_{torch.__version__}\", sample[\"audio\"][\"array\"])\r\n print(sample[\"audio\"][\"array\"])\r\n\r\nif load_audios:\r\n array_torch_11 = np.load(\"\/home\/patrick\/audio_sample_1.11.0+cu102.npy\")\r\n print(\"Array 11 Shape\", array_torch_11.shape)\r\n print(\"Array 11 abs sum\", np.sum(np.abs(array_torch_11)))\r\n array_torch_12 = np.load(\"\/home\/patrick\/audio_sample_1.12.1+cu102.npy\")\r\n print(\"Array 12 Shape\", array_torch_12.shape)\r\n print(\"Array 12 abs sum\", np.sum(np.abs(array_torch_12)))\r\n```\r\n\r\nHaving saved the tensors the print output yields:\r\n\r\n```\r\ntorch version 1.12.1+cu102\r\ntorchaudio version 0.12.1+cu102\r\nArray 11 Shape (122880,)\r\nArray 11 abs sum 1396.4988\r\nArray 12 Shape (123264,)\r\nArray 12 abs sum 1396.5193\r\n```\r\n\r\n## Expected results\r\ntorchaudio 0.11.0 and 0.12.1 should yield the same results.\r\n\r\n## Actual results\r\nSee above.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.1.dev0\r\n- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
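A small follow-up sketch (not part of the issue): since the 0.12 release notes quoted above say the new FFmpeg-based decoder no longer pads the output, one quick diagnostic is to compare the two saved arrays on their overlapping prefix, which assumes the old decoder's extra padded samples sit at the end. The file names are the ones the script above writes and are machine-specific:

```python
import numpy as np

# Arrays saved by the reproduction script above under the two torchaudio versions.
array_torch_11 = np.load("audio_sample_1.11.0+cu102.npy")
array_torch_12 = np.load("audio_sample_1.12.1+cu102.npy")

# Separate the length change (padding) from actual sample-value differences.
n = min(len(array_torch_11), len(array_torch_12))
print("length difference (padding):", abs(len(array_torch_11) - len(array_torch_12)))
print("max abs diff on common prefix:", np.max(np.abs(array_torch_11[:n] - array_torch_12[:n])))
```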
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4888","id":1349447521,"node_id":"I_kwDODunzps5Qbu9h","number":4888,"title":"Dataset Viewer issue for subjqa","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It's a bug in the viewer, thanks for reporting it. 
We're hoping to update to a new version in the next few days which should fix it.","Fixed \r\n\r\nhttps:\/\/huggingface.co\/datasets\/subjqa\r\n"],"created_at":1661347580000,"updated_at":1662625422000,"closed_at":1662625422000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/subjqa\n\n### Description\n\nGetting the following error for this dataset:\r\n\r\n```\r\nStatus code: 500\r\nException: Status500Error\r\nMessage: 2 or more items returned, instead of 1\r\n```\r\n\r\nNot sure what's causing it though \ud83e\udd14 \n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987","id":1376006477,"node_id":"PR_kwDODunzps4_GlIu","number":4987,"title":"Embed image\/audio data in dl_and_prepare parquet","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663337367000,"updated_at":1663345487000,"closed_at":1663345355000,"author_association":"MEMBER","active_lock_reason":null,"body":"Embed the bytes of the image or audio files in the Parquet files directly, instead of having a \"path\" that points to a local file.\r\n\r\nIndeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4987\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4987","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4987.patch","merged_at":1663345355000},"is_pull_request":true} 
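A minimal pyarrow sketch of the embedding idea described in this PR (an illustration, not the PR's actual code); the image file names are hypothetical:

```python
import pyarrow as pa
import pyarrow.parquet as pq

paths = ["img1.png", "img2.png"]  # hypothetical local image files

def embed(path):
    # Read the raw bytes so the Parquet file is self-contained.
    with open(path, "rb") as f:
        return {"bytes": f.read(), "path": path}

# Each row becomes a struct<bytes, path> instead of a bare path string,
# so consumers of the Parquet file don't need access to the local files.
table = pa.table({"image": [embed(p) for p in paths]})
pq.write_table(table, "dataset.parquet")
```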
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986","id":1375895035,"node_id":"PR_kwDODunzps4_GNSd","number":4986,"title":"[doc] Fix broken snippet that had too many quotes","user":{"login":"tomaarsen","id":37621491,"node_id":"MDQ6VXNlcjM3NjIxNDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37621491?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tomaarsen","html_url":"https:\/\/github.com\/tomaarsen","followers_url":"https:\/\/api.github.com\/users\/tomaarsen\/followers","following_url":"https:\/\/api.github.com\/users\/tomaarsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tomaarsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tomaarsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tomaarsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tomaarsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/tomaarsen\/repos","events_url":"https:\/\/api.github.com\/users\/tomaarsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tomaarsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4986\/en\/process#map):\r\n![image](https:\/\/user-images.githubusercontent.com\/37621491\/190646405-6afa06fa-9eac-48f6-ab30-2677944fb7b6.png)\r\n"],"created_at":1663332067000,"updated_at":1663366341000,"closed_at":1663349534000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hello!\r\n\r\n### Pull request overview\r\n* Fix broken snippet in https:\/\/huggingface.co\/docs\/datasets\/main\/en\/process that has too many quotes\r\n\r\n### Details\r\nThe snippet in question can be found here: https:\/\/huggingface.co\/docs\/datasets\/main\/en\/process#map\r\nThis screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly:\r\n![image](https:\/\/user-images.githubusercontent.com\/37621491\/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png)\r\n\r\nThe change speaks for itself.\r\n\r\nThank you for the detailed documentation, by the way. 
\r\n\r\n- Tom Aarsen\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4986\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4986","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4986.patch","merged_at":1663349534000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985","id":1375807768,"node_id":"PR_kwDODunzps4_F6kU","number":4985,"title":"[WIP] Prefer split patterns from directories over split patterns from filenames","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4985). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1663327240000,"updated_at":1663334541000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"related to https:\/\/github.com\/huggingface\/datasets\/issues\/4895\r\n\r\ntodo:\r\n\r\n- [ ] test","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4985\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4985","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4985.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984","id":1375690330,"node_id":"PR_kwDODunzps4_FhTm","number":4984,"title":"docs: \u270f\ufe0f add links to the Datasets API","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https:\/\/github.com\/huggingface\/datasets-server\/issues\/568"],"created_at":1663320852000,"updated_at":1663333814000,"closed_at":1663333653000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I added some links to the Datasets API in the docs. See https:\/\/github.com\/huggingface\/datasets-server\/pull\/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.\r\n\r\nI'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. 
cc @lhoestq @julien-c @albertvillanova @stevhliu.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4984\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4984","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4984.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4983","id":1375667654,"node_id":"I_kwDODunzps5R_wXG","number":4983,"title":"How to convert torch.utils.data.Dataset to huggingface dataset?","user":{"login":"DEROOCE","id":77595952,"node_id":"MDQ6VXNlcjc3NTk1OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77595952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DEROOCE","html_url":"https:\/\/github.com\/DEROOCE","followers_url":"https:\/\/api.github.com\/users\/DEROOCE\/followers","following_url":"https:\/\/api.github.com\/users\/DEROOCE\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DEROOCE\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DEROOCE\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DEROOCE\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DEROOCE\/orgs","repos_url":"https:\/\/api.github.com\/users\/DEROOCE\/repos","events_url":"https:\/\/api.github.com\/users\/DEROOCE\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DEROOCE\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in range(len(torch_dataset)):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```"],"created_at":1663319710000,"updated_at":1663342106000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to a huggingface dataset. 
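For concreteness, a self-contained toy version of the generator-based conversion suggested in the comment above; the `Squares` dataset is hypothetical, and `Dataset.from_generator` requires a recent `datasets` release:

```python
from datasets import Dataset
from torch.utils.data import Dataset as TorchDataset

class Squares(TorchDataset):  # hypothetical map-style torch dataset
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {"x": idx, "x_squared": idx**2}  # examples must be dictionaries

torch_dataset = Squares()

def gen():
    for idx in range(len(torch_dataset)):
        yield torch_dataset[idx]

hf_dataset = Dataset.from_generator(gen)
print(hf_dataset[0])  # {'x': 0, 'x_squared': 0}
```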
However, there is a way to convert a huggingface dataset to a `torch.utils.data.Dataset`, like below:\r\n```python\r\nfrom datasets import Dataset\r\ndata = [[1, 2],[3, 4]]\r\nds = Dataset.from_dict({\"data\": data})\r\nds = ds.with_format(\"torch\")\r\nds[0]\r\nds[:2]\r\n```\r\nSo is there something I missed, or is there really no function to convert `torch.utils.data.Dataset` to a huggingface dataset? If so, is there any way to do this conversion?\r\nThanks.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4983\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4982","id":1375604693,"node_id":"I_kwDODunzps5R_g_V","number":4982,"title":"Create dataset_infos.json with VALIDATION and TEST splits","user":{"login":"skalinin","id":26695348,"node_id":"MDQ6VXNlcjI2Njk1MzQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26695348?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skalinin","html_url":"https:\/\/github.com\/skalinin","followers_url":"https:\/\/api.github.com\/users\/skalinin\/followers","following_url":"https:\/\/api.github.com\/users\/skalinin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skalinin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skalinin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skalinin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skalinin\/orgs","repos_url":"https:\/\/api.github.com\/users\/skalinin\/repos","events_url":"https:\/\/api.github.com\/users\/skalinin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skalinin\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1663316479000,"updated_at":1663323163000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"The problem is described in that [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4895#issuecomment-1247975569). \r\n\r\n> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:\r\n> ValueError: Unknown split \"test\". Should be one of ['train'].\r\n> \r\n> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN\r\n> \r\n> You can find the code here: https:\/\/huggingface.co\/datasets\/sberbank-ai\/Peter\/tree\/add_splits (add_splits branch)\r\n\r\nI tried to clear the cache folder, then I got another error. 
I run:\r\n\r\n```\r\nrm -r ~\/.cache\/huggingface \r\ndatasets-cli test Peter.py --save_infos --all_configs\r\n```\r\n\r\nThe error message:\r\n```\r\nUsing custom data configuration default\r\nTesting builder 'default' (1\/1)\r\nDownloading and preparing dataset peter\/default to \/Users\/kalinin\/.cache\/huggingface\/datasets\/peter\/default\/0.0.0\/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4\/4 [00:00<00:00, 5160.63it\/s]\r\nExtracting data files: 0%| | 0\/4 [00:00\r\n sys.exit(main())\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/commands\/datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/commands\/test.py\", line 137, in run\r\n builder.download_and_prepare(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1227, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 771, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/Users\/kalinin\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/Peter\/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d\/Peter.py\", line 23, in _split_generators\r\n data_files = dl_manager.download_and_extract(_URLS)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py\", line 431, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py\", line 403, in extract\r\n extracted_paths = map_nested(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 393, in map_nested\r\n mapped = [\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 394, in \r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 330, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 213, in cached_path\r\n output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/extract.py\", line 46, in extract\r\n self.extractor.extract(input_path, output_path, extractor_format)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/extract.py\", line 263, in extract\r\n with FileLock(lock_path):\r\n File 
\"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 399, in __init__\r\n max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 328, in __del__\r\n self.release(force=True)\r\n File \"\/usr\/local\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 303, in release\r\n with self._thread_lock:\r\nAttributeError: 'UnixFileLock' object has no attribute '_thread_lock'\r\nExtracting data files: 0%| | 0\/4 [00:00 1 Dataset.from_dict({\"x\": [1.0, 2.0, 3.0]}, features=Features(x=Value(\"float16\")))\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)\r\n 865 mapping = features.encode_batch(mapping)\r\n 866 mapping = {\r\n 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)\r\n 868 for col, data in mapping.items()\r\n 869 }\r\n--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n 871 if info.features is None:\r\n 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)\r\n 734 @classmethod\r\n 735 def from_pydict(cls, *args, **kwargs):\r\n 736 \"\"\"\r\n 737 Construct a Table from Arrow arrays or columns\r\n 738 \r\n (...)\r\n 748 :class:`datasets.table.Table`:\r\n 749 \"\"\"\r\n--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/table.pxi:3648, in pyarrow.lib.Table.from_pydict()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/table.pxi:5174, in pyarrow.lib._from_pydict()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:343, in pyarrow.lib.asarray()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:231, in pyarrow.lib.array()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)\r\n 192 # otherwise we can finally use the user's type\r\n 193 elif type is not None:\r\n 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image\r\n 195 # Also, when trying type \"string\", we don't want to convert integers or floats to \"string\".\r\n 196 # We only do it if trying_type is False - since this is what the user asks for.\r\n--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n 198 return out\r\n 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1683, in _wrap_for_chunked_arrays..wrapper(array, *args, **kwargs)\r\n 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1682 else:\r\n-> 1683 return func(array, *args, **kwargs)\r\n\r\nFile 
~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)\r\n 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)\r\n 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):\r\n-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n 1854 raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1683, in _wrap_for_chunked_arrays..wrapper(array, *args, **kwargs)\r\n 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1682 else:\r\n-> 1683 return func(array, *args, **kwargs)\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/datasets\/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)\r\n 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\r\n 1761 raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\n-> 1762 return array.cast(pa_type)\r\n 1763 raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{pa_type}\")\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/array.pxi:919, in pyarrow.lib.Array.cast()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/compute.py:389, in cast(arr, target_type, safe, options)\r\n 387 else:\r\n 388 options = CastOptions.safe(target_type)\r\n--> 389 return call_function(\"cast\", [arr], options)\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/_compute.pyx:560, in pyarrow._compute.call_function()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/_compute.pyx:355, in pyarrow._compute.Function.call()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nFile ~\/scratch\/scratch-env-39\/.venv\/lib\/python3.9\/site-packages\/pyarrow\/error.pxi:121, in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4981\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4981\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4980","id":1374868083,"node_id":"I_kwDODunzps5R8tJz","number":4980,"title":"Make `pyarrow` 
optional","user":{"login":"KOLANICH","id":240344,"node_id":"MDQ6VXNlcjI0MDM0NA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/240344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KOLANICH","html_url":"https:\/\/github.com\/KOLANICH","followers_url":"https:\/\/api.github.com\/users\/KOLANICH\/followers","following_url":"https:\/\/api.github.com\/users\/KOLANICH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KOLANICH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KOLANICH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KOLANICH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KOLANICH\/orgs","repos_url":"https:\/\/api.github.com\/users\/KOLANICH\/repos","events_url":"https:\/\/api.github.com\/users\/KOLANICH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KOLANICH\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The whole datasets library is pretty much a wrapper around pyarrow (just take a look at some of the source for a Dataset) https:\/\/github.com\/huggingface\/datasets\/blob\/51aef08ad7053c0bfe8f9a961207b26df15850d3\/src\/datasets\/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite \/ a different library with minimal functionality (datasets-lite ?)","Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded or cached. ","Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I use `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` for now.\r\n"],"created_at":1663263483000,"updated_at":1663349027000,"closed_at":1663349027000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nIs `pyarrow` really needed for every dataset?\r\n\r\n**Describe the solution you'd like**\r\nIt is made optional.\r\n\r\n**Describe alternatives you've considered**\r\nLikely, no.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4980\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979","id":1374820758,"node_id":"PR_kwDODunzps4_CouM","number":4979,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663260663000,"updated_at":1663262062000,"closed_at":1663261929000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908\r\n- #4921\r\n- #4931","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4979\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4979.patch","merged_at":1663261929000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978","id":1374271504,"node_id":"PR_kwDODunzps4_Axnh","number":4978,"title":"Update IndicGLUE download links","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663236357000,"updated_at":1663279220000,"closed_at":1663279054000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4978\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4978.patch","merged_at":1663279054000},"is_pull_request":true} 
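For reference, a minimal sketch of the streaming-mode workaround suggested in the #4980 thread above. This is only an illustration under assumptions: the dataset name "squad" and the split are placeholders; the real point is that `load_dataset(..., streaming=True)` yields examples lazily instead of building a local pyarrow-backed cache.

```python
from datasets import load_dataset

# Streaming mode: nothing is materialized into a local Arrow cache up front;
# examples are fetched lazily as you iterate over the dataset.
streamed = load_dataset("squad", split="train", streaming=True)

# The result behaves like a generator of dicts:
first_example = next(iter(streamed))
print(first_example.keys())
```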
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4977","id":1372962157,"node_id":"I_kwDODunzps5R1b1t","number":4977,"title":"Providing dataset size","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets\/ --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926","Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API","Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests\/additional coding) would be really useful :hugs: "],"created_at":1663160967000,"updated_at":1663257838000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nEspecially for big datasets like [LAION](https:\/\/huggingface.co\/datasets\/laion\/laion2B-en\/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).\r\n\r\n**Describe the solution you'd like**\r\nAuto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).\r\n\r\n**Describe alternatives you've considered**\r\nPeople should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: \r\n\r\n**Additional context**\r\nMentioned to @lhoestq \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4977\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4976","id":1372322382,"node_id":"I_kwDODunzps5Ry_pO","number":4976,"title":"Hope to adapt Python3.9 as soon as possible","user":{"login":"RedHeartSecretMan","id":74012141,"node_id":"MDQ6VXNlcjc0MDEyMTQx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/74012141?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RedHeartSecretMan","html_url":"https:\/\/github.com\/RedHeartSecretMan","followers_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/followers","following_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/orgs","repos_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/repos","events_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RedHeartSecretMan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?","There is this related issue already: https:\/\/github.com\/huggingface\/datasets\/issues\/4113\r\nAnd I guess we need a CI job for 3.9 ^^"],"created_at":1663130542000,"updated_at":1663256697000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nAdd any other context about the feature request here.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4976\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975","id":1371703691,"node_id":"PR_kwDODunzps4-4NXX","number":4975,"title":"Add `fn_kwargs` param to `IterableDataset.map`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663085945000,"updated_at":1663087667000,"closed_at":1663087534000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add the `fn_kwargs` parameter to `IterableDataset.map`.\r\n\r\n(\"Resolves\" 
https:\/\/discuss.huggingface.co\/t\/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free\/22780\/3)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4975\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4975","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4975.patch","merged_at":1663087534000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974","id":1371682020,"node_id":"PR_kwDODunzps4-4Iri","number":4974,"title":"[GH->HF] Part 2: Remove all dataset scripts from github","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4974). All of your documentation changes will be reflected on that endpoint.","So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? 
Maybe you guys could just redirect metrics to that library."],"created_at":1663084872000,"updated_at":1663527425000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Now that all the datasets live on the Hub we can remove the \/datasets directory that contains all the dataset scripts of this repository\r\n\r\nNeeds https:\/\/github.com\/huggingface\/datasets\/pull\/4973 to be merged first\r\nand PR to be enabled on the Hub for non-namespaced datasets","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4974\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4974.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973","id":1371600074,"node_id":"PR_kwDODunzps4-33JW","number":4973,"title":"[GH->HF] Load datasets from the Hub","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Duplicate of:\r\n- #4059"],"created_at":1663081301000,"updated_at":1663255611000,"closed_at":1663255466000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently datasets with no namespace (e.g. 
squad, glue) are loaded from github.\r\n\r\nIn this PR I changed this logic to use the Hugging Face Hub instead.\r\n\r\nThis is the first step in removing all the dataset scripts in this repository\r\n\r\nrelated to discussions in https:\/\/github.com\/huggingface\/datasets\/pull\/4059 (I should have continued from this PR actually)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4973\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4973","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4973.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972","id":1371443306,"node_id":"PR_kwDODunzps4-3VVF","number":4972,"title":"Fix map batched with torch output","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4972). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1663074994000,"updated_at":1663256568000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Reported in https:\/\/discuss.huggingface.co\/t\/typeerror-when-applying-map-after-set-format-type-torch\/23067\/2\r\n\r\nCurrently it fails if one uses batched `map` and the map function returns a torch tensor.\r\n\r\nI fixed it for torch, tf, jax and pandas series.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4972\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4972","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4972.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971","id":1370319516,"node_id":"PR_kwDODunzps4-zk3g","number":4971,"title":"Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1663006104000,"updated_at":1663077068000,"closed_at":1663076925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.\r\n\r\nThis makes the behavior inconsistent with `IterableDataset.map`.\r\n \r\n(It seems this issue was introduced by mistake in https:\/\/github.com\/huggingface\/datasets\/pull\/2246) \r\n\r\nFix 
https:\/\/github.com\/huggingface\/datasets\/issues\/4858","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4971\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4971","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4971.patch","merged_at":1663076924000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970","id":1369433074,"node_id":"PR_kwDODunzps4-wkY2","number":4970,"title":"Support streaming nli_tr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662968925000,"updated_at":1662972304000,"closed_at":1662972188000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming nli_tr dataset.\r\n\r\nThis PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding.\r\n\r\nFix 
#3186.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4970\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4970","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4970.patch","merged_at":1662972188000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969","id":1369334740,"node_id":"PR_kwDODunzps4-wPOk","number":4969,"title":"Fix data URL and metadata of vivos dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662963154000,"updated_at":1662966975000,"closed_at":1662966859000,"author_association":"MEMBER","active_lock_reason":null,"body":"After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https:\/\/doi.org\/10.5281\/zenodo.7068130\r\n\r\nThis PR updates their data URL and some metadata (homepage, citation and license).\r\n\r\nFix 
#4936.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4969\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4969","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4969.patch","merged_at":1662966859000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968","id":1369312877,"node_id":"PR_kwDODunzps4-wKkw","number":4968,"title":"Support streaming compguesswhat dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662961344000,"updated_at":1662969606000,"closed_at":1662969486000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming `compguesswhat` dataset.\r\n\r\nFix #3191.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4968\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4968","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4968.patch","merged_at":1662969486000},"is_pull_request":true} 
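To illustrate the `codecs.open` removal described in #4970 above, here is a hedged before/after sketch; it is not the actual diff from that PR, and the file name and encoding are placeholders.

```python
import codecs

# Legacy pattern (what the PR description says was removed):
with codecs.open("train.tsv", "r", encoding="utf-8") as f:
    legacy_lines = f.read().splitlines()

# Replacement: the built-in open accepts an encoding argument directly.
with open("train.tsv", "r", encoding="utf-8") as f:
    lines = f.read().splitlines()
```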
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967","id":1369092452,"node_id":"PR_kwDODunzps4-vbS-","number":4967,"title":"Strip \"\/\" in local dataset path to avoid empty dataset name error","user":{"login":"apohllo","id":40543,"node_id":"MDQ6VXNlcjQwNTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apohllo","html_url":"https:\/\/github.com\/apohllo","followers_url":"https:\/\/api.github.com\/users\/apohllo\/followers","following_url":"https:\/\/api.github.com\/users\/apohllo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apohllo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apohllo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apohllo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apohllo\/orgs","repos_url":"https:\/\/api.github.com\/users\/apohllo\/repos","events_url":"https:\/\/api.github.com\/users\/apohllo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apohllo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662937756000,"updated_at":1662996778000,"closed_at":1662996638000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4967\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4967","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4967.patch","merged_at":1662996638000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4965","id":1368661002,"node_id":"I_kwDODunzps5RlBwK","number":4965,"title":"[Apple M1] MemoryError: Cannot allocate write+execute memory for 
ffi.callback()","user":{"login":"hoangtnm","id":35718590,"node_id":"MDQ6VXNlcjM1NzE4NTkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35718590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hoangtnm","html_url":"https:\/\/github.com\/hoangtnm","followers_url":"https:\/\/api.github.com\/users\/hoangtnm\/followers","following_url":"https:\/\/api.github.com\/users\/hoangtnm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hoangtnm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hoangtnm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hoangtnm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hoangtnm\/orgs","repos_url":"https:\/\/api.github.com\/users\/hoangtnm\/repos","events_url":"https:\/\/api.github.com\/users\/hoangtnm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hoangtnm\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.","Hi @mariosasko, can you share how you installed `soundfile` on your Mac M1?"],"created_at":1662825349000,"updated_at":1663426281000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI'm trying to run `cast_column(\"audio\", Audio())` on an Apple M1 Pro, but it doesn't seem to work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom pathlib import Path\r\n\r\nfrom datasets import Audio, load_dataset\r\n\r\nDATA_DIR = Path(\"data\")  # placeholder: the original report does not define DATA_DIR\r\n\r\ndataset = load_dataset(\"csv\", data_files=\".\/train.csv\")[\"train\"]\r\ndataset = dataset.map(lambda x: {\"audio\": str(DATA_DIR \/ \"audio\" \/ x[\"audio\"])})\r\ndataset = dataset.cast_column(\"audio\", Audio())\r\ndataset[0]\r\n```\r\n\r\n## Expected results\r\n```\r\n{'audio': {'bytes': None,\r\n 'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c\/en-US~JOINT_ACCOUNT\/602ba55abb1e6d0fbce92065.wav'},\r\n 'english_transcription': 'I would like to set up a joint account with my partner',\r\n 'intent_class': 11,\r\n 'lang_id': 4,\r\n 'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c\/en-US~JOINT_ACCOUNT\/602ba55abb1e6d0fbce92065.wav',\r\n 'transcription': 'I would like to set up a joint account with my partner'}\r\n```\r\n\r\n\r\n## Actual results\r\n````---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\nInput In [6], in ()\r\n----> 1 dataset[0]\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)\r\n 2163 def __getitem__(self, key): # noqa: F811\r\n 2164 \"\"\"Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).\"\"\"\r\n-> 2165 return self._getitem(\r\n 2166 key,\r\n 2167 )\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)\r\n 2148 formatter = 
get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)\r\n 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n-> 2150 formatted_output = format_table(\r\n 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n 2152 )\r\n 2153 return formatted_output\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)\r\n 530 python_formatter = PythonFormatter(features=None)\r\n 531 if format_columns is None:\r\n--> 532 return formatter(pa_table, query_type=query_type)\r\n 533 elif query_type == \"column\":\r\n 534 if key in format_columns:\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)\r\n 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:\r\n 280 if query_type == \"row\":\r\n--> 281 return self.format_row(pa_table)\r\n 282 elif query_type == \"column\":\r\n 283 return self.format_column(pa_table)\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:312, in PythonFormatter.format_row(self, pa_table)\r\n 310 row = self.python_arrow_extractor().extract_row(pa_table)\r\n 311 if self.decoded:\r\n--> 312 row = self.python_features_decoder.decode_row(row)\r\n 313 return row\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)\r\n 220 def decode_row(self, row: dict) -> dict:\r\n--> 221 return self.features.decode_example(row) if self.features else row\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)\r\n 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):\r\n 1635 \"\"\"Decode example with custom feature decoding.\r\n 1636 \r\n 1637 Args:\r\n (...)\r\n 1644 :obj:`dict[str, Any]`\r\n 1645 \"\"\"\r\n-> 1647 return {\r\n 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n 1649 if self._column_requires_decoding[column_name]\r\n 1650 else value\r\n 1651 for column_name, (feature, value) in zip_dict(\r\n 1652 {key: value for key, value in self.items() if key in example}, example\r\n 1653 )\r\n 1654 }\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1648, in (.0)\r\n 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):\r\n 1635 \"\"\"Decode example with custom feature decoding.\r\n 1636 \r\n 1637 Args:\r\n (...)\r\n 1644 :obj:`dict[str, Any]`\r\n 1645 \"\"\"\r\n 1647 return {\r\n-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n 1649 if self._column_requires_decoding[column_name]\r\n 1650 else value\r\n 1651 for column_name, (feature, value) in zip_dict(\r\n 1652 {key: value for key, value in self.items() if key in example}, example\r\n 1653 )\r\n 1654 }\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)\r\n 
1257 # Object with special decoding:\r\n 1258 elif isinstance(schema, (Audio, Image)):\r\n 1259 # we pass the token to read and decode files from private repositories in streaming mode\r\n-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None\r\n 1261 return obj\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)\r\n 154 array, sampling_rate = self._decode_non_mp3_file_like(file)\r\n 155 else:\r\n--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)\r\n 157 return {\"path\": path, \"array\": array, \"sampling_rate\": sampling_rate}\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/datasets\/features\/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)\r\n 254 use_auth_token = None\r\n 256 with xopen(path, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)\r\n 258 return array, sampling_rate\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/util\/decorators.py:88, in deprecate_positional_args.._inner_deprecate_positional_args..inner_f(*args, **kwargs)\r\n 86 extra_args = len(args) - len(all_args)\r\n 87 if extra_args <= 0:\r\n---> 88 return f(*args, **kwargs)\r\n 90 # extra_args > 0\r\n 91 args_msg = [\r\n 92 \"{}={}\".format(name, arg)\r\n 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])\r\n 94 ]\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/core\/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)\r\n 161 else:\r\n 162 # Otherwise try soundfile first, and then fall back if necessary\r\n 163 try:\r\n--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)\r\n 166 except RuntimeError as exc:\r\n 167 # If soundfile failed, try audioread instead\r\n 168 if isinstance(path, (str, pathlib.PurePath)):\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/librosa\/core\/audio.py:195, in __soundfile_load(path, offset, duration, dtype)\r\n 192 context = path\r\n 193 else:\r\n 194 # Otherwise, create the soundfile object\r\n--> 195 context = sf.SoundFile(path)\r\n 197 with context as sf_desc:\r\n 198 sr_native = sf_desc.samplerate\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)\r\n 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)\r\n 1178 elif _has_virtual_io_attrs(file, mode_int):\r\n-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),\r\n 1180 mode_int, self._info, _ffi.NULL)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n\r\nFile ~\/miniconda3\/envs\/rodan\/lib\/python3.8\/site-packages\/soundfile.py:1197, in 
SoundFile._init_virtual_io(self, file)\r\n 1194 def _init_virtual_io(self, file):\r\n 1195 \"\"\"Initialize callback functions for sf_open_virtual().\"\"\"\r\n 1196 @_ffi.callback(\"sf_vio_get_filelen\")\r\n-> 1197 def vio_get_filelen(user_data):\r\n 1198 curr = file.tell()\r\n 1199 file.seek(0, SEEK_END)\r\n\r\nMemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https:\/\/cffi.readthedocs.io\/en\/latest\/using.html#callbacks\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4965\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4964","id":1368617322,"node_id":"I_kwDODunzps5Rk3Fq","number":4964,"title":"Column of arrays (2D+) are using unreasonably high memory","user":{"login":"vigsterkr","id":30353,"node_id":"MDQ6VXNlcjMwMzUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30353?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vigsterkr","html_url":"https:\/\/github.com\/vigsterkr","followers_url":"https:\/\/api.github.com\/users\/vigsterkr\/followers","following_url":"https:\/\/api.github.com\/users\/vigsterkr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vigsterkr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vigsterkr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vigsterkr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vigsterkr\/orgs","repos_url":"https:\/\/api.github.com\/users\/vigsterkr\/repos","events_url":"https:\/\/api.github.com\/users\/vigsterkr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vigsterkr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above."],"created_at":1662815242000,"updated_at":1662815297000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage.\r\n\r\n## Steps to 
reproduce the bug\r\n```python\r\nfrom datasets import Dataset, Features, Array2D, Array3D\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\n```\r\n\r\nThe code above will use about 10 GB of RAM while constructing the `dataset` object.\r\n\r\nThe code below will use roughly the same amount of memory (and time) when actually accessing the data of that column.\r\n```python\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data})\r\ndataset[column_name]\r\n```\r\n\r\n## Expected results\r\nSome memory overhead is expected, but nothing of this magnitude, and certainly not the runtime overhead that currently occurs.\r\n\r\n## Actual results\r\nEnormous memory and runtime overhead.\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: macOS-12.5.1-arm64-arm-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4964\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4963","id":1368201188,"node_id":"I_kwDODunzps5RjRfk","number":4963,"title":"Dataset without script does not support regular JSON data file","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @julien-c,\r\n\r\nOut of the box, we only support JSON Lines (NDJSON) data files, but your data file is a regular JSON file (a conversion sketch follows below). 
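For illustration, a minimal sketch of converting such a file to JSON Lines so it can be loaded without a script (this assumes the file holds a top-level JSON array of objects; the paths are hypothetical):

```python
import json

# Hypothetical input: a regular JSON file containing a top-level array of objects.
with open("data.json", encoding="utf-8") as f:
    records = json.load(f)

# Write one JSON object per line (NDJSON), which the packaged loader supports.
with open("data.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```
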
The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "],"created_at":1662749133000,"updated_at":1662971727000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/julien-c\/label-studio-my-dogs\n\n### Description\n\n\"image\"\r\n\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4963\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962","id":1368155365,"node_id":"PR_kwDODunzps4-sh-o","number":4962,"title":"Update setup.py","user":{"login":"DCNemesis","id":3616964,"node_id":"MDQ6VXNlcjM2MTY5NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616964?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DCNemesis","html_url":"https:\/\/github.com\/DCNemesis","followers_url":"https:\/\/api.github.com\/users\/DCNemesis\/followers","following_url":"https:\/\/api.github.com\/users\/DCNemesis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DCNemesis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DCNemesis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DCNemesis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DCNemesis\/orgs","repos_url":"https:\/\/api.github.com\/users\/DCNemesis\/repos","events_url":"https:\/\/api.github.com\/users\/DCNemesis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DCNemesis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/4961#issuecomment-1243376247","Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue."],"created_at":1662746276000,"updated_at":1662993184000,"closed_at":1662993184000,"author_association":"NONE","active_lock_reason":null,"body":"exclude broken version of fsspec. 
See the [related issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4961)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4962\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4962","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4962.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4961","id":1368124033,"node_id":"I_kwDODunzps5Ri-qB","number":4961,"title":"fsspec 2022.8.2 breaks xopen in streaming mode","user":{"login":"DCNemesis","id":3616964,"node_id":"MDQ6VXNlcjM2MTY5NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616964?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DCNemesis","html_url":"https:\/\/github.com\/DCNemesis","followers_url":"https:\/\/api.github.com\/users\/DCNemesis\/followers","following_url":"https:\/\/api.github.com\/users\/DCNemesis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DCNemesis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DCNemesis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DCNemesis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DCNemesis\/orgs","repos_url":"https:\/\/api.github.com\/users\/DCNemesis\/repos","events_url":"https:\/\/api.github.com\/users\/DCNemesis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DCNemesis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.","Opened [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/4962) to address this.","Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https:\/\/github.com\/huggingface\/transformers\/pull\/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n","@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. 
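As a quick sanity check (a minimal sketch, not taken from the thread), the installed `fsspec` version can be inspected before retrying streaming:

```python
import fsspec

# Per the thread, 2022.8.0 and 2022.8.1 were yanked; the fix landed in 2022.8.2.
print(fsspec.__version__)
if fsspec.__version__ in ("2022.8.0", "2022.8.1"):
    raise RuntimeError("Broken fsspec release; upgrade with: pip install -U fsspec")
```
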
It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https:\/\/github.com\/googlecolab\/colabtools\/issues\/3055) on their end too. ","Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.","Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https:\/\/github.com\/googlecolab\/colabtools\/issues\/3055#issuecomment-1244019010"],"created_at":1662744415000,"updated_at":1663004750000,"closed_at":1662993125000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nimport datasets\r\n\r\ndata = datasets.load_dataset('MLCommons\/ml_spoken_words', 'id_wav', split='train', streaming=True)\r\n\r\n```\r\n\r\n## Expected results\r\nDataset should load as iterator.\r\n\r\n## Actual results\r\n```\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1737 # Return iterable dataset in case of streaming\r\n 1738 if streaming:\r\n-> 1739 return builder_instance.as_streaming_dataset(split=split)\r\n 1740 \r\n 1741 # Some datasets are already processed on the HF google storage\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py](https:\/\/localhost:8080\/#) in as_streaming_dataset(self, split, base_path)\r\n 1023 )\r\n 1024 self._check_manual_download(dl_manager)\r\n-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1026 # By default, return all splits\r\n 1027 if split is None:\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _split_generators(self, dl_manager)\r\n 182 name=datasets.Split.TRAIN,\r\n 183 gen_kwargs={\r\n--> 184 \"audio_archives\": [download_audio(split=\"train\", lang=lang) for lang in self.config.languages],\r\n 185 \"local_audio_archives_paths\": [download_extract_audio(split=\"train\", lang=lang) for lang in\r\n 186 self.config.languages] if not dl_manager.is_streaming else None,\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in (.0)\r\n 182 name=datasets.Split.TRAIN,\r\n 183 gen_kwargs={\r\n--> 184 \"audio_archives\": [download_audio(split=\"train\", lang=lang) for lang in self.config.languages],\r\n 185 \"local_audio_archives_paths\": [download_extract_audio(split=\"train\", lang=lang) for lang in\r\n 186 self.config.languages] if not dl_manager.is_streaming else None,\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _download_audio_archives(dl_manager, lang, format, split)\r\n 267 # for streaming case\r\n 268 def _download_audio_archives(dl_manager, lang, 
format, split):\r\n--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)\r\n 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MLCommons--ml_spoken_words\/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b\/ml_spoken_words.py](https:\/\/localhost:8080\/#) in _download_audio_archives_paths(dl_manager, lang, format, split)\r\n 251 n_files_path = dl_manager.download(n_files_url)\r\n 252 \r\n--> 253 with open(n_files_path, \"r\", encoding=\"utf-8\") as file:\r\n 254 n_files = int(file.read().strip()) # the file contains a number of archives\r\n 255 \r\n\r\nValueError: I\/O operation on closed file.\r\n```\r\n\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4961\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4960","id":1368035159,"node_id":"I_kwDODunzps5Rio9X","number":4960,"title":"BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'","user":{"login":"DSLituiev","id":8426290,"node_id":"MDQ6VXNlcjg0MjYyOTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8426290?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DSLituiev","html_url":"https:\/\/github.com\/DSLituiev","followers_url":"https:\/\/api.github.com\/users\/DSLituiev\/followers","following_url":"https:\/\/api.github.com\/users\/DSLituiev\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DSLituiev\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DSLituiev\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DSLituiev\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DSLituiev\/orgs","repos_url":"https:\/\/api.github.com\/users\/DSLituiev\/repos","events_url":"https:\/\/api.github.com\/users\/DSLituiev\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DSLituiev\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Following worked:\r\n\r\n```\r\ndata_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/\"\r\nbioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir, 
name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps\/bioasq_task_b\"`), before it would not even mention that it requires `name` argument","Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https:\/\/huggingface.co\/datasets\/aps\/bioasq_task_b\/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"],"created_at":1662739603000,"updated_at":1663059063000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to load a dataset from drive and running into an error. \r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndata_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/BioASQ-training9b\"\r\nbioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir)\r\n```\r\n\r\n## Actual results\r\n\r\n`AttributeError: 'BuilderConfig' object has no attribute 'schema'`\r\n\r\n
\r\n\r\n```\r\nUsing custom data configuration default-a1ca3e05be5abf2f\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nInput In [8], in ()\r\n 1 data_dir = \"\/Users\/dlituiev\/repos\/datasets\/bioasq\/BioASQ-training9b\"\r\n----> 2 bioasq_task_b = load_dataset(\"aps\/bioasq_task_b\", data_dir=data_dir)\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1720 ignore_verifications = ignore_verifications or save_infos\r\n 1722 # Create a dataset builder\r\n-> 1723 builder_instance = load_dataset_builder(\r\n 1724 path=path,\r\n 1725 name=name,\r\n 1726 data_dir=data_dir,\r\n 1727 data_files=data_files,\r\n 1728 cache_dir=cache_dir,\r\n 1729 features=features,\r\n 1730 download_config=download_config,\r\n 1731 download_mode=download_mode,\r\n 1732 revision=revision,\r\n 1733 use_auth_token=use_auth_token,\r\n 1734 **config_kwargs,\r\n 1735 )\r\n 1737 # Return iterable dataset in case of streaming\r\n 1738 if streaming:\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1523 raise ValueError(error_msg)\r\n 1525 # Instantiate the dataset builder\r\n-> 1526 builder_instance: DatasetBuilder = builder_cls(\r\n 1527 cache_dir=cache_dir,\r\n 1528 config_name=config_name,\r\n 1529 data_dir=data_dir,\r\n 1530 data_files=data_files,\r\n 1531 hash=hash,\r\n 1532 features=features,\r\n 1533 use_auth_token=use_auth_token,\r\n 1534 **builder_kwargs,\r\n 1535 **config_kwargs,\r\n 1536 )\r\n 1538 return builder_instance\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)\r\n 1153 def __init__(self, *args, writer_batch_size=None, **kwargs):\r\n-> 1154 super().__init__(*args, **kwargs)\r\n 1155 # Batch size used by the ArrowWriter\r\n 1156 # It defines the number of samples that are kept in memory before writing them\r\n 1157 # and also the length of the arrow chunks\r\n 1158 # None means that the ArrowWriter will use its default value\r\n 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE\r\n\r\nFile ~\/opt\/anaconda3\/envs\/spacy3\/lib\/python3.10\/site-packages\/datasets\/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)\r\n 305 if info is None:\r\n 306 info = self.get_exported_dataset_info()\r\n--> 307 info.update(self._info())\r\n 308 info.builder_name = self.name\r\n 309 info.config_name = self.config.name\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/aps--bioasq_task_b\/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e\/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)\r\n 474 def _info(self):\r\n 475 \r\n 476 # BioASQ Task B source schema\r\n--> 477 if self.config.schema == \"source\":\r\n 478 features = datasets.Features(\r\n 479 {\r\n 480 \"id\": datasets.Value(\"string\"),\r\n (...)\r\n 504 }\r\n 505 )\r\n 506 # 
simplified schema for QA tasks\r\n\r\nAttributeError: 'BuilderConfig' object has no attribute 'schema'\r\n```\r\n\r\n<\/details>\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.10.4\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4960\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959","id":1367924429,"node_id":"PR_kwDODunzps4-rx6l","number":4959,"title":"Fix data URLs of compguesswhat dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662734170000,"updated_at":1662739294000,"closed_at":1662739144000,"author_association":"MEMBER","active_lock_reason":null,"body":"After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them:\r\n- https:\/\/github.com\/CompGuessWhat\/compguesswhat.github.io\/issues\/1\r\n\r\nThis PR updates their data URLs in our loading script.\r\n\r\nRelated to:\r\n- 
#3191","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4959\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4959","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4959.patch","merged_at":1662739144000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4958","id":1367695376,"node_id":"I_kwDODunzps5RhWAQ","number":4958,"title":"ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/2.4.0\/datasets\/jsonl\/jsonl.py","user":{"login":"hasakikiki","id":66322047,"node_id":"MDQ6VXNlcjY2MzIyMDQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66322047?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hasakikiki","html_url":"https:\/\/github.com\/hasakikiki","followers_url":"https:\/\/api.github.com\/users\/hasakikiki\/followers","following_url":"https:\/\/api.github.com\/users\/hasakikiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hasakikiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hasakikiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hasakikiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hasakikiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/hasakikiki\/repos","events_url":"https:\/\/api.github.com\/users\/hasakikiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hasakikiki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have solved this problem... The extension of the file should be `.json` not `.jsonl`"],"created_at":1662722995000,"updated_at":1662723524000,"closed_at":1662723524000,"author_association":"NONE","active_lock_reason":null,"body":"Hi,\r\nWhen I use load_dataset from local jsonl files, below error happens, and I type the link into the browser prompting me `404: Not Found`. I download the other `.py` files using the same method and it works. 
It seems that the server is missing the appropriate file, or it is a problem with the code version.\r\n\r\n```\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/2.3.0\/datasets\/jsonl\/jsonl.py (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: \/huggingface\/datasets\/2.3.0\/datasets\/jsonl\/jsonl.py (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 101] Network is unreachable'))\")))\r\n\r\n```\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4958\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957","id":1366532849,"node_id":"PR_kwDODunzps4-nGIk","number":4957,"title":"Add `Dataset.from_generator`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I restarted the builder PR job just in case","_The documentation is not available anymore as the PR was closed or merged._","CI is now green. https:\/\/github.com\/huggingface\/doc-builder\/pull\/296 explains why it failed."],"created_at":1662649705000,"updated_at":1663339595000,"closed_at":1663339458000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. 
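A minimal usage sketch of the new method (illustrative values only):

```python
from datasets import Dataset

def gen():
    # Examples are yielded one at a time, so the full dataset never has to
    # fit in memory while the Arrow data is written.
    for i in range(10):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)
print(ds)
```
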
The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.\r\n\r\nCloses https:\/\/github.com\/huggingface\/datasets\/issues\/4417","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4957\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4957","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4957.patch","merged_at":1663339458000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956","id":1366475160,"node_id":"PR_kwDODunzps4-m5NU","number":4956,"title":"Fix TF tests for 2.10","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662647950000,"updated_at":1662650211000,"closed_at":1662650084000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fixes 
#4953","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4956\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4956","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4956.patch","merged_at":1662650084000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4955","id":1366382314,"node_id":"I_kwDODunzps5RcVbq","number":4955,"title":"Raise a more precise error when the URL is unreachable in streaming mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662645157000,"updated_at":1662645216000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"See for example:\r\n\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3191\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3186\r\n\r\nIt would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. 
Currently:\r\n\r\n- https:\/\/huggingface.co\/datasets\/compguesswhat\r\n\r\n \"Capture\r\n\r\n- https:\/\/huggingface.co\/datasets\/nli_tr\r\n\r\n \"Capture\r\n\r\n\r\ncc @albertvillanova ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4955\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954","id":1366369682,"node_id":"PR_kwDODunzps4-mhl5","number":4954,"title":"Pin TensorFlow temporarily","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662644775000,"updated_at":1662646353000,"closed_at":1662646203000,"author_association":"MEMBER","active_lock_reason":null,"body":"Temporarily fix TensorFlow until a permanent solution is found.\r\n\r\nRelated to:\r\n- #4953","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4954\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4954.patch","merged_at":1662646203000},"is_pull_request":true} 
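For context, a temporary pin of this kind would typically look like the following in `setup.py` (a sketch only; the list contents and the exact version bound are assumptions, not the PR's actual diff):

```python
# Illustrative fragment of a setup.py test-dependency list; names and the
# exact version bound are assumptions, not the PR's actual diff.
TESTS_REQUIRE = [
    "pytest",
    "tensorflow>=2.3,<2.10",  # temporary upper bound until the seeding regression is handled
]
```
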
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4953","id":1366356514,"node_id":"I_kwDODunzps5RcPIi","number":4953,"title":"CI test of TensorFlow is failing","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662644369000,"updated_at":1662650085000,"closed_at":1662650085000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nThe following CI test fails: https:\/\/github.com\/huggingface\/datasets\/runs\/8246722693?check_suite_focus=true\r\n```\r\nFAILED tests\/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:\r\n```\r\n\r\nDetails:\r\n```\r\n_________________________ TempSeedTest.test_tensorflow _________________________\r\n[gw0] linux -- Python 3.7.13 \/opt\/hostedtoolcache\/Python\/3.7.13\/x64\/bin\/python\r\n\r\nself = \r\n\r\n @require_tf\r\n def test_tensorflow(self):\r\n import tensorflow as tf\r\n from tensorflow.keras import layers\r\n \r\n def gen_random_output():\r\n model = layers.Dense(2)\r\n x = tf.random.uniform((1, 3))\r\n return model(x).numpy()\r\n \r\n with temp_seed(42, set_tensorflow=True):\r\n out1 = gen_random_output()\r\n with temp_seed(42, set_tensorflow=True):\r\n out2 = gen_random_output()\r\n out3 = gen_random_output()\r\n \r\n> np.testing.assert_equal(out1, out2)\r\nE AssertionError: \r\nE Arrays are not equal\r\nE \r\nE Mismatched elements: 2 \/ 2 (100%)\r\nE Max absolute difference: 0.84619296\r\nE Max relative difference: 16.083529\r\nE x: array([[-0.793581, 0.333286]], dtype=float32)\r\nE y: array([[0.052612, 0.539708]], dtype=float32)\r\n\r\ntests\/test_py_utils.py:149: 
AssertionError\r\n```\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4953\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952","id":1366354604,"node_id":"PR_kwDODunzps4-meM0","number":4952,"title":"Add test-datasets CI job","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Closing this one since the dataset scripts will be removed in https:\/\/github.com\/huggingface\/datasets\/pull\/4974"],"created_at":1662644310000,"updated_at":1663334882000,"closed_at":1663334748000,"author_association":"MEMBER","active_lock_reason":null,"body":"To avoid having too many conflicts in the datasets and metrics dependencies I split the CI into test and test-catalog\r\n\r\ntest does the test of the core of the `datasets` lib, while test-catalog tests the datasets scripts and metrics scripts\r\n\r\nThis also makes `pip install -e .[dev]` much smaller for developers\r\n\r\nWDYT @albertvillanova ?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4952\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4952.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951","id":1365954814,"node_id":"PR_kwDODunzps4-lDqd","number":4951,"title":"Fix license information in qasc dataset card","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662631479000,"updated_at":1662648887000,"closed_at":1662648725000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:\r\n- https:\/\/github.com\/allenai\/qasc\/issues\/5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4951\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4951.patch","merged_at":1662648725000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950","id":1365458633,"node_id":"PR_kwDODunzps4-jWZ1","number":4950,"title":"Update Enwik8 broken link and 
information","user":{"login":"mtanghu","id":54819091,"node_id":"MDQ6VXNlcjU0ODE5MDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54819091?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mtanghu","html_url":"https:\/\/github.com\/mtanghu","followers_url":"https:\/\/api.github.com\/users\/mtanghu\/followers","following_url":"https:\/\/api.github.com\/users\/mtanghu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mtanghu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mtanghu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mtanghu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mtanghu\/orgs","repos_url":"https:\/\/api.github.com\/users\/mtanghu\/repos","events_url":"https:\/\/api.github.com\/users\/mtanghu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mtanghu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662606900000,"updated_at":1662648810000,"closed_at":1662648660000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The current enwik8 dataset link give a 502 bad gateway error which can be view on https:\/\/huggingface.co\/datasets\/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4950\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4950.patch","merged_at":1662648660000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949","id":1365251916,"node_id":"PR_kwDODunzps4-iqzI","number":4949,"title":"Update enwik8 fixing the broken 
link","user":{"login":"mtanghu","id":54819091,"node_id":"MDQ6VXNlcjU0ODE5MDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54819091?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mtanghu","html_url":"https:\/\/github.com\/mtanghu","followers_url":"https:\/\/api.github.com\/users\/mtanghu\/followers","following_url":"https:\/\/api.github.com\/users\/mtanghu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mtanghu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mtanghu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mtanghu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mtanghu\/orgs","repos_url":"https:\/\/api.github.com\/users\/mtanghu\/repos","events_url":"https:\/\/api.github.com\/users\/mtanghu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mtanghu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing pull request to following contributing guidelines of making a new branch and will make a new pull request"],"created_at":1662589034000,"updated_at":1662606844000,"closed_at":1662606844000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The current enwik8 dataset link give a 502 bad gateway error which can be view on https:\/\/huggingface.co\/datasets\/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4949\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4949.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948","id":1364973778,"node_id":"PR_kwDODunzps4-hwsl","number":4948,"title":"Fix minor typo in error message for missing 
imports","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662571251000,"updated_at":1662649171000,"closed_at":1662649035000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4948\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4948.patch","merged_at":1662649035000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947","id":1364967957,"node_id":"PR_kwDODunzps4-hvbq","number":4947,"title":"Try to fix the Windows CI after TF update 
2.10","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4947). All of your documentation changes will be reflected on that endpoint."],"created_at":1662570889000,"updated_at":1662628390000,"closed_at":1662628390000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4947\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4947.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946","id":1364692069,"node_id":"PR_kwDODunzps4-g0Hz","number":4946,"title":"Introduce regex check when pushing as 
well","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Let me take over this PR if you don't mind"],"created_at":1662558358000,"updated_at":1663064341000,"closed_at":1663064194000,"author_association":"MEMBER","active_lock_reason":null,"body":"Closes https:\/\/github.com\/huggingface\/datasets\/issues\/4945 by adding a regex check when pushing to hub.\r\n\r\nLet me know if this is helpful and if it's the fix you would have in mind for the issue and I'm happy to contribute tests.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4946\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4946.patch","merged_at":1663064194000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4945","id":1364691096,"node_id":"I_kwDODunzps5RV4iY","number":4945,"title":"Push to hub can push splits that do not respect the 
regex","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662558317000,"updated_at":1663064195000,"closed_at":1663064195000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nThe `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n>>> from datasets import Dataset, DatasetDict, load_dataset\r\n\r\n>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})\r\n>>> di = DatasetDict()\r\n>>> di['identifier-with-column'] = d\r\n\r\n>>> di.push_to_hub('open-source-metrics\/test')\r\nPushing split identifier-with-column to the Hub.\r\nPushing dataset shards to the dataset hub: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:04<00:00, 4.40s\/it]\r\n```\r\n\r\nLoading it afterwards:\r\n```python\r\n>>> load_dataset('open-source-metrics\/test')\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 610\/610 [00:00<00:00, 432kB\/s]\r\nUsing custom data configuration open-source-metrics--test-28b63ec7cde80488\r\nDownloading and preparing dataset None\/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to \/home\/lysandre\/.cache\/huggingface\/datasets\/open-source-metrics___parquet\/open-source-metrics--test-28b63ec7cde80488\/0.0.0\/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 0%| | 0\/1 [00:00\", line 1, in \r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 771, in _download_and_prepare\r\n split_generators = 
self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/packaged_modules\/parquet\/parquet.py\", line 48, in _split_generators\r\n splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n File \"\", line 5, in __init__\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/splits.py\", line 599, in __post_init__\r\n NamedSplit(self.name) # check that it's a valid split name\r\n File \"\/home\/lysandre\/Workspaces\/python\/Metrics\/GitHub-Metrics\/.env\/lib\/python3.10\/site-packages\/datasets\/splits.py\", line 346, in __init__\r\n raise ValueError(f\"Split name should match '{_split_re}' but got '{split_name}'.\")\r\nValueError: Split name should match '^\\w+(\\.\\w+)*$' but got 'identifier-with-column'.\r\n```\r\n\r\n## Expected results\r\n\r\nI would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards.\r\n\r\n## Actual results\r\n\r\nSee above\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36\r\n- Python version: 3.10.6\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.4\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4945\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4944","id":1364313569,"node_id":"I_kwDODunzps5RUcXh","number":4944,"title":"larger dataset, larger GPU memory in the training phase? 
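For reference, a minimal sketch of the pre-flight check discussed in #4945 and introduced by #4946: validating split names against the same `^\w+(\.\w+)*$` pattern that `load_dataset` enforces, before any shard is uploaded. The `check_split_names` helper below is hypothetical; the actual fix lives inside `push_to_hub` itself.

```python
import re

from datasets import Dataset, DatasetDict

# Pattern quoted in the ValueError above: word characters, optionally
# dot-separated (e.g. "train", "validation.clean").
_split_re = r"^\w+(\.\w+)*$"

def check_split_names(dataset_dict: DatasetDict) -> None:
    """Hypothetical pre-flight check, run before any shard is uploaded."""
    for split_name in dataset_dict:
        if not re.match(_split_re, split_name):
            raise ValueError(
                f"Split name should match '{_split_re}' but got '{split_name}'."
            )

di = DatasetDict({"identifier-with-column": Dataset.from_dict({"x": [1, 2, 3]})})
check_split_names(di)  # raises here instead of pushing an unloadable split
```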
Is that correct?","user":{"login":"debby1103","id":38886373,"node_id":"MDQ6VXNlcjM4ODg2Mzcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38886373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/debby1103","html_url":"https:\/\/github.com\/debby1103","followers_url":"https:\/\/api.github.com\/users\/debby1103\/followers","following_url":"https:\/\/api.github.com\/users\/debby1103\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/debby1103\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/debby1103\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/debby1103\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/debby1103\/orgs","repos_url":"https:\/\/api.github.com\/users\/debby1103\/repos","events_url":"https:\/\/api.github.com\/users\/debby1103\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/debby1103\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["does the trainer save it in GPU? sooo curious... how to fix it","It's my bad. didn't limit the input length"],"created_at":1662540390000,"updated_at":1662554098000,"closed_at":1662554098000,"author_association":"NONE","active_lock_reason":null,"body":" from datasets import set_caching_enabled\r\n set_caching_enabled(False)\r\n for ds_name in [\"squad\",\"newsqa\",\"nqopen\",\"narrativeqa\"]:\r\n train_ds = load_from_disk(\"..\/..\/..\/dall\/downstream\/processedproqa\/{}-train.hf\".format(ds_name))\r\n\r\n break\r\n train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1\r\n\r\n\r\n trainer = QuestionAnsweringTrainer( #huggingface trainer\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_ds,\r\n eval_dataset= None,\r\n eval_examples=None,\r\n answer_column_name=answer_column,\r\n dataset_name=\"squad\",\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics if training_args.predict_with_generate else None,\r\n )\r\n\r\nwith operation 1, the GPU memory increases from 16G to 23G","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4944\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943","id":1363967650,"node_id":"PR_kwDODunzps4-eZd_","number":4943,"title":"Add splits to MBPP 
dataset","user":{"login":"cwarny","id":2788526,"node_id":"MDQ6VXNlcjI3ODg1MjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2788526?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cwarny","html_url":"https:\/\/github.com\/cwarny","followers_url":"https:\/\/api.github.com\/users\/cwarny\/followers","following_url":"https:\/\/api.github.com\/users\/cwarny\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cwarny\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cwarny\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cwarny\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cwarny\/orgs","repos_url":"https:\/\/api.github.com\/users\/cwarny\/repos","events_url":"https:\/\/api.github.com\/users\/cwarny\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cwarny\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: \/Users\/cwarny\/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 1.12s ==================================================================================================\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_mbpp \r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: \/Users\/cwarny\/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 0.35s ==================================================================================================\r\n\r\n```","_The documentation is not available anymore as the PR was closed or merged._","Hi @cwarny ! 
Thanks for adding the correct splits :)\r\n\r\nYou can fix the CI error by running `make style` - this should reformat the dataset script","done"],"created_at":1662513511000,"updated_at":1663072159000,"closed_at":1663072041000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR addresses https:\/\/github.com\/huggingface\/datasets\/issues\/4795","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4943\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4943","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4943.patch","merged_at":1663072041000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4942","id":1363869421,"node_id":"I_kwDODunzps5RSv7t","number":4942,"title":"Trec Dataset has incorrect labels","user":{"login":"wmpauli","id":6539145,"node_id":"MDQ6VXNlcjY1MzkxNDU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6539145?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wmpauli","html_url":"https:\/\/github.com\/wmpauli","followers_url":"https:\/\/api.github.com\/users\/wmpauli\/followers","following_url":"https:\/\/api.github.com\/users\/wmpauli\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wmpauli\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wmpauli\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wmpauli\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wmpauli\/orgs","repos_url":"https:\/\/api.github.com\/users\/wmpauli\/repos","events_url":"https:\/\/api.github.com\/users\/wmpauli\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wmpauli\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."],"created_at":1662502420000,"updated_at":1662635523000,"closed_at":1662635523000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nBoth coarse and fine labels seem to be out of line.\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = \"trec\"\r\nraw_datasets = load_dataset(dataset)\r\ndf = pd.DataFrame(raw_datasets[\"test\"])\r\ndf.head()\r\n```\r\n\r\n## Expected results\r\ntext (string) | coarse_label (class label) | fine_label (class label)\r\n-- | -- | --\r\nHow far is it from Denver to Aspen ? | 5 \t(NUM) | 40 \t(NUM:dist)\r\nWhat county is Modesto , California in ? | 4 \t(LOC) | 32 \t(LOC:city)\r\nWho was Galileo ? | 3 \t(HUM) | 31 \t(HUM:desc)\r\nWhat is an atom ? | 2 \t(DESC) | 24 \t(DESC:def)\r\nWhen did Hawaii become a state ? 
| 5 \t(NUM) | 39 \t(NUM:date)\r\n\r\n## Actual results\r\n index | label-coarse |label-fine | text\r\n-- |-- | -- | --\r\n0 | 4 | 40 | How far is it from Denver to Aspen ?\r\n1 | 5 | 21 | What county is Modesto , California in ?\r\n2 | 3 | 12 | Who was Galileo ?\r\n3 | 0 | 7 | What is an atom ?\r\n4 | 4 | 8 | When did Hawaii become a state ?\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27\r\n- Python version: 3.9.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4942\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941","id":1363622861,"node_id":"PR_kwDODunzps4-dQ9F","number":4941,"title":"Add Papers with Code ID to scifact dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662486397000,"updated_at":1662488897000,"closed_at":1662488761000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- adds Papers with Code ID\r\n- forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: 
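The workaround from the maintainers' reply to #4942, spelled out as a runnable snippet (same environment as the report; the fix from #4801 ships with the next release):

```python
import pandas as pd
from datasets import load_dataset

# Load the trec script from the repository's main branch, which already
# contains the label fix (#4801), instead of the 2.4.0 release.
raw_datasets = load_dataset("trec", revision="main")
df = pd.DataFrame(raw_datasets["test"])
print(df.head())  # coarse/fine labels now line up with the expected table
```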
https:\/\/github.com\/huggingface\/datasets\/runs\/8200223631?check_suite_focus=true","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4941\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4941","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4941.patch","merged_at":1662488761000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940","id":1363513058,"node_id":"PR_kwDODunzps4-c6WY","number":4940,"title":"Fix multilinguality tag and missing sections in xquad_r dataset card","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662480335000,"updated_at":1662977467000,"closed_at":1662977328000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes issue reported on the Hub:\r\n- Label as multilingual: 
https:\/\/huggingface.co\/datasets\/xquad_r\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4940\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4940","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4940.patch","merged_at":1662977328000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939","id":1363468679,"node_id":"PR_kwDODunzps4-cw4A","number":4939,"title":"Fix NonMatchingChecksumError in adv_glue dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662478276000,"updated_at":1662486130000,"closed_at":1662485956000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix issue reported on the Hub: 
https:\/\/huggingface.co\/datasets\/adv_glue\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4939\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4939","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4939.patch","merged_at":1662485956000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938","id":1363429228,"node_id":"PR_kwDODunzps4-coaB","number":4938,"title":"Remove main branch rename notice","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662476585000,"updated_at":1662482771000,"closed_at":1662482633000,"author_association":"MEMBER","active_lock_reason":null,"body":"We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)\r\n\r\nI also unpinned the github issue about the branch 
renaming","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4938\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4938.patch","merged_at":1662482633000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937","id":1363426946,"node_id":"PR_kwDODunzps4-cn6W","number":4937,"title":"Remove deprecated identical_ok","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662476484000,"updated_at":1662503049000,"closed_at":1662502917000,"author_association":"MEMBER","active_lock_reason":null,"body":"`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. 
It even has no effect at the moment when it's passed:\r\n\r\n```python\r\nArgs:\r\n...\r\n identical_ok (`bool`, *optional*, defaults to `True`):\r\n Deprecated: will be removed in 0.11.0.\r\n Changing this value has no effect.\r\n...\r\n```\r\n\r\nThere was only one occurrence of `identical_ok=False` but it's maybe not worth adding a check to verify if the files were the same.\r\n\r\ncc @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4937\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4937","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4937.patch","merged_at":1662502917000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4936","id":1363274907,"node_id":"I_kwDODunzps5RQeyb","number":4936,"title":"vivos (Vietnamese speech corpus) dataset not accessible","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)","@cahya-wirawan omg this is awesome!! thank you! 
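Since `identical_ok` is already a no-op, removing it reduces each call to a plain upload. A sketch with placeholder paths and repo id, not the PR's actual call sites:

```python
from huggingface_hub import HfApi

api = HfApi()
# Re-uploading an identical file is already a no-op on the Hub side,
# so no `identical_ok` flag is needed. Paths and repo_id are placeholders.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="user/my-dataset",
    repo_type="dataset",
)
```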
","We have contacted the authors to ask them."],"created_at":1662470275000,"updated_at":1662966860000,"closed_at":1662966860000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nVIVOS data is not accessible anymore, neither of these links work (at least from France):\r\n* https:\/\/ailab.hcmus.edu.vn\/assets\/vivos.tar.gz (data)\r\n* https:\/\/ailab.hcmus.edu.vn\/vivos (dataset page) \r\n\r\nTherefore `load_dataset` doesn't work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nds = load_dataset(\"vivos\")\r\n```\r\n\r\n## Expected results\r\ndataset loaded\r\n\r\n## Actual results\r\n```\r\nConnectionError: Couldn't reach https:\/\/ailab.hcmus.edu.vn\/assets\/vivos.tar.gz (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: \/assets\/vivos.tar.gz (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -5] No address associated with hostname'))\")))\r\n```\r\n\r\nWill try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https:\/\/github.com\/huggingface\/datasets\/pull\/4872), because it's small and straightforward and uses tar archives. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4936\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4935","id":1363226736,"node_id":"I_kwDODunzps5RQTBw","number":4935,"title":"Dataset Viewer issue for 
ubuntu_dialogs_corpus","user":{"login":"CibinQuadance","id":87330568,"node_id":"MDQ6VXNlcjg3MzMwNTY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/87330568?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/CibinQuadance","html_url":"https:\/\/github.com\/CibinQuadance","followers_url":"https:\/\/api.github.com\/users\/CibinQuadance\/followers","following_url":"https:\/\/api.github.com\/users\/CibinQuadance\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/CibinQuadance\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/CibinQuadance\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/CibinQuadance\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/CibinQuadance\/orgs","repos_url":"https:\/\/api.github.com\/users\/CibinQuadance\/repos","events_url":"https:\/\/api.github.com\/users\/CibinQuadance\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/CibinQuadance\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["The dataset maintainers (https:\/\/huggingface.co\/datasets\/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/loading#manual-download), and the dataset viewer 
respects this.\r\nWe will try to improve the error display though. Thanks for reporting."],"created_at":1662468110000,"updated_at":1662468685000,"closed_at":1662468685000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4935\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4934","id":1363034253,"node_id":"I_kwDODunzps5RPkCN","number":4934,"title":"Dataset Viewer issue for indonesian-nlp\/librivox-indonesia","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova
","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["The error is not related to the dataset viewer. I'm having a look...","Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp\/librivox-indonesia\")\r\nNo config specified, defaulting to: librivox-indonesia\/all\r\nReusing dataset librivox-indonesia (\/root\/.cache\/huggingface\/datasets\/indonesian-nlp___librivox-indonesia\/all\/1.0.0\/9a934a42bfb53dc103003d191618443b8a786bea2bd7bb0bc2d9454b8494521e)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 500.87it\/s]\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['path', 'language', 'reader', 'sentence', 'audio'],\r\n num_rows: 7815\r\n })\r\n})\r\n>>> ds[\"train\"][0]\r\n{'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3\/librivox-indonesia\/sundanese\/universal-declaration-of-human-rights\/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': {'path': '\/root\/.cache\/huggingface\/datasets\/downloads\/extracted\/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3\/librivox-indonesia\/sundanese\/universal-declaration-of-human-rights\/human_rights_un_sun_brc_0000.mp3', 'array': array([ 0. , 0. , 0. 
, ..., -0.02419001,\r\n -0.01957154, -0.01502833], dtype=float32), 'sampling_rate': 44100}}\r\n\r\n```\r\nIt would be nice if I could also see it using the dataset viewer.","Yes, the issue arises when streaming (which is used by the viewer): your script does not support streaming, and supporting it in this case involves some subtleties that we explain better in our docs in a work-in-progress pull request:\r\n- #4872\r\n\r\nJust note that when streaming, `local_extracted_archive` is None, and this code line generates the error:\r\n```python\r\nfilepath = local_extracted_archive + \"\/librivox-indonesia\/audio_transcription.csv\"\r\n```\r\n\r\nFor a proper implementation, you could have a look at: https:\/\/huggingface.co\/datasets\/common_voice\/blob\/main\/common_voice.py\r\n\r\nYou can test your script locally by passing `streaming=True` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"indonesian-nlp\/librivox-indonesia\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\n```","Great, I will have a look and update the script. Thanks.","Hi @albertvillanova , I just added the streaming functionality and it worked on the first try :-) Thanks a lot!","Awesome!!! :hugs: "],"created_at":1662458603000,"updated_at":1662468400000,"closed_at":1662468400000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia\n\n### Description\n\nI created a new speech dataset https:\/\/huggingface.co\/datasets\/indonesian-nlp\/librivox-indonesia, but the dataset preview doesn't work, with the following error message:\r\n```\r\nServer error\r\nStatus code: 400\r\nException: TypeError\r\nMessage: unsupported operand type(s) for +: 'NoneType' and 'str'\r\n```\r\nPlease help, I am not sure what the problem here is. Thanks a lot.\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4934\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4933","id":1363013023,"node_id":"I_kwDODunzps5RPe2f","number":4933,"title":"Dataset\/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) 
being non-iterable.","user":{"login":"tianjianjiang","id":4812544,"node_id":"MDQ6VXNlcjQ4MTI1NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4812544?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tianjianjiang","html_url":"https:\/\/github.com\/tianjianjiang","followers_url":"https:\/\/api.github.com\/users\/tianjianjiang\/followers","following_url":"https:\/\/api.github.com\/users\/tianjianjiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tianjianjiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tianjianjiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tianjianjiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tianjianjiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/tianjianjiang\/repos","events_url":"https:\/\/api.github.com\/users\/tianjianjiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tianjianjiang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! When `batched=True`, your filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda batch: [timestamp[:4] == \"2020\" for timestamp in batch[\"timestamp\"]],\r\n batched=True,\r\n)\r\n```\r\n\r\nLet me know if it helps !","> Hi ! When `batched=True`, your filter function must take a batch as input, and return a list of booleans.\r\n> [...]\r\n> Let me know if it helps !\r\n\r\nHi @lhoestq,\r\n\r\nAh, my bad, I totally forgot that part...\r\nSorry for the trouble and thank you for the kind help!"],"created_at":1662457668000,"updated_at":1662464667000,"closed_at":1662464667000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n`Dataset\/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.\r\n\r\n## Steps to reproduce the bug\r\n(In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... 
perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda example: example[\"timestamp\"][:4] == \"2020\",\r\n batched=True,\r\n)\r\n```\r\n\r\n## Expected results\r\nNo error\r\n\r\n## Actual results\r\n```python\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2779, in _map_single\r\n offset=offset,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 4946, in get_indices_from_mask_function\r\n indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]\r\nTypeError: zip argument #2 must support iteration\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTypeError Traceback (most recent call last)\r\n\/tmp\/ipykernel_51348\/2345782281.py in \r\n 7 batched=True,\r\n 8 # batch_size=10_000,\r\n----> 9 num_proc=111,\r\n 10 )\r\n 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)\r\n 878 desc=desc,\r\n 879 )\r\n--> 880 for k, dataset in self.items()\r\n 881 }\r\n 882 )\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in (.0)\r\n 878 desc=desc,\r\n 879 )\r\n--> 880 for k, dataset in self.items()\r\n 881 }\r\n 882 )\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 522 }\r\n 523 # apply actual function\r\n--> 524 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 525 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 526 # re-apply format to the output\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 478 # Call actual function\r\n 479 \r\n--> 480 out = func(self, *args, **kwargs)\r\n 481 \r\n 482 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, 
writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 2920 new_fingerprint=new_fingerprint,\r\n 2921 input_columns=input_columns,\r\n-> 2922 desc=desc,\r\n 2923 )\r\n 2924 new_dataset = copy.deepcopy(self)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 2498 \r\n 2499 for index, async_result in results.items():\r\n-> 2500 transformed_shards[index] = async_result.get()\r\n 2501 \r\n 2502 assert (\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/multiprocess\/pool.py in get(self, timeout)\r\n 655 return self._value\r\n 656 else:\r\n--> 657 raise self._value\r\n 658 \r\n 659 def _set(self, i, obj):\r\n\r\nTypeError: zip argument #2 must support iteration\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12\r\n- Python version: 3.7.12\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n\r\n(I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4933\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4932","id":1362522423,"node_id":"I_kwDODunzps5RNnE3","number":4932,"title":"Dataset Viewer issue for bigscience-biomedical\/biosses","user":{"login":"galtay","id":663051,"node_id":"MDQ6VXNlcjY2MzA1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/663051?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/galtay","html_url":"https:\/\/github.com\/galtay","followers_url":"https:\/\/api.github.com\/users\/galtay\/followers","following_url":"https:\/\/api.github.com\/users\/galtay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/galtay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/galtay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/galtay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/galtay\/orgs","repos_url":"https:\/\/api.github.com\/users\/galtay\/repos","events_url":"https:\/\/api.github.com\/users\/galtay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/galtay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Possibly not related to the dataset viewer in itself. 
cc @huggingface\/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\/blob\/main\/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical\/biosses')\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8.00k\/8.00k [00:00<00:00, 7.47MB\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical\/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```","Opened a PR here to (hopefully) fix the dataset script: https:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\/discussions\/1\/files","thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b\/c it seems to be working here \r\n\r\nhttps:\/\/huggingface.co\/datasets\/bigscience-biomedical\/scitail\/blob\/main\/scitail.py#L31\r\n\r\nI'll try this PR and see what happens. 
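In the meantime, a quick way to check whether the relative import resolves is to re-run the call from the traceback above. This is a minimal sketch, not part of the original thread; it assumes network access to the Hub and a `datasets` version that still supports loading scripts:

```python
# Sanity check: config discovery only succeeds once the script's local
# import of bigbiohub resolves, so this exercises the fix end to end.
from datasets import get_dataset_config_names

configs = get_dataset_config_names("bigscience-biomedical/biosses")
print(configs)
```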
","closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) "],"created_at":1662417632000,"updated_at":1662474296000,"closed_at":1662474296000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/bigscience-biomedical\/biosses\n\n### Description\n\nI've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (show where the dataset preview used to be) . \r\n```\r\nStatus code: 400\r\nException: ModuleNotFoundError\r\nMessage: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'\r\n```\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4932\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931","id":1362298764,"node_id":"PR_kwDODunzps4-Y3L6","number":4931,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1662397384000,"updated_at":1662442880000,"closed_at":1662442769000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908\r\n- #4921","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4931\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4931.patch","merged_at":1662442769000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930","id":1362193587,"node_id":"PR_kwDODunzps4-Yflc","number":4930,"title":"Add cc-by-nc-2.0 to list of licenses","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","this list needs to be kept in sync with the ones in moon-landing and hub-docs :)","@julien-c don't you think it might be better to have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nAlso note that the licenses we are adding were all already present in our previous `licenses.json` file: are we regenerating it, step by step? 
Why don't we use a file with ALL the licenses we previously had in the list?\r\n\r\nLicenses added:\r\n- #4887\r\n- #4930 \r\n\r\nPrevious `licenses.json` file:\r\n- https:\/\/github.com\/huggingface\/datasets\/blob\/b7612754928e0fd43b9e3c3becb906ec280ff5d4\/src\/datasets\/utils\/resources\/licenses.json\r\n- removed in this commit: https:\/\/github.com\/huggingface\/datasets\/pull\/4613\/commits\/9f7725412dac1089b3e057f9e3fcf39cc222bc26\r\n\r\nLet me know what you think and I can take care of this.","> Let me know what you think and I can take care of this.\r\n\r\nWhat I think is that we shouldn't add licenses that are just used in a couple of datasets, and just use `license_details` for this.\r\n\r\n> don't you think it might be better to a have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nYes, in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? \r\n","Feel free to delete the license list in `datasets` @albertvillanova ;)\r\n\r\nAlso FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)"],"created_at":1662392252000,"updated_at":1662482612000,"closed_at":1662397264000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https:\/\/github.com\/allenai\/scifact\/blob\/master\/LICENSE.md","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4930\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4930","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4930.patch","merged_at":1662397264000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929","id":1361508366,"node_id":"PR_kwDODunzps4-WK2w","number":4929,"title":"Fixes a typo in loading 
documentation","user":{"login":"sighingnow","id":7144772,"node_id":"MDQ6VXNlcjcxNDQ3NzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7144772?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sighingnow","html_url":"https:\/\/github.com\/sighingnow","followers_url":"https:\/\/api.github.com\/users\/sighingnow\/followers","following_url":"https:\/\/api.github.com\/users\/sighingnow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sighingnow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sighingnow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sighingnow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sighingnow\/orgs","repos_url":"https:\/\/api.github.com\/users\/sighingnow\/repos","events_url":"https:\/\/api.github.com\/users\/sighingnow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sighingnow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662362334000,"updated_at":1662430263000,"closed_at":1662383198000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As show in the [documentation page](https:\/\/huggingface.co\/docs\/datasets\/loading) here the `\"tr\"in` should be `\"train`.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/7144772\/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4929\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4929.patch","merged_at":1662383198000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928","id":1360941172,"node_id":"PR_kwDODunzps4-Ubi4","number":4928,"title":"Add ability to read-write to SQL 
databases.","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4928). All of your documentation changes will be reflected on that endpoint.","Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.","wow this is super cool!","@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r\n\r\n```\r\nif not self._is_valid_token(token):\r\n> raise ValueError(\"Invalid token passed!\")\r\nE ValueError: Invalid token passed!\r\n```","I just relaunched the tests, it should be fixed now","Thanks a lot for working on this!\r\n\r\nI have some concerns with the current design:\r\n* Besides SQLite, the loader should also work with the other engines supported by SQLAlchemy. (A better name for it in the current state would be `sqlite` :))\r\n* It should support arbitrary queries\/table names - only the latter currently works.\r\n* Exposing this loader as a packaged builder (`load_dataset(\"sql\", ...)`) is not a good idea for the following reasons:\r\n * Considering the scenario where a table with the same name is present in multiple files is very unlikely, the data files resolution is not needed here. And if we remove that, what the name of the default split should be? \"train\"?\r\n * `load_dataset(\"sql\", ...)` also implies that streaming should work, but that's not the case. And I don't think we can change that, considering how hard it is to make SQLite files streamable.\r\n\r\nAll this makes me think we shouldn't expose this builder as a packaged module and, instead, limit the API to `Dataset.from_sql`\/`Dataset.to_sql` (with the signatures matching the ones in pandas as much as possible; regarding this, note that SQLAlchemy connections are not hashable\/picklable, which is required for caching, but I think it's OK only to allow URI strings as connections to bypass that (Dask has the same limitation).\r\n\r\nWDYT?","Hi @mariosasko thank you for your review.\r\n\r\nI agree that `load_dataset('sql',...)` is a bit weird and I would be happy to remove it. To be honest, I only added it when I saw that it was the preferred way in `loading.mdx`. \r\n\r\nI agree that the `SELECT` should be a parameters as well. 
I'll add it.\r\n\r\nSo far, only `Dataset.to_sql` explicitly supports any SQLAlchemy Connection; I'm pretty sure that `Dataset.from_sql` would work with a Connection as well, but it would break the typing from the parent class which is `path_or_paths: NestedDataStructureLike[PathLike]`. I would prefer not to break this API contract.\r\n\r\n\r\nI will have time to work on this over the weekend. Please let me know what you think of the following:\r\n* Remove `load_dataset('sql', ...)` and edit the documentation to use `to_sql, from_sql`.\r\n* Tentatively make `Dataset.from_sql` typing work with SQLAlchemy Connection objects.\r\n* Add support for custom queries (Default would be `SELECT * FROM {table_name}`).\r\n\r\nCheers!","Perhaps after we merge https:\/\/github.com\/huggingface\/datasets\/pull\/4957 (**Done!**), you can subclass `AbstractDatasetInputStream` instead of `AbstractDatasetReader` to not break the contract with the connection object. Also, let's avoid having a default value for the query\/table (you can set it to `None` in the builder and raise an error in the builder config's `__post_init__` if it's not provided). Other than that, sounds good!"],"created_at":1662232148000,"updated_at":1663512582000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixes #3094 \r\n\r\nAdd ability to read\/write to SQLite files and also read from any SQL database supported by SQLAlchemy.\r\n\r\nI didn't add SQLAlchemy as a dependency as it is fairly big and it remains optional. \r\n\r\nI also recorded a Loom to showcase the feature.\r\n\r\nhttps:\/\/www.loom.com\/share\/f0e602c2de8a46f58bca4b43333d541f","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/reactions","total_count":8,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":4,"rocket":0,"eyes":2},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4928\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4928.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927","id":1360428139,"node_id":"PR_kwDODunzps4-S0we","number":4927,"title":"fix BLEU metric 
card","user":{"login":"antoniolanza1996","id":40452030,"node_id":"MDQ6VXNlcjQwNDUyMDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40452030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/antoniolanza1996","html_url":"https:\/\/github.com\/antoniolanza1996","followers_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/followers","following_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/orgs","repos_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/repos","events_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/antoniolanza1996\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662138056000,"updated_at":1662740895000,"closed_at":1662740895000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I've fixed some typos in BLEU metric card.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4927\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4927","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4927.patch","merged_at":1662740895000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926","id":1360384484,"node_id":"PR_kwDODunzps4-Srm1","number":4926,"title":"Dataset infos in 
yaml","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4926). All of your documentation changes will be reflected on that endpoint.","Alright this is ready for review :)\r\nI mostly would like your opinion on the YAML structure and what we can do in the docs (IMO we can add the docs about those fields in the Hub docs). Other than that let me know if the changes in info.py and features.py look good to you"],"created_at":1662135005000,"updated_at":1663004114000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. 
YAML is readable and easy to edit, and the YAML metadata of the readme already contains dataset metadata, so we would have everything in one place.\r\n\r\nTo be more specific, I moved these fields from DatasetInfo to the YAML:\r\n- config_name (if there are several configs)\r\n- download_size\r\n- dataset_size\r\n- features\r\n- splits\r\n\r\nHere is what I ended up with for `squad`:\r\n```yaml\r\ndataset_infos:\r\n features:\r\n - name: id\r\n dtype: string\r\n - name: title\r\n dtype: string\r\n - name: context\r\n dtype: string\r\n - name: question\r\n dtype: string\r\n - name: answers\r\n sequence:\r\n - name: text\r\n dtype: string\r\n - name: answer_start\r\n dtype: int32\r\n splits:\r\n - name: train\r\n num_bytes: 79346360\r\n num_examples: 87599\r\n - name: validation\r\n num_bytes: 10473040\r\n num_examples: 10570\r\n download_size: 35142551\r\n dataset_size: 89819400\r\n```\r\n\r\nand it can be a list if there are several configs.\r\n\r\nI already did the change for `conll2000` and `crime_and_punish` as an example.\r\n\r\n## Implementation details\r\n\r\n### Load\/Read\r\n\r\nThis is done via `DatasetInfoDict.write_to_directory\/from_directory`\r\n\r\nI had to implement custom YAML export logic for `SplitDict`, `Version` and `Features`.\r\nThe first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (or the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of YAML data.\r\n\r\n### Other changes\r\n\r\nI had to update the DatasetModule factories to also download the README.md alongside the dataset scripts\/data files, and not just the dataset_infos.json\r\n\r\n## YAML validation\r\n\r\nI removed the old validation code that was in metadata.py; now we can just use the Hub YAML validation\r\n\r\n## Datasets-cli\r\n\r\nThe `datasets-cli test --save_infos` command now creates a README.md file with the dataset_infos in it, instead of a dataset_infos.json file\r\n\r\n## Backward compatibility\r\n\r\n`dataset_infos.json` files are still supported and loaded if they exist to have full backward compatibility.\r\nThough I removed the unnecessary keys when the value is the default (like all the `id: null` from the Value feature types) to make them easier to read.\r\n\r\n## TODO\r\n\r\n- [x] add comments\r\n- [x] tests\r\n- [ ] document the new YAML fields (to be done in the Hub docs)\r\n- [x] try to reload the new dataset_infos.json file content with an old version of `datasets`\r\n\r\n## EDITS\r\n\r\n- removed \"config_name\" when there's only one config\r\n- removed \"version\" for now (?), because it's not useful in general\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4876","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4926\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4926","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4926.patch","merged_at":null},"is_pull_request":true} 
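As a cross-check of the YAML example in the PR body above, here is a sketch of the equivalent `Features` object for `squad` (standard `datasets` types; the YAML `features` block is essentially this structure serialized):

```python
# The `features` block from the squad YAML above, written out as the
# datasets.Features object it corresponds to.
from datasets import Features, Sequence, Value

features = Features(
    {
        "id": Value("string"),
        "title": Value("string"),
        "context": Value("string"),
        "question": Value("string"),
        "answers": Sequence(
            {"text": Value("string"), "answer_start": Value("int32")}
        ),
    }
)
print(features)
```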
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925","id":1360007616,"node_id":"PR_kwDODunzps4-RbP5","number":4925,"title":"Add note about loading image \/ audio files to docs","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4925). All of your documentation changes will be reflected on that endpoint.","Thanks for the feedback @polinaeterna ! I've reworded the docs a bit to integrate your comments and this should be ready for another review :)","> I've just realized that there is another PR about audio documentation open: #4872\r\n> and there the more detailed description on how to use `audiofolder` is moved to another section (\"Create an audio dataset\")\r\n\r\nAh yes, let's add a comment to #4872 - that will be simpler than the alternatives :)","@polinaeterna @lhoestq What do you think about adding support for the metadata format from Kaggle (one metadata file for each split with the name equal to the split name) to ImageFolder\/AudioFolder? I also think we can relax some requirements a bit by:\r\n* allowing `filename` as the name of the main metadata column (currently, only `file_path` is allowed)\r\n* not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using `_check_if_features_can_be_aligned` + `_align_features`. The rationale is that train\/val metadata often has extra columns compared to test metadata.\r\n\r\nThese changes would allow us to load the Kaggle dataset linked in the forum thread without any \"interventions\".\r\n\r\nPS: this metadata format for ImageFolder was also proposed by @abhishekkrthakur initially.\r\n","Can you give more details about the Kaggle format ? I'm down to discuss it in a separate issue if you don't mind.\r\n\r\n> allowing filename as the name of the main metadata column (currently, only file_path is allowed)\r\n\r\n`filename` refers to the name of the file, so there's no logic about relative path or directories. 
If I recall correctly this is what we're doing right now so why not\r\n\r\n> not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using _check_if_features_can_be_aligned + _align_features. The rationale is that train\/val metadata often has extra columns compared to test metadata.\r\n\r\n+1 and we can set to None the missing features","I'm not sure if this is worth opening a new issue :).\r\n\r\nWhat I mean by the Kaggle format is the structure like this one (the name of a metadata file is equal to the directory it \"references\"):\r\n```\r\n- train\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ...\r\n- test\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ... \r\n- train.csv\r\n- test.csv\r\n```\r\n\r\n\r\n","Sounds nice !"],"created_at":1662114718000,"updated_at":1663345231000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds a small note about how to load image \/ audio datasets that have multiple splits in their dataset structure.\r\n\r\nRelated forum thread: https:\/\/discuss.huggingface.co\/t\/loading-train-and-test-splits-with-audiofolder\/22447\r\n\r\ncc @NielsRogge ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4925\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4925.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4924","id":1358611513,"node_id":"I_kwDODunzps5Q-sQ5","number":4924,"title":"Concatenate_datasets loads everything into 
RAM","user":{"login":"louisdeneve","id":39416047,"node_id":"MDQ6VXNlcjM5NDE2MDQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39416047?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/louisdeneve","html_url":"https:\/\/github.com\/louisdeneve","followers_url":"https:\/\/api.github.com\/users\/louisdeneve\/followers","following_url":"https:\/\/api.github.com\/users\/louisdeneve\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/louisdeneve\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/louisdeneve\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/louisdeneve\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/louisdeneve\/orgs","repos_url":"https:\/\/api.github.com\/users\/louisdeneve\/repos","events_url":"https:\/\/api.github.com\/users\/louisdeneve\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/louisdeneve\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1662027917000,"updated_at":1662033054000,"closed_at":1662033054000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen loading the datasets seperately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ngcs = gcsfs.GCSFileSystem(project='project')\r\ndatasets = [load_from_disk(f'path\/to\/slice\/of\/data\/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]\r\n\r\ndataset = concatenate_datasets(datasets)\r\n```\r\n\r\n## Expected results\r\nA concatenated dataset which is stored on my disk.\r\n\r\n## Actual results\r\nConcatenated dataset gets loaded into RAM and overflows it which gets the process killed.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.1\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4924\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923","id":1357735287,"node_id":"PR_kwDODunzps4-Jv7C","number":4923,"title":"WIP: decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround 
","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4923). All of your documentation changes will be reflected on that endpoint.","Thanks ! Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa","@lhoestq \r\n\r\n> Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa\r\n\r\nI'm not sure here, because from the one hand, if `torchaudio` works - it works 60 times faster then `librosa`.\r\nBut from the other hand, we will get inconsistent behavior (=different results of decoding) for users of `torchaudio>=0.12`. \r\nI'd better go for using `librosa` only to avoid inconsistency then. wdyt?","It seems a bit too constraining to not allow users who have a working torchaudio 0.12 setup to not use it. \r\n\r\nIf the issue is about avoiding silent errors if the decoding changes, maybe we can log which back-end is used ? It can even be a warning with performance suggestions (\"you're using librosa but torchaudio 0.xx is recommended\").\r\n\r\nNote that users can still have a requirements.txt or whatever in their projects if they really want full reproducibility (and it's the bare minimum imo)\r\n\r\nThere are multiple possible back-ends so it's maybe not reasonable to only allow one back-end, especially since each back-end has installation constrains and there's no \"best\" back-end."],"created_at":1661972279000,"updated_at":1663267585000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"`torchaudio>0.12` fails with decoding mp3 files if `ffmpeg<4`. currently we ask users to downgrade torchaudio, but sometimes it's not possible as torchaudio version is binded to torch version. as a temporary workaround we can decode mp3 with librosa (though it 60 times slower, at least it works)\r\n\r\nanother option would be to ask users to install the required version of `ffmpeg`, but is non-trivial on colab: it's not in apt packages in ubuntu 18 and `conda` is not preinstalled (with `conda` it would be easily installable)\r\n\r\n- [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster\r\n- [ ] tests \r\n- [ ] ... 
\r\n\r\nsee https:\/\/github.com\/huggingface\/datasets\/issues\/4776 and https:\/\/github.com\/huggingface\/datasets\/issues\/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4923\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4923","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4923.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4922","id":1357684018,"node_id":"I_kwDODunzps5Q7J0y","number":4922,"title":"I\/O error on Google Colab in streaming mode","user":{"login":"jotterbach","id":5595043,"node_id":"MDQ6VXNlcjU1OTUwNDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5595043?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jotterbach","html_url":"https:\/\/github.com\/jotterbach","followers_url":"https:\/\/api.github.com\/users\/jotterbach\/followers","following_url":"https:\/\/api.github.com\/users\/jotterbach\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jotterbach\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jotterbach\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jotterbach\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jotterbach\/orgs","repos_url":"https:\/\/api.github.com\/users\/jotterbach\/repos","events_url":"https:\/\/api.github.com\/users\/jotterbach\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jotterbach\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661969306000,"updated_at":1661969748000,"closed_at":1661969748000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen trying to load a streaming dataset in Google Colab the loading fails with an I\/O error\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nfrom datasets import load_dataset\r\nhf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)\r\nlist(hf_ds.take(5))\r\n```\r\n\r\n## Expected results\r\nIt should load five data points\r\n\r\n## Actual results\r\n```\r\n---------------------------------------------------------------------------\r\nValueError 
Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in \r\n 2 from datasets import load_dataset\r\n 3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)\r\n----> 4 list(hf_ds.take(5))\r\n\r\n6 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 716 \r\n 717 def __iter__(self):\r\n--> 718 for key, example in self._iter():\r\n 719 if self.features:\r\n 720 # `IterableDataset` automatically fills missing columns with None.\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in _iter(self)\r\n 706 else:\r\n 707 ex_iterable = self._ex_iterable\r\n--> 708 yield from ex_iterable\r\n 709 \r\n 710 def _iter_shard(self, shard_idx: int):\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 582 \r\n 583 def __iter__(self):\r\n--> 584 yield from islice(self.ex_iterable, self.n)\r\n 585 \r\n 586 def shuffle_data_sources(self, generator: np.random.Generator) -> \"TakeExamplesIterable\":\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py](https:\/\/localhost:8080\/#) in __iter__(self)\r\n 110 \r\n 111 def __iter__(self):\r\n--> 112 yield from self.generate_examples_fn(**self.kwargs)\r\n 113 \r\n 114 def shuffle_data_sources(self, generator: np.random.Generator) -> \"ExamplesIterable\":\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)\r\n 845 raise ValueError(\"Invalid number of files: %d\" % len(files))\r\n 846 \r\n--> 847 for sub_key, ex in sub_generator(*sub_generator_args):\r\n 848 if not all(ex.values()):\r\n 849 continue\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)\r\n 923 l2_sentences, l2 = parse_file(f2_i, filename2)\r\n 924 \r\n--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):\r\n 926 key = f\"{f_id}\/{line_id}\"\r\n 927 yield key, {l1: s1, l2: s2}\r\n\r\n[~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0\/wmt_utils.py](https:\/\/localhost:8080\/#) in gen()\r\n 895 \r\n 896 def gen():\r\n--> 897 with open(path, encoding=\"utf-8\") as f:\r\n 898 for line in f:\r\n 899 seg_match = re.match(seg_re, line)\r\n\r\nValueError: I\/O operation on closed file.\r\n```\r\n\r\n## Environment info\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 9.0.0. 
(the same error happened with PyArrow version 6.0.0)\r\n- Pandas version: 1.3.5\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4922\/timeline","performed_via_github_app":null,"state_reason":"not_planned","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921","id":1357609003,"node_id":"PR_kwDODunzps4-JVFV","number":4921,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661964747000,"updated_at":1662008808000,"closed_at":1662008693000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896\r\n- #4908","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4921\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4921","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4921.patch","merged_at":1662008693000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4920","id":1357564589,"node_id":"I_kwDODunzps5Q6sqt","number":4920,"title":"Unable to load local tsv files through load_dataset method","user":{"login":"DataNoob0723","id":44038517,"node_id":"MDQ6VXNlcjQ0MDM4NTE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44038517?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DataNoob0723","html_url":"https:\/\/github.com\/DataNoob0723","followers_url":"https:\/\/api.github.com\/users\/DataNoob0723\/followers","following_url":"https:\/\/api.github.com\/users\/DataNoob0723\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DataNoob0723\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DataNoob0723\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DataNoob0723\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DataNoob0723\/orgs","repos_url":"https:\/\/api.github.com\/users\/DataNoob0723\/repos","events_url":"https:\/\/api.github.com\/users\/DataNoob0723\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DataNoob0723\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV\/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` "],"created_at":1661962419000,"updated_at":1662010290000,"closed_at":1662010290000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nUnable to load local tsv files through load_dataset method.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\ndata_files = {\r\n 'train': 'train.tsv',\r\n 'test': 'test.tsv'\r\n}\r\nraw_datasets = load_dataset('tsv', data_files=data_files)\r\n\r\n## Expected results\r\nI am pretty sure the data files exist in the current directory. 
The above code should load them as Datasets, but threw exceptions.\r\n\r\n## Actual results\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in \r\n----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')\r\n\r\n2 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1244 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1245 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n-> 1246 ) from None\r\n 1247 raise e1 from None\r\n 1248 else:\r\n\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/tsv\/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/datasets\/tsv\/tsv.py\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4920\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919","id":1357441599,"node_id":"PR_kwDODunzps4-IxDZ","number":4919,"title":"feat: improve error message on Keys mismatch. 
closes #4917","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","We are having an unrelated issue that makes several tests fail. We are working on that. Once fixed, you will be able to merge the main branch into this, so that you get the fix and the tests pass..."],"created_at":1661956896000,"updated_at":1662367561000,"closed_at":1662367413000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi @lhoestq what do you think?\r\n\r\nLet me give you a code sample:\r\n```py\r\n>>> import datasets\r\n>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})\r\n>>> foo.save_to_disk('foo')\r\n# edit foo\/dataset_info.json e.g. 
rename the 'foo' feature to 'baz'\r\n>>> datasets.load_from_disk('foo')\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in \r\n----> 1 datasets.load_from_disk('foo')\r\n\r\n~\/code\/datasets\/src\/datasets\/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1851 raise FileNotFoundError(f\"Directory {dataset_path} not found\")\r\n 1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()):\r\n-> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n 1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n\r\n~\/code\/datasets\/src\/datasets\/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1230 info=dataset_info,\r\n 1231 split=split,\r\n-> 1232 fingerprint=state[\"_fingerprint\"],\r\n 1233 )\r\n 1234 \r\n\r\n~\/code\/datasets\/src\/datasets\/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 687 self.info.features = inferred_features\r\n 688 else: # make sure the nested columns are in the right order\r\n--> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features)\r\n 690 \r\n 691 # Infer fingerprint if None\r\n\r\n~\/code\/datasets\/src\/datasets\/features\/features.py in reorder_fields_as(self, other)\r\n 1771 return source\r\n 1772 \r\n-> 1773 return Features(recursive_reorder(self, other))\r\n 1774 \r\n 1775 def flatten(self, max_depth=16) -> \"Features\":\r\n\r\n~\/code\/datasets\/src\/datasets\/features\/features.py in recursive_reorder(source, target, stack)\r\n 1760 f\"{source.keys()-target.keys()} are missing from dataset.arrow \"\r\n 1761 f\"and {target.keys()-source.keys()} are missing from dataset_info.json\"+stack_position)\r\n-> 1762 raise ValueError(message)\r\n 1763 return {key: recursive_reorder(source[key], target[key], stack + f\".{key}\") for key in target}\r\n 1764 elif isinstance(source, list):\r\n\r\nValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow).\r\n{'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4919\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4919","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4919.patch","merged_at":1662367413000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4918","id":1357242757,"node_id":"I_kwDODunzps5Q5eGF","number":4918,"title":"Dataset Viewer issue for pysentimiento\/spanish-targeted-sentiment-headlines","user":{"login":"finiteautomata","id":167943,"node_id":"MDQ6VXNlcjE2Nzk0Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/167943?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/finiteautomata","html_url":"https:\/\/github.com\/finiteautomata","followers_url":"https:\/\/api.github.com\/users\/finiteautomata\/followers","following_url":"https:\/\/api.github.com\/users\/finiteautomata\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/finiteautomata\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/finiteautomata\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/finiteautomata\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/finiteautomata\/orgs","repos_url":"https:\/\/api.github.com\/users\/finiteautomata\/repos","events_url":"https:\/\/api.github.com\/users\/finiteautomata\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/finiteautomata\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n\"Capture\r\n","Thanks @severo! 
"],"created_at":1661947747000,"updated_at":1662413794000,"closed_at":1662395564000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/pysentimiento\/spanish-targeted-sentiment-headlines\n\n### Description\n\nAfter moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4918\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4917","id":1357193841,"node_id":"I_kwDODunzps5Q5SJx","number":4917,"title":"Keys mismatch: make error message more informative","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7feeb5648a63b6135a8259dedc3b1e19185ee4c7\/src\/datasets\/features\/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?","Is this open to work on? 
I'd love to take on this as my first issue.","Hi @daspartho I\u2019ve opened a PR #4919 \r\nI don\u2019t think there\u2019s much left to do","ok : )"],"created_at":1661945074000,"updated_at":1662367418000,"closed_at":1662367418000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don\u2019t know when\/why\/how this happens but it deserves its own issue), you will get an error message like:\r\n`ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}`\r\n\r\nWhich is fine when you have only a few features like in the example but it gets very hard to read when you have a lot of features in your dataset.\r\n\r\n**Describe the solution you'd like**\r\nThe error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`.\r\n\r\nWilling to help :)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4917\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4916","id":1357076940,"node_id":"I_kwDODunzps5Q41nM","number":4916,"title":"Apache Beam unable to write the downloaded wikipedia dataset","user":{"login":"Shilpac20","id":71849081,"node_id":"MDQ6VXNlcjcxODQ5MDgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71849081?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shilpac20","html_url":"https:\/\/github.com\/Shilpac20","followers_url":"https:\/\/api.github.com\/users\/Shilpac20\/followers","following_url":"https:\/\/api.github.com\/users\/Shilpac20\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shilpac20\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shilpac20\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shilpac20\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shilpac20\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shilpac20\/repos","events_url":"https:\/\/api.github.com\/users\/Shilpac20\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shilpac20\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["See:\r\n- 
#4915"],"created_at":1661938765000,"updated_at":1661943199000,"closed_at":1661943199000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nHi, I am currently trying to download wikipedia dataset using\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner')\r\n```\r\n\r\n## Expected results\r\nto load the dataset\r\n\r\n## Actual results\r\nI am pasting the error trace here:\r\nDownloading builder script: 35.9kB [00:00, ?B\/s]\r\nDownloading metadata: 30.4kB [00:00, 1.94MB\/s]\r\nUsing custom data configuration 20220401.aa-date=20220401,language=aa\r\nDownloading and preparing dataset wikipedia\/20220401.aa to C:\\Users\\Shilpa.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11.1k\/11.1k [00:00<00:00, 712kB\/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:02<00:00, 2.82s\/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00 You can find the full list of languages and dates [here](https:\/\/dumps.wikimedia.org\/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is available online, as Wikimedia only keeps last X months (depending on the size of the corresponding language dump)): e.g. to see which dates \"aa\" Wikipedia is available online, see https:\/\/dumps.wikimedia.org\/aawiki\/ (as of today 2022-08-31, the available dates are from [20220401](https:\/\/dumps.wikimedia.org\/aawiki\/20220401\/) to [20220820](https:\/\/dumps.wikimedia.org\/aawiki\/20220820\/)).","Hi, the date that I have specified \"20220401\" is available for the language \"aa\". 
The error persists for any other available dates as present in https:\/\/dumps.wikimedia.org\/aawiki\/. The error is mainly due to apache beam not able to write the downloaded files. Any help on this?","I see, sorry, I misread your issue.\r\n\r\nWe are investigating this."],"created_at":1661876146000,"updated_at":1661943175000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nHi, I am currently trying to download wikipedia dataset using \r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download.\r\n\r\n\r\nEnvironment:\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikipedia\", language=\"aa\", date=\"20220401\", split=\"train\",beam_runner='DirectRunner')\r\n```\r\n\r\n## Expected results\r\nto load the dataset\r\n\r\n## Actual results\r\nI am pasting the error trace here:\r\nDownloading builder script: 35.9kB [00:00, ?B\/s]\r\nDownloading metadata: 30.4kB [00:00, 1.94MB\/s]\r\nUsing custom data configuration 20220401.aa-date=20220401,language=aa\r\nDownloading and preparing dataset wikipedia\/20220401.aa to C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11.1k\/11.1k [00:00<00:00, 712kB\/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:02<00:00, 2.82s\/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00\r\n beam_runner='DirectRunner')\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\load.py\", line 1751, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\builder.py\", line 705, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"G:\\Python3.7\\lib\\site-packages\\datasets\\builder.py\", line 1394, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\pipeline.py\", line 574, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\direct\\direct_runner.py\", line 131, in run_pipeline\r\n return 
runner.run_pipeline(pipeline, options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 201, in run_pipeline\r\n options)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 212, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 443, in run_stages\r\n runner_execution_context, bundle_context_manager, bundle_input)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 776, in _execute_bundle\r\n bundle_manager))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 1000, in _run_bundle\r\n data_input, data_output, input_timers, expected_timer_output)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\fn_runner.py\", line 1309, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\portability\\fn_api_runner\\worker_handlers.py\", line 380, in push\r\n response = self.worker.do_instruction(request)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\sdk_worker.py\", line 598, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\sdk_worker.py\", line 635, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\bundle_processor.py\", line 1004, in process_bundle\r\n element.data)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\runners\\worker\\bundle_processor.py\", line 227, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 526, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 528, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File 
\"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive\r\n 
File \"apache_beam\\runners\\worker\\operations.py\", line 905, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"apache_beam\\runners\\common.py\", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 907, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\worker\\operations.py\", line 908, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\\runners\\common.py\", line 1419, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\\runners\\common.py\", line 1417, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\\runners\\common.py\", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\\runners\\common.py\", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"apache_beam\\runners\\common.py\", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\iobase.py\", line 1193, in process\r\n self.writer = 
self.sink.open_writer(init_result, str(uuid.uuid4()))\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\options\\value_provider.py\", line 193, in _f\r\n return fnc(self, *args, **kwargs)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 202, in open_writer\r\n return FileBasedSinkWriter(self, writer_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 419, in __init__\r\n self.temp_handle = self.sink.open(temp_shard_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\parquetio.py\", line 553, in open\r\n self._file_handle = super().open(temp_path)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\options\\value_provider.py\", line 193, in _f\r\n return fnc(self, *args, **kwargs)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filebasedsink.py\", line 139, in open\r\n temp_path, self.mime_type, self.compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\filesystems.py\", line 224, in create\r\n return filesystem.create(path, mime_type, compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\localfilesystem.py\", line 163, in create\r\n return self._path_open(path, 'wb', mime_type, compression_type)\r\n File \"G:\\Python3.7\\lib\\site-packages\\apache_beam\\io\\localfilesystem.py\", line 140, in _path_open\r\n raw_file = io.open(path, mode)\r\nRuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\Users\\\\Shilpa\\\\.cache\\\\huggingface\\\\datasets\\\\wikipedia\\\\20220401.aa-date=20220401,language=aa\\\\2.0.0\\\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train\/Save to parquet\/Write\/WriteImpl\/WriteBundles']\r\n\r\n## Environment info\r\nPython: 3.7.6\r\nWindows 10 Pro\r\ndatasets :2.4.0\r\napache_beam: 2.41.0\r\nmwparserfromhell: 0.6.4\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4915\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4915\/timeline","performed_via_github_app":null,"state_reason":"reopened","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914","id":1355482624,"node_id":"PR_kwDODunzps4-CFyN","number":4914,"title":"Support streaming swda 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661852788000,"updated_at":1661858193000,"closed_at":1661858056000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming swda dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4914\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4914","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4914.patch","merged_at":1661858055000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913","id":1355232007,"node_id":"PR_kwDODunzps4-BP00","number":4913,"title":"Add license and citation information to cosmos_qa 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661840599000,"updated_at":1661852971000,"closed_at":1661852855000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.\r\n\r\nThis PR also updates the citation information.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4913\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4913","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4913.patch","merged_at":1661852855000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4912","id":1355078864,"node_id":"I_kwDODunzps5QxNzQ","number":4912,"title":"datasets map() handles all data at a stroke and takes long 
time","user":{"login":"BruceStayHungry","id":40711748,"node_id":"MDQ6VXNlcjQwNzExNzQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40711748?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BruceStayHungry","html_url":"https:\/\/github.com\/BruceStayHungry","followers_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/followers","following_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/orgs","repos_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/repos","events_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BruceStayHungry\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both options are great and really depend on your case.\r\n\r\nTo choose between the two, here are IMO the main caveats of each approach:\r\n- if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n- on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\n> Why huggingface advises map() function? There should be some advantages to using map()\r\n\r\nTo get the best throughput when training a model, it is often recommended to preprocess your dataset before training. Note that preprocessing may include other steps before tokenization such as data filtering, cleaning, chunking etc. which are often done before training.","Thanks for your clear explanation @lhoestq ! \r\n> * if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n> * on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\nI really agree with you. There should be some trade-off between processing before and during the train loop.\r\nBesides, I find `map()` function can cache the results once it has been executed. Very useful!","I'm closing this issue if you don't mind, feel free to reopen if needed ;)"],"created_at":1661826356000,"updated_at":1662456215000,"closed_at":1662456215000,"author_association":"NONE","active_lock_reason":null,"body":"**1. Background**\r\n\r\nHuggingface datasets package advises using `map()` to process data in batches. In the example code on pretraining masked language model, they use `map()` to tokenize all data at a stroke before the train loop. 
\r\n\r\nThe corresponding code:\r\n```\r\nwith accelerator.main_process_first():\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not args.overwrite_cache,\r\n desc=\"Running tokenizer on every text in dataset\"\r\n )\r\n```\r\n\r\n**2. The problem**\r\n\r\nWhen I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.\r\n\r\nAlternatively, we can choose to tokenize data in the `data-collator`. In this way, the program only tokenizes one batch for the next training step and avoids getting stuck in tokenization.\r\n\r\n**3. My question**\r\n\r\nAs described above, my questions are:\r\n* **Which is better? Process in `map()` or in `data-collator`**\r\n* **Why huggingface advises `map()` function?** There should be some advantages to using `map()`\r\n\r\n\r\nThanks for your answers!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4912\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4911","id":1354426978,"node_id":"I_kwDODunzps5Quupi","number":4911,"title":"[Tests] Ensure `datasets` supports renamed repositories","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin 
"],"created_at":1661784374000,"updated_at":1661787063000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"On https:\/\/hf.co\/datasets you can rename a dataset (or sometimes move it to another user\/org). The website handles redirections correctly and AFAIK `datasets` does as well.\r\n\r\nHowever it would be nice to have an integration test to make sure we don't break support for renamed datasets.\r\n\r\nTo implement this we can use the \/api\/repos\/move endpoint on hub-ci to rename\/move a repo (it is documented at https:\/\/huggingface.co\/docs\/hub\/api)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4911\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4910","id":1354374328,"node_id":"I_kwDODunzps5Quhy4","number":4910,"title":"Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()","user":{"login":"bablf","id":57184353,"node_id":"MDQ6VXNlcjU3MTg0MzUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57184353?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bablf","html_url":"https:\/\/github.com\/bablf","followers_url":"https:\/\/api.github.com\/users\/bablf\/followers","following_url":"https:\/\/api.github.com\/users\/bablf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bablf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bablf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bablf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bablf\/orgs","repos_url":"https:\/\/api.github.com\/users\/bablf\/repos","events_url":"https:\/\/api.github.com\/users\/bablf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bablf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"open","locked":false,"assignee":{"login":"thepurpleowl","id":21123710,"node_id":"MDQ6VXNlcjIxMTIzNzEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21123710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thepurpleowl","html_url":"https:\/\/github.com\/thepurpleowl","followers_url":"https:\/\/api.github.com\/users\/thepurpleowl\/followers","following_url":"https:\/\/api.github.com\/users\/thepurpleowl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thepurpleowl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thepurpleowl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thepurpleowl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thepurpleowl\/orgs","repos_url":"https:\/\/api.github.com\/users\/thepurpleowl\/repos","events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/received_events","type":"User","site_admin":false},"assignees":[{"login":"thepurpleowl","id":21123710,"node_id":"MDQ6VXNlcjIxMTIzNzEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21123710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thepurpleowl","html_url":"https:\/\/github.com\/thepurpleowl","followers_url":"https:\/\/api.github.com\/users\/thepurpleowl\/followers","following_url":"https:\/\/api.github.com\/users\/thepurpleowl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thepurpleowl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thepurpleowl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thepurpleowl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thepurpleowl\/orgs","repos_url":"https:\/\/api.github.com\/users\/thepurpleowl\/repos","events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thepurpleowl\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https:\/\/huggingface.co\/docs\/datasets\/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0","In my case, this was happening because I defined multiple `BuilderConfig` for multiple types, but didn't had all the data files that are requierd by those configs. \r\n\r\nI think this is different than the original issue by @bablf .","Hi ! I think this can be fixed by letting the config_kwargs take over the builder kwargs here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7feeb5648a63b6135a8259dedc3b1e19185ee4c7\/src\/datasets\/load.py#L1533-L1534\r\n\r\nmaybe something like this ?\r\n```python\r\n **{**builder_kwargs, **config_kwargs}\r\n```\r\n\r\nLet me know if you'd like to contribute and fix this bug, so I can assign you :)\r\n\r\n> In my case, this was happening because I defined multiple BuilderConfig for multiple types, but didn't had all the data files that are requierd by those configs.\r\n> \r\n> I think this is different than the original issue by @bablf .\r\n\r\nFeel free to to open an new issue, I'd be happy to help\r\n","@lhoestq Yeah, I want to, please assign.","Cool thank you ! 
Let me know if you have questions or if I can help","@lhoestq On second thoughts, I think this might be expected behavior, although a better error message might help.\r\n\r\nReasoning: given n configs, if no data files are provided for any config, then it should be an error. So why should it not also be an error when, out of n configs, data files are provided for some but not for others? Also, I was using the `--all_configs` flag with `dataset-cli test`.","Ok I see - maybe we should check the values of builder_kwargs and raise an error if any key in config_kwargs tries to overwrite them ? The builder kwargs are determined from the builder's type and location (in some cases it forces the base_path, data_files and config name for example)"],"created_at":1661782308000,"updated_at":1663070326000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIn `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError(\"type object got multiple values for keyword argument \"xyz\"). \r\n\r\nI ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be \r\n```python\r\nbuilder_cls = import_main_class(dataset_module.module_path)\r\nbuilder_kwargs = dataset_module.builder_kwargs\r\ndata_files = builder_kwargs.pop(\"data_files\", data_files)\r\nconfig_name = builder_kwargs.pop(\"config_name\", name)\r\nhash = builder_kwargs.pop(\"hash\")\r\nbase_path = builder_kwargs.pop(\"base_path\")\r\n```\r\nand then pass base_path into `builder_cls`.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"rotten_tomatoes\", base_path=\".\/sample_data\")\r\n```\r\n\r\n## Expected results\r\nThe docs state: `**config_kwargs` \u2014 Keyword arguments to be passed to the [BuilderConfig](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/builder_classes#datasets.DatasetBuilder).\r\n\r\nSo I would expect to be able to pass the base_path into `load_dataset()`. \r\n## Actual results\r\nTypeError(\"type object got multiple values for keyword argument \"base_path\"). 
\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: macOS-12.5-arm64-arm-64bit\r\n- Python version: 3.8.9\r\n- PyArrow version: 9.0.0\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4910\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909","id":1353997788,"node_id":"PR_kwDODunzps499Fhe","number":4909,"title":"Update GLUE evaluation metadata","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661766224000,"updated_at":1661784809000,"closed_at":1661784678000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates the evaluation metadata for GLUE to:\r\n\r\n* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)\r\n* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)\r\n* Fix the `task_id` for some existing defaults\r\n\r\ncc @sashavor @douwekiela ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4909\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4909.patch","merged_at":1661784678000},"is_pull_request":true} 
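For context on the duplicate-keyword failure discussed in issue 4910 above, here is a minimal, self-contained sketch of why unpacking the same key from two kwargs dicts raises a TypeError, and how the merge suggested by @lhoestq lets `config_kwargs` take precedence. The `Builder` class and the paths below are illustrative stand-ins, not the actual `datasets` internals:

```python
# Minimal sketch of the clash described in issue 4910.
# `Builder` is a stand-in class, not the real datasets.DatasetBuilder.
class Builder:
    def __init__(self, base_path=None, name=None):
        self.base_path = base_path
        self.name = name

builder_kwargs = {"base_path": "/cache/datasets"}   # determined internally
config_kwargs = {"base_path": "./sample_data"}      # passed by the user

# Unpacking both mappings duplicates `base_path` and fails at call time.
try:
    Builder(**builder_kwargs, **config_kwargs)
except TypeError as err:
    print(err)  # ... got multiple values for keyword argument 'base_path'

# The suggested fix merges the dicts first, so config_kwargs wins.
builder = Builder(**{**builder_kwargs, **config_kwargs})
print(builder.base_path)  # ./sample_data
```

The key design point is that `{**builder_kwargs, **config_kwargs}` is a plain dict merge, so duplicate keys are resolved right-to-left instead of raising; the alternative discussed later in the thread is to detect the overlap explicitly and raise a clearer error.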
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908","id":1353995574,"node_id":"PR_kwDODunzps499FDS","number":4908,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661766113000,"updated_at":1661789729000,"closed_at":1661789587000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891\r\n- #4896","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4908\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4908","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4908.patch","merged_at":1661789587000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4907","id":1353808348,"node_id":"I_kwDODunzps5QsXnc","number":4907,"title":"None Type error for swda datasets","user":{"login":"hannan72","id":8229163,"node_id":"MDQ6VXNlcjgyMjkxNjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8229163?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hannan72","html_url":"https:\/\/github.com\/hannan72","followers_url":"https:\/\/api.github.com\/users\/hannan72\/followers","following_url":"https:\/\/api.github.com\/users\/hannan72\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hannan72\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hannan72\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hannan72\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hannan72\/orgs","repos_url":"https:\/\/api.github.com\/users\/hannan72\/repos","events_url":"https:\/\/api.github.com\/users\/hannan72\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hannan72\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @hannan72 ! 
I couldn't reproduce the error on my side, can you share the full stack trace please ?","Thanks a lot for your response @lhoestq \r\nThe problem accidentally got solved today and I don't know exactly why it happened yesterday.\r\nThe issue can be closed.","Ok, let us know if you encounter the issue again ;)"],"created_at":1661756720000,"updated_at":1661870621000,"closed_at":1661870621000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI got a `'NoneType' object is not callable` error while loading the swda dataset.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"swda\")\r\n```\r\n\r\n## Expected results\r\nRun without error\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Python version: 3.8.10\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4907\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4906","id":1353223925,"node_id":"I_kwDODunzps5QqI71","number":4906,"title":"Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)","user":{"login":"OPterminator","id":63536981,"node_id":"MDQ6VXNlcjYzNTM2OTgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/63536981?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/OPterminator","html_url":"https:\/\/github.com\/OPterminator","followers_url":"https:\/\/api.github.com\/users\/OPterminator\/followers","following_url":"https:\/\/api.github.com\/users\/OPterminator\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/OPterminator\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/OPterminator\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/OPterminator\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/OPterminator\/orgs","repos_url":"https:\/\/api.github.com\/users\/OPterminator\/repos","events_url":"https:\/\/api.github.com\/users\/OPterminator\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/OPterminator\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py`, this could 
generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (the `nlp` name is no longer in use and is out of date)."],"created_at":1661653404000,"updated_at":1661750596000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nNot able to import datasets.\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nimport os\r\nos.environ[\"WANDB_API_KEY\"] = \"0\" ## to silence warning\r\nimport numpy as np\r\nimport random\r\nimport sklearn\r\nimport matplotlib.pyplot as plt\r\nimport pandas as pd\r\nimport sys\r\nimport tensorflow as tf\r\nimport plotly.express as px\r\nimport transformers\r\nimport tokenizers\r\nimport nlp as nlp\r\nimport utils\r\nimport datasets\r\n```\r\n\r\n## Expected results\r\nThe import should work normally.\r\n## Actual results\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n in \r\n 13 import nlp as nlp\r\n 14 import utils\r\n---> 15 import datasets\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\__init__.py in \r\n 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 45 from .info import DatasetInfo, MetricInfo\r\n---> 46 from .inspect import (\r\n 47 get_dataset_config_info,\r\n 48 get_dataset_config_names,\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\inspect.py in \r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n 29 from .info import DatasetInfo\r\n---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory\r\n 31 from .utils.file_utils import relative_to_absolute_path\r\n 32 from .utils.logging import get_logger\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\load.py in \r\n 53 from .iterable_dataset import IterableDataset\r\n 54 from .metric import Metric\r\n---> 55 from .packaged_modules import (\r\n 56 _EXTENSION_TO_MODULE,\r\n 57 _MODULE_SUPPORTS_METADATA,\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\packaged_modules\\__init__.py in \r\n 4 from typing import List\r\n 5 \r\n----> 6 from .csv import csv\r\n 7 from .imagefolder import imagefolder\r\n 8 from .json import json\r\n\r\n~\\anaconda3\\lib\\site-packages\\datasets\\packaged_modules\\csv\\csv.py in \r\n 13 \r\n 14 \r\n---> 15 logger = datasets.utils.logging.get_logger(__name__)\r\n 16 \r\n 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = [\"names\", \"prefix\"]\r\n\r\nAttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Windows-10-10.0.22000-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 
1.2.4\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4906\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904","id":1353002837,"node_id":"PR_kwDODunzps4959Ad","number":4904,"title":"[LibriSpeech] Fix dev split local_extracted_archive for 'all' config","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This PR fixes a bug introduced in:\r\n- #4184"],"created_at":1661594697000,"updated_at":1661853981000,"closed_at":1661853805000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L60-L61\r\n\r\nThese keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.\r\n\r\nHowever, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L212\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L219\r\n\r\nThe consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exists in the `local_extracted_archive`.\r\n\r\nWhen defining the `audio_file` in `_generate_examples`, since 
`local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** if in non-streaming mode:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2e7142a3c6500b560da45e8d5128e320a09fcbd4\/datasets\/librispeech_asr\/librispeech_asr.py#L259-L263\r\n\r\nThus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`).\r\n\r\nThis PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4904\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4904","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4904.patch","merged_at":1661853805000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903","id":1352539075,"node_id":"PR_kwDODunzps494aud","number":4903,"title":"Fix CI reporting","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661534190000,"updated_at":1661536173000,"closed_at":1661536019000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary.\r\n\r\nThis PR fixes a regression introduced by:\r\n- #4845\r\n\r\nThis introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and 
error.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4903\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4903.patch","merged_at":1661536019000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4902","id":1352469196,"node_id":"I_kwDODunzps5QnQrM","number":4902,"title":"Name the default config `default`","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661530582000,"updated_at":1661530598000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, if a dataset has no configuration, a default configuration is created from the dataset name.\r\n\r\nFor example, for a dataset loaded from the hub repository, such as https:\/\/huggingface.co\/datasets\/user\/dataset (repo id is `user\/dataset`), the default configuration will be `user--dataset`.\r\n\r\nIt might be easier to handle to set it to `default`, or another reserved 
word.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4902\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901","id":1352438915,"node_id":"PR_kwDODunzps494FNX","number":4901,"title":"Raise ManualDownloadError from get_dataset_config_info","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661528756000,"updated_at":1661856141000,"closed_at":1661856004000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PRs raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.\r\n\r\nRelated to:\r\n- #4898\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4901\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4901","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4901.patch","merged_at":1661856004000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4900","id":1352405855,"node_id":"I_kwDODunzps5QnBNf","number":4900,"title":"Dataset Viewer issue for asaxena1990\/Dummy_dataset","user":{"login":"ankurcl","id":56627657,"node_id":"MDQ6VXNlcjU2NjI3NjU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56627657?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ankurcl","html_url":"https:\/\/github.com\/ankurcl","followers_url":"https:\/\/api.github.com\/users\/ankurcl\/followers","following_url":"https:\/\/api.github.com\/users\/ankurcl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ankurcl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ankurcl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ankurcl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ankurcl\/orgs","repos_url":"https:\/\/api.github.com\/users\/ankurcl\/repos","events_url":"https:\/\/api.github.com\/users\/ankurcl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ankurcl\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990\/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data configuration asaxena1990--Dummy_dataset-4a704ed7e5627563\r\n>>> dataset._resolve_features()\r\nFailed to read file 'https:\/\/huggingface.co\/datasets\/asaxena1990\/Dummy_dataset\/resolve\/06885879a8bdd767d2d27695484fc6c83244617a\/dummy_dataset_train.json' with error : JSON parse error: Column() changed from object to array in row 0\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 109, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow\/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 1261, in _resolve_features\r\n features = _infer_features_from_batch(self._head())\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 686, in _head\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File 
\"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 686, in \r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 708, in _iter\r\n yield from ex_iterable\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 112, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 651, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 137, in _generate_tables\r\n f\"This JSON file contain the following fields: {str(list(dataset.keys()))}. \"\r\nAttributeError: 'list' object has no attribute 'keys'\r\n```\r\n\r\nPinging @huggingface\/datasets","Hi ! JSON files containing a list of object are not supported yet, you can use JSON Lines files instead in the meantime\r\n```json\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n...\r\n```"],"created_at":1661526944000,"updated_at":1661532491000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4900\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899","id":1352031286,"node_id":"PR_kwDODunzps492uTO","number":4899,"title":"Re-add code and und language 
tags","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661507337000,"updated_at":1661509638000,"closed_at":1661509460000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes the removal of 2 language tags done by:\r\n- #4882\r\n\r\nThe tags are:\r\n- \"code\": this is not a IANA tag but needed\r\n- \"und\": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af\r\n - used in \"mc4\" and \"udhr\" datasets","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4899\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4899","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4899.patch","merged_at":1661509460000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4898","id":1351851254,"node_id":"I_kwDODunzps5Qk5z2","number":4898,"title":"Dataset Viewer issue for 
timit_asr","user":{"login":"InayatUllah932","id":91126978,"node_id":"MDQ6VXNlcjkxMTI2OTc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/91126978?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/InayatUllah932","html_url":"https:\/\/github.com\/InayatUllah932","followers_url":"https:\/\/api.github.com\/users\/InayatUllah932\/followers","following_url":"https:\/\/api.github.com\/users\/InayatUllah932\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/InayatUllah932\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/InayatUllah932\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/InayatUllah932\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/InayatUllah932\/orgs","repos_url":"https:\/\/api.github.com\/users\/InayatUllah932\/repos","events_url":"https:\/\/api.github.com\/users\/InayatUllah932\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/InayatUllah932\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB\/s]\r\nTraceback (most recent call last):\r\n File 
\"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/timit_asr\/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2\/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.9.6\/lib\/python3.9\/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface\/datasets ","Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https:\/\/huggingface.co\/datasets\/timit_asr\r\n> The dataset needs to be downloaded manually from https:\/\/catalog.ldc.upenn.edu\/LDC93S1","Maybe a better error message for datasets that need manual downloading? 
@severo \r\n\r\nMaybe we can raise a specific exception as done in `load_dataset`...","Yes, ideally something like https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/builder.py#L81\r\n"],"created_at":1661497925000,"updated_at":1661526229000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4898\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4897","id":1351784727,"node_id":"I_kwDODunzps5QkpkX","number":4897,"title":"datasets generate large arrow file","user":{"login":"osayes","id":18533904,"node_id":"MDQ6VXNlcjE4NTMzOTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18533904?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osayes","html_url":"https:\/\/github.com\/osayes","followers_url":"https:\/\/api.github.com\/users\/osayes\/followers","following_url":"https:\/\/api.github.com\/users\/osayes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osayes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osayes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osayes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osayes\/orgs","repos_url":"https:\/\/api.github.com\/users\/osayes\/repos","events_url":"https:\/\/api.github.com\/users\/osayes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osayes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?","@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grows over 200 times in size. 
\r\nI think maybe it doesn't matter, it's just cache after all."],"created_at":1661493076000,"updated_at":1663477672000,"closed_at":1663477672000,"author_association":"NONE","active_lock_reason":null,"body":"Checking the large files on disk, I found this large cache file in the cifar10 data directory:\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/18533904\/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png)\r\n\r\nAs we know, the cifar10 dataset is ~130MB, but the cache file is almost 30GB, so there may be some problem here.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4897\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896","id":1351180409,"node_id":"PR_kwDODunzps49z4fU","number":4896,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661445703000,"updated_at":1661489070000,"closed_at":1661488908000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n- #4891","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4896\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4896","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4896.patch","merged_at":1661488908000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4895","id":1350798527,"node_id":"I_kwDODunzps5Qg4y_","number":4895,"title":"load_dataset method returns Unknown split \"validation\" even if this dir exists","user":{"login":"SamSamhuns","id":13418507,"node_id":"MDQ6VXNlcjEzNDE4NTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13418507?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SamSamhuns","html_url":"https:\/\/github.com\/SamSamhuns","followers_url":"https:\/\/api.github.com\/users\/SamSamhuns\/followers","following_url":"https:\/\/api.github.com\/users\/SamSamhuns\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SamSamhuns\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SamSamhuns\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SamSamhuns\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SamSamhuns\/orgs","repos_url":"https:\/\/api.github.com\/users\/SamSamhuns\/repos","events_url":"https:\/\/api.github.com\/users\/SamSamhuns\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SamSamhuns\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I don't know the root cause, but it looks like it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. If it doesn't work, create a directory called \"aaa\". It worked for me.\r\n","@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get split names from filenames in this case, ignoring directory names; `val` wasn't in the keywords at that time, but this was fixed recently in this PR https:\/\/github.com\/huggingface\/datasets\/pull\/4844. 
","I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https:\/\/huggingface.co\/datasets\/sberbank-ai\/Peter\/tree\/add_splits (add_splits branch)","@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~\/.cache\/huggingface\/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.","This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ","> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !","Looks like the `val\/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.","Thanks for the reply\r\n\r\nI've created a separate [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/4982#issue-1375604693) for my problem.","> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https:\/\/github.com\/huggingface\/datasets\/blob\/38c8c725f3996ff1ff03f6fd461aa6d645321034\/src\/datasets\/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! 
opened a PR: https:\/\/github.com\/huggingface\/datasets\/pull\/4985"],"created_at":1661429460000,"updated_at":1663327271000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe `datasets.load_dataset` returns a `ValueError: Unknown split \"validation\". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split=\"validation\")` even if the `validation` sub-directory exists in the local data path.\r\n\r\nThe data directories are as follows and attached to this issue:\r\n```\r\ntest_data1\r\n |_ train\r\n |_ 1012.png\r\n |_ metadata.jsonl\r\n ...\r\n |_ test\r\n ...\r\n |_ validation\r\n |_ 234.png\r\n |_ metadata.jsonl\r\n ...\r\ntest_data2\r\n |_ train\r\n |_ train_1012.png\r\n |_ metadata.jsonl\r\n ...\r\n |_ test\r\n ...\r\n |_ validation\r\n |_ val_234.png\r\n |_ metadata.jsonl\r\n ...\r\n```\r\n\r\nThey contain the same image files and `metadata.jsonl` but the images in `test_data2` have the split names prepended i.e.\r\n`train_1012.png, val_234.png` and the images in `test_data1` do not have the split names prepended to the image names i.e. `1012.png, 234.png`\r\n\r\nI actually saw in another issue `val` was not recognized as a split name but here I would expect the files to take the split from the parent directory name i.e. val should become part of the validation split?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\ndatasets.logging.set_verbosity_error()\r\nfrom datasets import load_dataset, get_dataset_split_names\r\n\r\n\r\n# the following only finds train, validation and test splits correctly\r\npath = \".\/test_data1\"\r\nprint(\"######################\", get_dataset_split_names(path), \"######################\")\r\n\r\ndataset_list = []\r\nfor spt in [\"train\", \"test\", \"validation\"]:\r\n dataset = load_dataset(path, split=spt)\r\n dataset_list.append(dataset)\r\n\r\n\r\n# the following only finds train and test splits\r\npath = \".\/test_data2\"\r\nprint(\"######################\", get_dataset_split_names(path), \"######################\")\r\n\r\ndataset_list = []\r\nfor spt in [\"train\", \"test\", \"validation\"]:\r\n dataset = load_dataset(path, split=spt)\r\n dataset_list.append(dataset)\r\n```\r\n\r\n\r\n## Expected results\r\n```\r\n###################### ['train', 'test', 'validation'] ######################\r\n###################### ['train', 'test', 'validation'] ######################\r\n```\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"test_data_loader.py\", line 11, in \r\n\r\n dataset = load_dataset(path, split=spt)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1758, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 893, in as_dataset\r\n datasets = map_nested(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 924, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 993, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 211, in 
read\r\n files = self.get_file_instructions(name, instructions, split_infos)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 184, in get_file_instructions\r\n file_instructions = make_file_instructions(\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 107, in make_file_instructions\r\n absolute_instructions = instruction.to_absolute(name2len)\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 616, in to_absolute\r\n return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 616, in \r\n return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]\r\n File \"\/home\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 433, in _rel_to_abs_instr\r\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\r\nValueError: Unknown split \"validation\". Should be one of ['train', 'test'].\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Linux Ubuntu 18.04\r\n- Python version: 3.8.12\r\n- PyArrow version: 9.0.0\r\n\r\nData files\r\n\r\n[test_data1.zip](https:\/\/github.com\/huggingface\/datasets\/files\/9424463\/test_data1.zip)\r\n[test_data2.zip](https:\/\/github.com\/huggingface\/datasets\/files\/9424468\/test_data2.zip)\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4895\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894","id":1350667270,"node_id":"PR_kwDODunzps49yIvr","number":4894,"title":"Add citation information to makhzan 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661422600000,"updated_at":1661840514000,"closed_at":1661433581000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:\r\n- https:\/\/github.com\/zeerakahmed\/makhzan\/issues\/43","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4894\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4894.patch","merged_at":1661433581000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4893","id":1350655674,"node_id":"I_kwDODunzps5QgV66","number":4893,"title":"Oversampling strategy for iterable datasets in 
`interleave_datasets`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":3761482852,"node_id":"LA_kwDODunzps7gM6xk","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20second%20issue","name":"good second issue","color":"BDE59C","default":false,"description":"Issues a bit more difficult than \"Good First\" issues"}],"state":"open","locked":false,"assignee":{"login":"ylacombe","id":52246514,"node_id":"MDQ6VXNlcjUyMjQ2NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52246514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ylacombe","html_url":"https:\/\/github.com\/ylacombe","followers_url":"https:\/\/api.github.com\/users\/ylacombe\/followers","following_url":"https:\/\/api.github.com\/users\/ylacombe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ylacombe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ylacombe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ylacombe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ylacombe\/orgs","repos_url":"https:\/\/api.github.com\/users\/ylacombe\/repos","events_url":"https:\/\/api.github.com\/users\/ylacombe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ylacombe\/received_events","type":"User","site_admin":false},"assignees":[{"login":"ylacombe","id":52246514,"node_id":"MDQ6VXNlcjUyMjQ2NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52246514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ylacombe","html_url":"https:\/\/github.com\/ylacombe","followers_url":"https:\/\/api.github.com\/users\/ylacombe\/followers","following_url":"https:\/\/api.github.com\/users\/ylacombe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ylacombe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ylacombe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ylacombe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ylacombe\/orgs","repos_url":"https:\/\/api.github.com\/users\/ylacombe\/repos","events_url":"https:\/\/api.github.com\/users\/ylacombe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ylacombe\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following 
snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n","Great @ylacombe thanks ! I'm assigning you this issue","Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any questions :)","Hi @lhoestq,\r\n\r\nI actually already wrote the code last time [on this commit](https:\/\/github.com\/ylacombe\/datasets\/commit\/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still need your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had changed the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied to an `IterableDataset` because one only knows an iterable dataset is out of samples when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right now, the behavior is not consistent between an `IterableDataset` list and a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoids having too many samples, so I would recommend keeping it as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ","Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`"],"created_at":1661422015000,"updated_at":1663069839000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"In https:\/\/github.com\/huggingface\/datasets\/pull\/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. 
However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.\r\n\r\nIt would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy\r\n\r\n```python\r\n>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable\r\n>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [0, 1, 2]], {}))\r\n>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [10, 11, 12, 13]], {}))\r\n>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {\"a\": i}) for i in [20, 21, 22, 23, 24]], {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3]) # is supported\r\n>>> [x[\"a\"] for x in dataset]\r\n[0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy=\"all_exhausted\") # is not supported yet\r\n>>> [x[\"a\"] for x in dataset]\r\n[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]\r\n```\r\n\r\nThis can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py`\r\n\r\nI would be happy to share some guidance if anyone would like to give it a shot :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4893\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892","id":1350636499,"node_id":"PR_kwDODunzps49yCD3","number":4892,"title":"Add citation to ro_sts and ro_sts_parallel datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR 
live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4892). All of your documentation changes will be reflected on that endpoint."],"created_at":1661421066000,"updated_at":1661424596000,"closed_at":1661424596000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information:\r\n- https:\/\/github.com\/dumitrescustefan\/RO-STS\/issues\/4","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4892\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4892","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4892.patch","merged_at":1661424596000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891","id":1350589813,"node_id":"PR_kwDODunzps49x382","number":4891,"title":"Fix missing tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1661418857000,"updated_at":1661435015000,"closed_at":1661435014000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. 
Subsequent PRs will follow to complete this task.\r\n\r\nRelated to:\r\n- #4833\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4891\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4891.patch","merged_at":1661435014000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890","id":1350578029,"node_id":"PR_kwDODunzps49x1YC","number":4890,"title":"add Dataset.from_list","user":{"login":"sanderland","id":48946947,"node_id":"MDQ6VXNlcjQ4OTQ2OTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48946947?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanderland","html_url":"https:\/\/github.com\/sanderland","followers_url":"https:\/\/api.github.com\/users\/sanderland\/followers","following_url":"https:\/\/api.github.com\/users\/sanderland\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanderland\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanderland\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanderland\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanderland\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanderland\/repos","events_url":"https:\/\/api.github.com\/users\/sanderland\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanderland\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@albertvillanova it seems tests fail on pyarrow 6, perhaps from_pylist is a v7 method? How do you usually handle these version differences?\r\nAdded something that at least works"],"created_at":1661418358000,"updated_at":1662114179000,"closed_at":1662114033000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As discussed in #4885 \r\n\r\nI initially added this bit at the end, thinking filling this field was necessary as it is done in from_dict. 
\r\nHowever, it seems the constructor takes care of filling info when it is empty.\r\n```\r\nif info.features is None:\r\n info.features = Features(\r\n {\r\n col: generate_from_arrow_type(coldata.type)\r\n for col, coldata in zip(pa_table.column_names, pa_table.columns)\r\n }\r\n )\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4890\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4890","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4890.patch","merged_at":1662114033000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4889","id":1349758525,"node_id":"I_kwDODunzps5Qc649","number":4889,"title":"torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.","torchaudio did a change in [0.12](https:\/\/github.com\/pytorch\/audio\/releases\/tag\/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. 
(https:\/\/github.com\/pytorch\/audio\/pull\/2419, https:\/\/github.com\/pytorch\/audio\/pull\/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors."],"created_at":1661360083000,"updated_at":1661361068000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https:\/\/github.com\/huggingface\/transformers\/pull\/18749\r\n\r\n## Steps to reproduce the bug\r\n\r\nIf you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.\r\n\r\n```python\r\n#!\/usr\/bin\/env python3\r\nfrom datasets import load_dataset\r\nimport datasets\r\nimport numpy as np\r\nimport torch\r\nimport torchaudio\r\nprint(\"torch vesion\", torch.__version__)\r\nprint(\"torchaudio vesion\", torchaudio.__version__)\r\n\r\nsave_audio = True\r\nload_audios = False\r\n\r\nif save_audio:\r\n ds = load_dataset(\"common_voice\", \"en\", split=\"train\", streaming=True)\r\n ds = ds.cast_column(\"audio\", datasets.Audio(sampling_rate=16_000))\r\n ds_iter = iter(ds)\r\n sample = next(ds_iter)\r\n\r\n np.save(f\"audio_sample_{torch.__version__}\", sample[\"audio\"][\"array\"])\r\n print(sample[\"audio\"][\"array\"])\r\n\r\nif load_audios:\r\n array_torch_11 = np.load(\"\/home\/patrick\/audio_sample_1.11.0+cu102.npy\")\r\n print(\"Array 11 Shape\", array_torch_11.shape)\r\n print(\"Array 11 abs sum\", np.sum(np.abs(array_torch_11)))\r\n array_torch_12 = np.load(\"\/home\/patrick\/audio_sample_1.12.1+cu102.npy\")\r\n print(\"Array 12 Shape\", array_torch_12.shape)\r\n print(\"Array 12 abs sum\", np.sum(np.abs(array_torch_12)))\r\n```\r\n\r\nHaving saved the tensors the print output yields:\r\n\r\n```\r\ntorch vesion 1.12.1+cu102\r\ntorchaudio vesion 0.12.1+cu102\r\nArray 11 Shape (122880,)\r\nArray 11 abs sum 1396.4988\r\nArray 12 Shape (123264,)\r\nArray 12 abs sum 1396.5193\r\n```\r\n\r\n## Expected results\r\ntorchaudio 11.0 and 12.1 should yield same results.\r\n\r\n## Actual results\r\nSee above.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.1.dev0\r\n- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4889\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
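A note on the torchaudio report above: since torchaudio 0.12 hands MP3 decoding to FFmpeg and drops the old padding scheme, the two versions return arrays of different lengths, so an element-wise comparison fails outright. The sketch below is a minimal diagnostic, assuming the two `.npy` files saved by the reproduction script are available locally (the paths are placeholders, adjust as needed); it only compares the leading overlap, so treat it as a rough check rather than a strict equality test:

```python
import numpy as np

# Placeholder paths: adjust to wherever the reproduction script saved its .npy files.
array_torch_11 = np.load("audio_sample_1.11.0+cu102.npy")
array_torch_12 = np.load("audio_sample_1.12.1+cu102.npy")

# The FFmpeg-based decoder no longer pads the output, so the lengths differ;
# compare only the region both arrays cover.
n = min(len(array_torch_11), len(array_torch_12))
print("length difference:", abs(len(array_torch_11) - len(array_torch_12)), "samples")
print("max abs diff over overlap:", np.max(np.abs(array_torch_11[:n] - array_torch_12[:n])))
```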
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4888","id":1349447521,"node_id":"I_kwDODunzps5Qbu9h","number":4888,"title":"Dataset Viewer issue for subjqa","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It's a bug in the viewer, thanks for reporting it. 
We're hoping to update to a new version in the next few days which should fix it.","Fixed \r\n\r\nhttps:\/\/huggingface.co\/datasets\/subjqa\r\n\r\n\"Capture\r\n"],"created_at":1661347580000,"updated_at":1662625422000,"closed_at":1662625422000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/subjqa\n\n### Description\n\nGetting the following error for this dataset:\r\n\r\n```\r\nStatus code: 500\r\nException: Status500Error\r\nMessage: 2 or more items returned, instead of 1\r\n```\r\n\r\nNot sure what's causing it though \ud83e\udd14 \n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4888\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4887","id":1349426693,"node_id":"PR_kwDODunzps49t_PM","number":4887,"title":"Add \"cc-by-nc-sa-2.0\" to list of licenses ","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Sorry for the issue @albertvillanova! I think it's now fixed! 
:heart: "],"created_at":1661346709000,"updated_at":1661509892000,"closed_at":1661509760000,"author_association":"MEMBER","active_lock_reason":null,"body":"Datasets side of https:\/\/github.com\/huggingface\/hub-docs\/pull\/285","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4887\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4887","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4887","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4887.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4887.patch","merged_at":1661509760000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4886","id":1349285569,"node_id":"I_kwDODunzps5QbHbB","number":4886,"title":"Loading huggan\/CelebA-HQ throws pyarrow.lib.ArrowInvalid","user":{"login":"JeanKaddour","id":11850255,"node_id":"MDQ6VXNlcjExODUwMjU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11850255?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JeanKaddour","html_url":"https:\/\/github.com\/JeanKaddour","followers_url":"https:\/\/api.github.com\/users\/JeanKaddour\/followers","following_url":"https:\/\/api.github.com\/users\/JeanKaddour\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JeanKaddour\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JeanKaddour\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JeanKaddour\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JeanKaddour\/orgs","repos_url":"https:\/\/api.github.com\/users\/JeanKaddour\/repos","events_url":"https:\/\/api.github.com\/users\/JeanKaddour\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JeanKaddour\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
IIRC one of the files in this dataset is corrupted due to https:\/\/github.com\/huggingface\/datasets\/pull\/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?"],"created_at":1661340261000,"updated_at":1662654544000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nLoading huggan\/CelebA-HQ throws pyarrow.lib.ArrowInvalid\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('huggan\/CelebA-HQ')\r\n```\r\n\r\n## Expected results\r\nSee https:\/\/colab.research.google.com\/drive\/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd\r\n\r\n## Actual results\r\n```\r\n File \"\/home\/jean\/projects\/cold_diffusion\/celebA.py\", line 4, in \r\n dataset = load_dataset('huggan\/CelebA-HQ')\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/datasets\/load.py\", line 1793, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 793, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/datasets\/builder.py\", line 1274, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/tqdm\/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/datasets\/packaged_modules\/parquet\/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"\/home\/jean\/miniconda3\/envs\/seq\/lib\/python3.10\/site-packages\/pyarrow\/parquet\/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow\/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow\/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. 
Either the file is corrupted or this is not a parquet file.\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: datasets-2.4.1.dev0\r\n- Platform: Ubuntu 18.04\r\n- Python version: 3.10\r\n- PyArrow version: pyarrow 9.0.0\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4886\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4885","id":1349181448,"node_id":"I_kwDODunzps5QauAI","number":4885,"title":"Create dataset from list of dicts","user":{"login":"sanderland","id":48946947,"node_id":"MDQ6VXNlcjQ4OTQ2OTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48946947?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanderland","html_url":"https:\/\/github.com\/sanderland","followers_url":"https:\/\/api.github.com\/users\/sanderland\/followers","following_url":"https:\/\/api.github.com\/users\/sanderland\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanderland\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanderland\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanderland\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanderland\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanderland\/repos","events_url":"https:\/\/api.github.com\/users\/sanderland\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanderland\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementing `Dataset.from_list` using the PyArrow `Table.from_pylist`\r\n\r\nWhat do you think?\r\nLet's see if other people have other suggestions...","Thanks for the quick and positive reply @albertvillanova! \r\n`from_list` seems sensible. 
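For reference, a minimal sketch of what this could look like on the PyArrow side (`rows` here is just an illustrative list of dicts, not code from this issue):\r\n\r\n```python\r\nimport pyarrow as pa\r\n\r\n# a list of dicts, the input format discussed above\r\nrows = [{\"text\": \"a\", \"label\": 0}, {\"text\": \"b\", \"label\": 1}]\r\n\r\n# Table.from_pylist infers the schema from the dicts' keys and values,\r\n# so a Dataset.from_list could presumably just wrap the resulting table\r\ntable = pa.Table.from_pylist(rows)\r\nprint(table.num_rows)  # 2\r\n```\r\n\r\n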
Have opened a PR so we can discuss details there.","Resolved via #4890."],"created_at":1661335284000,"updated_at":1662652972000,"closed_at":1662652972000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I often find myself with data from a variety of sources, and a list of dicts is very common among these.\r\nHowever, converting this to a Dataset is a little awkward, requiring either\r\n\r\n```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```\r\nwhich can error out on some more exotic values such as 2-d arrays, for reasons that are not entirely clear:\r\n> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')\r\n\r\nAlternatively:\r\n```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```\r\nwhich works, but is a little ugly.\r\n\r\n**Describe the solution you'd like**\r\nEither `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such.\r\n\r\nI am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the solutions would be preferred.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4885\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4884","id":1349105946,"node_id":"PR_kwDODunzps49s6Aj","number":4884,"title":"Fix documentation card of math_qa dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4884). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1661331656000,"updated_at":1661340797000,"closed_at":1661340796000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix documentation card of math_qa dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4884\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4884","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4884","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4884.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4884.patch","merged_at":1661340796000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4883","id":1349083235,"node_id":"I_kwDODunzps5QaWBj","number":4883,"title":"With dataloader RSS memory consumed by HF datasets monotonically increases","user":{"login":"apsdehal","id":3616806,"node_id":"MDQ6VXNlcjM2MTY4MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apsdehal","html_url":"https:\/\/github.com\/apsdehal","followers_url":"https:\/\/api.github.com\/users\/apsdehal\/followers","following_url":"https:\/\/api.github.com\/users\/apsdehal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apsdehal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apsdehal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apsdehal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apsdehal\/orgs","repos_url":"https:\/\/api.github.com\/users\/apsdehal\/repos","events_url":"https:\/\/api.github.com\/users\/apsdehal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apsdehal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measurements, since python's GC is scheduled so you might be measuring the wrong thing. 
This gives us:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom transformers import BertTokenizer\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nBATCH_SIZE = 32\r\nNUM_TRIES = 100\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndef transform(x):\r\n x.update(tokenizer(x[\"text\"], return_tensors=\"pt\", max_length=64, padding=\"max_length\", truncation=True))\r\n x.pop(\"text\")\r\n x.pop(\"label\")\r\n return x\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\ndataset.set_transform(transform)\r\ntrain_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n\r\ncount = 0\r\nwhile count < NUM_TRIES:\r\n for idx, batch in enumerate(train_loader): pass\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n print(count, mem_after - mem_before)\r\n count += 1\r\n```\r\n\r\nNow running it:\r\n\r\n```\r\n$ python dl-leak.py \r\nReusing dataset imdb (\/home\/stas\/.cache\/huggingface\/datasets\/imdb\/plain_text\/1.0.0\/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1)\r\n0 4.43359375\r\n1 4.4453125\r\n2 4.44921875\r\n3 4.44921875\r\n4 4.4609375\r\n5 4.46484375\r\n6 4.46484375\r\n7 4.46484375\r\n8 4.46484375\r\n9 4.46484375\r\n10 4.46484375\r\n11 4.46484375\r\n12 4.46484375\r\n13 4.46484375\r\n14 4.46484375\r\n15 4.46484375\r\n16 4.46484375\r\n```\r\n\r\nIt's normal that at the beginning there is a small growth in memory usage, but after 5 cycles it gets steady.","Unless of course you're referring the memory growth during the first try. Is that what you're referring to? And since your ds is small it's hard to see the growth - could it be just because some records are longer and it needs to allocate more memory for those?\r\n\r\nThough while experimenting with this I have observed a peculiar thing, if I concatenate 2 datasets, I don't see any growth at all. But that's probably because the program allocated additional peak RSS memory to concatenate and then is re-using the memory\r\n\r\nI basically tried to see if I make the dataset much longer, I'd expect not to see any memory growth once the 780 records of the imdb ds have been processed once.","It is hard to say if it is directly reproducible in this setup. Maybe it is specific to the images stored in the CM4 case which cause a memory leak. 
I am still running your script and seeing if I can reproduce that particular leak in this case.","I was able to reproduce the leak with:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom datasets import load_from_disk\r\nimport time\r\n\r\nDATASET_PATH = \"\/hf\/m4-master\/data\/cm4\/cm4-10000-v0.1\"\r\n\r\ndataset = load_from_disk(DATASET_PATH)\r\n\r\n# truncate to a tiny dataset\r\ndataset = dataset.select(range(1000))\r\n\r\nprint(f\"dataset: {len(dataset)} records\")\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\nfor idx, rec in enumerate(dataset):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nYou need to adjust the DATASET_PATH record.\r\n\r\nwhich you get from\r\n\r\n```\r\ngsutil -m cp \"gs:\/\/hf-science-m4\/cm4\/cm4-10000-v0.1\/dataset.arrow\" \"gs:\/\/hf-science-m4\/cm4\/cm4-10000-v0.1\/dataset_info.json\" \"gs:\/\/hf-science-m4\/cm4\/cm4-10000-v0.1\/state.json\" .\r\n```\r\n(I assume the hf folks have the perms) - it's a smallish dataset (10k)\r\n\r\nthen you run:\r\n```\r\n$ python ds.py\r\ndataset: 1000 records\r\n 0 1.0156MB\r\n 100 126.3906MB\r\n 200 142.8906MB\r\n 300 168.5586MB\r\n 400 218.3867MB\r\n 500 230.7070MB\r\n 600 238.9570MB\r\n 700 263.3789MB\r\n 800 288.1289MB\r\n 900 300.5039MB\r\n```\r\n\r\nyou should be able to see the leak ","This issue has nothing to do with `PIL`'s decoder. I removed it and the problem is still there.\r\n\r\nI then traced this leak to this single call: `pa_table.to_pydict()` here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/08a7b389cdd6fb49264a72aa8ccfc49a233494b6\/src\/datasets\/formatting\/formatting.py#L138-L140\r\n\r\nI can make it leak much faster by modifying that code to repeat `pa_table.to_pydict()` many times in a row. It shouldn't have that impact:\r\n\r\n```\r\nclass PythonArrowExtractor(BaseArrowExtractor[dict, list, dict]):\r\n def extract_row(self, pa_table: pa.Table) -> dict:\r\n x = [pa_table.to_pydict() for x in range(200)]\r\n return _unnest(pa_table.to_pydict())\r\n```\r\n\r\n@lhoestq - do you know what might be happening inside `pa_table.to_pydict()`, as this is in the `pyarrow` domain. Perhaps you know someone to tag from that project?\r\n\r\nProbably next need to remove `datasets` from the equation and make a reproducible case with just `pyarrow` directly.\r\n\r\nThe problem already happens with `pyarrow==6.0.0` or later (minimum for current `datasets`)\r\n\r\nI'm also trying to dig in with `objgraph` to see if there are any circular references which prevent objects from being freed, but no luck there so far. And I'm pretty sure `to_pydict` is not a python code, so the problem is likely to happen somewhere outside of python's GC.","This appears to be the same issue I think: https:\/\/github.com\/huggingface\/datasets\/issues\/4528\r\nI dug into the repro code there and it's the same behavior with the same leak, but it's a pure nlp dataset and thus much faster to work with. \r\n","I went all the way back to `pyarrow==1.0.0` and `datasets==1.12.0` and the problem is still there. How is it even possible that it wasn't noticed all this time. \r\n\r\nCould it be that the leak is in some 3rd party component `pyarrow` relies on? 
After all, while downgrading I have only downgraded the above 2 packages.\r\n","Also found this warning \r\n\r\n> Be careful: if you don't pass the ArrowArray struct to a consumer,\r\n> array memory will leak. This is a low-level function intended for\r\n> expert users.\r\n\r\nsee: https:\/\/github.com\/apache\/arrow\/blob\/99b57e84277f24e8ec1ddadbb11ef8b4f43c8c89\/python\/pyarrow\/table.pxi#L2515-L2517\r\n\r\nperhaps something triggers this condition?\r\n\r\nI have no idea if it's related - this is just something that came up during my research.","Does it crash with OOM at some point? If it doesn't, it isn't a leak, just aggressive caching or a custom allocator that doesn't like to give memory back (not uncommon). #4528 looks like it hits a steady state.\r\n\r\nI believe the underlying arrow libs use a custom C allocator. Some of those are designed not to give back to the OS, but keep heap memory for themselves to re-use (hitting up the OS involves more expensive mutex locks, contention, etc). The greedy behaviour can be undesirable though. There are likely flags to change the allocator behaviour, and one could likely build without any custom allocators (or use a different one).","> Does it crash with OOM at some point?\r\n\r\nIn the original setup where we noticed this problem, it was indeed ending in an OOM","> https:\/\/github.com\/huggingface\/datasets\/issues\/4528 looks like it hits a steady state.\r\n\r\n@rwightman in the plot I shared, the steady state comes from the `time.sleep(100)` I added at the end of the script, to showcase that even the garbage collector couldn't free that allocated memory.\r\n","Could this be related to this discussion about a potential memory leak in pyarrow: https:\/\/issues.apache.org\/jira\/browse\/ARROW-11007 ?\r\n\r\n(Note: I've tried `import pyarrow; pyarrow.jemalloc_set_decay_ms(0)` and the memory leak is still happening on your toy example)","> @lhoestq - do you know what might be happening inside pa_table.to_pydict(), as this is in the pyarrow domain. Perhaps you know someone to tag from that project?\r\n\r\n`to_pydict` calls `to_pylist` on each column (i.e. on each PyArrow Array). Then it iterates over the array and calls `as_py` on each element. The `as_py` implementation depends on the data type. For strings I think it simply gets the buffer that contains the binary string data that is defined in C++\r\n\r\nThe Arrow team is pretty responsive at user@arrow.apache.org if it can help\r\n\r\n> Probably next need to remove datasets from the equation and make a reproducible case with just pyarrow directly.\r\n\r\nThat would be ideal indeed. 
Would be happy to help on this, can you give me access to the bucket so I can try with your data?\r\n\r\nI added you to the bucket @lhoestq ","It looks like an issue with memory mapping:\r\n- the amount of memory used in the end corresponds to the size of the dataset\r\n- setting `keep_in_memory=True` in `load_from_disk` loads the dataset in RAM, and **doesn't cause any memory leak**","Here is code to reproduce this issue using only PyArrow and a dummy arrow file:\r\n```python\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\n\r\nARROW_PATH = \"tmp.arrow\"\r\n\r\nif not os.path.exists(ARROW_PATH):\r\n    arr = pa.array([b\"a\" * (200 * 1024)] * 1000)  # ~200MB\r\n    table = pa.table({\"a\": arr})\r\n\r\n    with open(ARROW_PATH, \"wb\") as f:\r\n        writer = pa.RecordBatchStreamWriter(f, schema=table.schema)\r\n        writer.write_table(table)\r\n        writer.close()\r\n\r\n\r\ndef memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n    memory_mapped_stream = pa.memory_map(filename)\r\n    opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n    pa_table = opened_stream.read_all()\r\n    return pa_table\r\n\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\nfor idx, x in enumerate(arr):\r\n    if idx % 100 == 0:\r\n        gc.collect()\r\n        time.sleep(0.1)\r\n        mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n        print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nprints\r\n```\r\n 0 0.2500MB\r\n 100 19.8008MB\r\n 200 39.3320MB\r\n 300 58.8633MB\r\n 400 78.3945MB\r\n 500 97.9258MB\r\n 600 117.4570MB\r\n 700 136.9883MB\r\n 800 156.5195MB\r\n 900 176.0508MB\r\n```\r\nNote that this example simply iterates over the `pyarrow.lib.BinaryScalar` objects in the array. Running `.as_py()` is not needed to experience the memory issue.","@lhoestq that does indeed increase in memory, but if you iterate over the array again after the first time, or re-open and remap the same file (repeat `table = memory_mapped_arrow_table_from_file(ARROW_PATH)`) before re-iterating, it doesn't move past 195MB... it would appear another step is needed to continue consuming memory past that... 
hmmm\r\n\r\nAre the pa_tables held on to anywhere after they are iterated in the real code?\r\n\r\nIn my hack, if you do a bunch of cut & paste and then change the arr name for each iteration \r\n\r\n```\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nfor idx, x in enumerate(arr):\r\n    if idx % 100 == 0:\r\n        gc.collect()\r\n        time.sleep(0.1)\r\n        mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n        print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr1 = table[0]\r\n\r\nfor idx, x in enumerate(arr1):\r\n    if idx % 100 == 0:\r\n        gc.collect()\r\n        time.sleep(0.1)\r\n        mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n        print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr2 = table[0]\r\n\r\nfor idx, x in enumerate(arr2):\r\n    if idx % 100 == 0:\r\n        gc.collect()\r\n        time.sleep(0.1)\r\n        mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n        print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\n\r\nit leaks; if all arrs have the same name (so the previous one gets cleaned up) it does not, and goes back to 0. Is there anything that could be holding onto a reference of an intermediary equivalent of arr in the real use case?\r\n\r\n\r\n\r\n","Yes, we have already established here https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1232063891 that when one iterates over the whole dataset multiple times, it consumes a bit more memory in the next few repetitions and then remains steady. \r\n\r\nWhich means that when a new iterator is created over the same dataset, all the memory from the previous iterator is re-used.\r\n\r\nSo the leak happens primarily when the iterator is \"drained\" the first time, which tells me that either a circular reference is created somewhere which only gets released when the iterator is destroyed, or there is some global variable that keeps piling up the memory and doesn't release it in time.\r\n\r\nAlso I noticed some `__del__` methods, which won't destroy objects automatically and against which there is usually a warning: https:\/\/stackoverflow.com\/a\/1481512\/9201239\r\n\r\nThere are also some `weakref`s in the code which too may lead to leaks or weird problems at times.\r\n","@stas00 my point was, I'm not convinced @lhoestq's last example illustrates the leak, but rather the differences between memory mapping and in-memory usage patterns. If you destroy arr, the memory map impl goes back to 0 each iteration. The amount of memory that 'looks' like it is leaked in the first pass differs quite a bit between memory mapped vs in memory, but the underlying issue is likely a circular reference, or reference(s) which were not cleaned up, that would impact either case, but likely much more visible with mmap.","Thank you for clarifying, Ross. \r\n\r\nI think we agree that it's almost certain that the `datasets` iterator traps some inner variable that prevents object freeing, since if we create the iterator multiple times (and drain it), after a few runs no new memory is allocated. We could try to dig in more with `objgraph` - my main concern is if the problem happens somewhere outside of python (i.e. in the pyarrow cpp implementation), in which case it'd be much more difficult to trace. 
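As a concrete starting point, a minimal `objgraph` probe might look like this (a sketch only - it can only catch python-side objects, which is exactly the limitation mentioned above):\r\n\r\n```python\r\nimport gc\r\nimport objgraph\r\n\r\ngc.collect()\r\n# print the object types whose instance counts grew since the last call;\r\n# a python-side leak shows up as counts that keep increasing across iterations\r\nobjgraph.show_growth(limit=10)\r\n```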
\r\n\r\nI wish there was a way on linux to tell the program to free no-longer-used memory at will.","FWIW, I revisited some code I had in the works to use HF datasets w\/ timm train & val scripts. There is no leak there across multiple epochs. It uses the defaults. \r\n\r\nIt's worth noting that with imagenet `keep_in_memory=True` isn't even an option because the train arrow file is ~140GB and my local memory is less. The virtual address space reflects mmap (> 150GB) and doesn't increase over epochs that I noticed. I have some perf issues to bring up wrt the current setup, but that's a separate and lower prio discussion to have elsewhere...","# Notes \r\n\r\nAfter reading many issues and trying many things, here is a summary of what I've learned.\r\n\r\nI'm now using @lhoestq's repro case as it's pyarrow-isolated: https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1242034985\r\n\r\n\r\n## 1. pyarrow memory backends\r\n\r\nit has 3 backends, I tried them all with the same results\r\n\r\n```\r\npa.set_memory_pool(pa.jemalloc_memory_pool())\r\npa.set_memory_pool(pa.mimalloc_memory_pool())\r\npa.set_memory_pool(pa.system_memory_pool())\r\n```\r\n\r\n## 2. quick release\r\n\r\nThe `jemalloc` backend supports quick release\r\n\r\n```\r\npa.jemalloc_set_decay_ms(0)\r\n```\r\n\r\nit doesn't make any difference in this case\r\n\r\n## 3. actual memory allocations\r\n\r\nthis is a useful tracer for PA memory allocators\r\n```\r\npa.log_memory_allocations(enable=True)\r\n```\r\n\r\nit nicely reports memory allocations and releases when the arrow file is created the first time.\r\n\r\nbut when we then try to do `enumerate(arr)` this logger reports 0 allocations.\r\n\r\nThis summary also reports no allocations when the script runs the second time (post file creation):\r\n```\r\nmem_pool = pa.default_memory_pool()\r\nprint(f\"PyArrow mem pool info: {mem_pool.backend_name} backend, {mem_pool.bytes_allocated()} allocated, \"\r\n      f\"{mem_pool.max_memory()} max allocated, \")\r\n\r\nprint(f\"PyArrow total allocated bytes: {pa.total_allocated_bytes()}\")\r\n```\r\n\r\nHowever, it's easy to see by using `tracemalloc` (which only measures python's memory allocations) that it's PA that leaks, since `tracemalloc` shows fixed memory\r\n\r\n(this is bolted on top of the original repro script)\r\n\r\n```\r\nimport tracemalloc\r\ntracemalloc.start()\r\n\r\n[...]\r\nfor idx, x in enumerate(arr):\r\n    if idx % 10 == 0:\r\n        gc.collect()\r\n        time.sleep(0.1)\r\n        mem_after = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\n        mem_use = pa.total_allocated_bytes() - start_use\r\n        mem_peak = pool.max_memory() - start_peak_use\r\n\r\n        second_size, second_peak = tracemalloc.get_traced_memory()\r\n        mem_diff = (second_size - first_size) \/ 2**20\r\n        mem_peak_diff = (second_peak - first_peak) \/ 2**20\r\n\r\n        # pa.jemalloc_memory_pool().release_unused()\r\n        # pa.mimalloc_memory_pool().release_unused()\r\n        # pa.system_memory_pool().release_unused()\r\n\r\n        print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {memory_mapped_stream.size()\/2**20:4.4}MB {mem_use\/2**20:4.4}MB {mem_peak\/2**20:4.4}MB\")\r\n\r\n```\r\n\r\ngives:\r\n\r\n```\r\n 0 5.4258MB 0.0110 0.0201 195.3MB 0.0MB 0.0MB\r\n 10 25.3672MB 0.0112 0.0202 195.3MB 0.0MB 0.0MB\r\n 20 45.9336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 30 62.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 40 83.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 50 103.6836MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 60 124.3086MB 0.0112 0.0203 195.3MB 0.0MB 
0.0MB\r\n 70 140.8086MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 80 161.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 90 182.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n```\r\n\r\nthe 3rd and 4th columns are `tracemalloc`'s report.\r\n\r\nthe 5th column is the size of the mmaped stream - fixed.\r\n\r\nthe last 2 are PA's malloc reports - you can see it's totally fixed and 0.\r\n\r\nSo what gives? PA's memory allocator says nothing was allocated and we can see python doesn't allocate any memory either.\r\n\r\nAs someone suggested in one of the PA issues, **IPC\/GRPC could be the issue.** Any suggestions on how to debug this one?\r\n\r\nThe main issue is that one can't step through with a python debugger as `arr` is an opaque cpp object bound to python.\r\n\r\nPlease see the next comment for a possible answer.\r\n\r\n# ref-count\r\n\r\nI also traced reference counts and they are all fixed, using either `sys.getrefcount(x)` or `len(gc.get_referrers(x))`\r\n\r\nso it's not the python object\r\n\r\n# Important related discussions\r\n\r\nhttps:\/\/issues.apache.org\/jira\/browse\/ARROW-11007 - looks very similar to our issue\r\nin particular this part of the report:\r\nhttps:\/\/issues.apache.org\/jira\/browse\/ARROW-11007?focusedCommentId=17279642&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17279642\r\n","# There is no leak, just badly communicated linux RSS memory usage stats\r\n\r\nNext, let's revisit @rwightman's suggestion that there is actually no leak.\r\n\r\nAfter all - we are using mmap, which **will try to map** the file to RAM as much as it can and then page out if there is no memory. i.e. MMAP is only fast if you have a lot of CPU RAM.\r\n\r\nSo let's do it:\r\n\r\n# Memory mapping OOM test\r\n\r\nWe first quickly start a cgroups-controlled shell which will instantly kill any program that consumes more than 1GB of memory:\r\n\r\n```\r\n$ systemd-run --user --scope -p MemoryHigh=1G -p MemoryMax=1G -p MemorySwapMax=1G --setenv=\"MEMLIMIT=1GB\" bash\r\n```\r\n\r\nLet's check that it indeed does so. Let's change @lhoestq's script to allocate a 10GB arrow file:\r\n\r\n```\r\n$ python -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 5000)'\r\nKilled\r\n```\r\noops, that didn't work, as we tried to allocate 10GB when only 1GB is allowed. This is what we want!\r\n\r\nLet's do a sanity check - can we allocate 0.1GB?\r\n```\r\npython -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 50)'\r\n```\r\n\r\nYes. So the limited shell does the right thing. 
It lets us allocate `< 1GB` of RSS RAM.\r\n\r\nNext let's go back to @lhoestq's script but with a 10GB arrow file.\r\n\r\nwe change his repro script https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1242034985 to a 50x larger file\r\n```\r\n    arr = pa.array([b\"a\" * (2000 * 1024)] * 5000)  # ~10000MB\r\n```\r\nwe first have to run it in a normal unlimited shell so that we don't get killed (as the script allocates 10GB)\r\n\r\nlet's run the script now in the 1GB-limited shell while running a monitor:\r\n\r\n```\r\n$ htop -F python -s M_RESIDENT -u `whoami`\r\n```\r\n\r\nso we have 2 sources of RSS info just in case.\r\n\r\n```\r\n$ python pyar.py\r\n 0 4.3516MB 0.0103 0.0194 9.766e+03MB 0.0MB 0.0MB\r\n 10 24.3008MB 0.0104 0.0195 9.766e+03MB 0.0MB 0.0MB\r\n[...]\r\n4980 9730.3672MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\n4990 9750.9922MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\nPyArrow mem pool info: jemalloc backend, 0 allocated, 0 max allocated,\r\nPyArrow total allocated bytes: 0\r\n```\r\n\r\nBut wait, it reported 10GB RSS both in `htop` and in our log!\r\n\r\nSo that means it never actually allocated 10GB, otherwise it'd have been killed.\r\n\r\n**Which tells us that there is no leak whatsoever** and this is just a really difficult situation where MMAPPED memory is reported as part of RSS, which it probably shouldn't be. As a result we have no way to measure real memory usage.\r\n\r\nI also attached the script with all the different things I have tried in it, so it should be easy to turn them on\/off if you want to reproduce any of my findings.\r\n\r\n[pyar.txt](https:\/\/github.com\/huggingface\/datasets\/files\/9539430\/pyar.txt)\r\n\r\njust rename it to `pyar.py` as gh doesn't allow attaching scripts...\r\n\r\n(I have to remember to exit that special mem-limited shell or else I won't be able to do anything serious there.)\r\n\r\n","The original leak in the multi-modal code is very likely something else. But of course now it'd be very difficult to trace it using mmap.\r\n\r\nI think to debug we have to set `keep_in_memory=True` in `load_from_disk` to load the small dataset in RAM, so there will be no misleading mmap reporting component, and then continue searching for another source of the leak.","To add to what @stas00 found, I'm gonna leave some links to where I believe the confusion came from in pyarrow's APIs, for future reference:\r\n* In the section where they talk about [efficiently writing and reading arrow data](https:\/\/arrow.apache.org\/docs\/dev\/python\/ipc.html?#efficiently-writing-and-reading-arrow-data), they give an example of how\r\n\r\n> Arrow can directly reference the data mapped from disk and avoid having to allocate its own memory. \r\n\r\nAnd where their example shows 0 RSS memory allocation, the way we used to measure RSS shows 39.6719MB allocated. 
Here's the script to reproduce:\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\nimport sys\r\n\r\n# gc.set_debug(gc.DEBUG_LEAK)\r\n# gc.set_threshold(0,0,0)\r\n\r\n# pa.set_memory_pool(pa.mimalloc_memory_pool())\r\n# pa.set_memory_pool(pa.system_memory_pool())\r\n\r\nimport tracemalloc\r\n\r\n# pa.jemalloc_set_decay_ms(0)\r\n# pa.log_memory_allocations(enable=True)\r\n\r\nBATCH_SIZE = 10000\r\nNUM_BATCHES = 1000\r\nschema = pa.schema([pa.field('nums', pa.int32())])\r\nwith pa.OSFile('bigfile.arrow', 'wb') as sink:\r\n    with pa.ipc.new_file(sink, schema) as writer:\r\n        for row in range(NUM_BATCHES):\r\n            batch = pa.record_batch([pa.array(range(BATCH_SIZE), type=pa.int32())], schema)\r\n            writer.write(batch)\r\n\r\nstart_use = pa.total_allocated_bytes()\r\npool = pa.default_memory_pool()\r\nstart_peak_use = pool.max_memory()\r\ntracemalloc.start()\r\nfirst_size, first_peak = tracemalloc.get_traced_memory()\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\n\r\n\r\n# with pa.OSFile('bigfile.arrow', 'rb') as source:\r\n#     loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\nwith pa.memory_map('bigfile.arrow', 'rb') as source:\r\n    loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\n\r\nprint(\"LEN:\", len(loaded_array))\r\nprint(\"RSS: {}MB\".format(pa.total_allocated_bytes() >> 20))\r\n\r\ngc.collect()\r\ntime.sleep(0.1)\r\nmem_after = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\nmem_use = pa.total_allocated_bytes() - start_use\r\nmem_peak = pool.max_memory() - start_peak_use\r\nsecond_size, second_peak = tracemalloc.get_traced_memory()\r\nmem_diff = (second_size - first_size) \/ 2**20\r\nmem_peak_diff = (second_peak - first_peak) \/ 2**20\r\n\r\nidx = 0\r\nprint(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {mem_use\/2**20:4.4}MB {mem_peak\/2**20:4.4}MB\")\r\n```\r\ngives:\r\n```\r\n\r\nLEN: 10000000\r\nRSS: 0MB\r\n 0 39.6719MB 0.0132 0.0529 0.0MB 0.0MB\r\n```\r\nWhich again just proves that we incorrectly measure RSS in the case of MMAPPED memory\r\n\r\n\r\n* [The recommended way to do memory profiling from Arrow's docs](https:\/\/arrow.apache.org\/docs\/dev\/cpp\/memory.html#memory-profiling)\r\n","@lhoestq, I have been working on a detailed article that shows that MMAP doesn't leak; it's mostly ready and I will share it when it's done.\r\n\r\nThe issue is that we still need to be able to debug memory leaks by turning MMAP off.\r\n\r\nBut, once I tried to show the user that using `load_dataset(... keep_in_memory=True)` is the way to debug an actual memory leak - guess what I discovered? 
A potential actual leak.\r\n\r\nHere is the repro:\r\n\r\n```\r\n$ cat ds-mmap.py\r\nfrom datasets import load_dataset\r\nimport gc\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss \/ 2**20\r\n\r\ndataset = load_dataset(\"wmt19\", 'cs-en', keep_in_memory=True, streaming=False)['train']\r\n\r\nprint(f\"{'idx':>6} {'RSS':>10} {'\u0394 RSS':>15}\")\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:6d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB \")\r\n```\r\n\r\n```\r\n$ python ds-mmap.py\r\nReusing dataset wmt19 (\/home\/stas\/.cache\/huggingface\/datasets\/wmt19\/cs-en\/1.0.0\/c3db1bf4240362ed1ef4673b354f468d70aac66d4e67d45f536d493a0840f0d3)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 5.66it\/s]\r\n idx RSS \u0394 RSS\r\n 0 1398.4609MB 3.5195MB\r\n 20000 1398.5742MB 0.1133MB\r\n 40000 1398.6016MB 0.0273MB\r\n 60000 1398.6016MB 0.0000MB\r\n 80000 1398.6016MB 0.0000MB\r\n100000 1398.6328MB 0.0312MB\r\n120000 1398.6953MB 0.0625MB\r\n140000 1398.6953MB 0.0000MB\r\n160000 1398.7500MB 0.0547MB\r\n180000 1398.7500MB 0.0000MB\r\n","As I suggested on slack, perhaps it was due to dataset record length variation, so with your help I wrote another repro with synthetic records which are all identical - which should remove my hypothesis from the equation - and we should expect 0 incremental growth as we iterate over the dataset. But alas this is not the case. There is a tiny but definite leak-like behavior.\r\n\r\nHere is the new repro:\r\n\r\n```\r\n$ cat ds-synthetic-no-mmap.py\r\nfrom datasets import load_from_disk, Dataset\r\nimport gc\r\nimport sys\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss \/ 2**20\r\n\r\nDS_PATH = \"synthetic-ds\"\r\nif not os.path.exists(DS_PATH):\r\n records = 1_000_000\r\n print(\"Creating a synthetic dataset\")\r\n row = dict(foo=[dict(a='a'*500, b='b'*1000)])\r\n ds = Dataset.from_dict({k: [v] * records for k, v in row.items()})\r\n ds.save_to_disk(DS_PATH)\r\n print(\"Done. 
Please restart the program\")\r\n sys.exit()\r\n\r\ndataset = load_from_disk(DS_PATH, keep_in_memory=True)\r\nprint(f\"Dataset len={len(dataset)}\")\r\n\r\nprint(f\"{'idx':>8} {'RSS':>10} {'\u0394 RSS':>15}\")\r\nmem_start = 0\r\nstep = 25_000\r\nwarmup_iterations = 4\r\nfor idx, i in enumerate(range(0, len(dataset), step)):\r\n if idx == warmup_iterations: # skip the first few iterations while things get set up\r\n mem_start = mem_read()\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:8d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB\")\r\nmem_end = mem_read()\r\n\r\nprint(f\"Total diff: {mem_end - mem_start:12.4f}MB (after {warmup_iterations} warmup iterations)\")\r\n```\r\n\r\nand the run:\r\n\r\n```\r\n$ python ds-synthetic-no-mmap.py\r\nDataset len=1000000\r\n idx RSS \u0394 RSS\r\n 0 1601.9258MB 47.9688MB\r\n 25000 1641.6289MB 39.7031MB\r\n 50000 1641.8594MB 0.2305MB\r\n 75000 1642.1289MB 0.2695MB\r\n 100000 1642.1289MB 0.0000MB\r\n 125000 1642.3789MB 0.2500MB\r\n 150000 1642.3789MB 0.0000MB\r\n 175000 1642.6289MB 0.2500MB\r\n 200000 1642.6289MB 0.0000MB\r\n 225000 1642.8789MB 0.2500MB\r\n 250000 1642.8828MB 0.0039MB\r\n 275000 1643.1328MB 0.2500MB\r\n 300000 1643.1328MB 0.0000MB\r\n 325000 1643.3828MB 0.2500MB\r\n 350000 1643.3828MB 0.0000MB\r\n 375000 1643.6328MB 0.2500MB\r\n 400000 1643.6328MB 0.0000MB\r\n 425000 1643.8828MB 0.2500MB\r\n 450000 1643.8828MB 0.0000MB\r\n 475000 1644.1328MB 0.2500MB\r\n 500000 1644.1328MB 0.0000MB\r\n 525000 1644.3828MB 0.2500MB\r\n 550000 1644.3828MB 0.0000MB\r\n 575000 1644.6328MB 0.2500MB\r\n 600000 1644.6328MB 0.0000MB\r\n 625000 1644.8828MB 0.2500MB\r\n 650000 1644.8828MB 0.0000MB\r\n 675000 1645.1328MB 0.2500MB\r\n 700000 1645.1328MB 0.0000MB\r\n 725000 1645.3828MB 0.2500MB\r\n 750000 1645.3828MB 0.0000MB\r\n 775000 1645.6328MB 0.2500MB\r\n 800000 1645.6328MB 0.0000MB\r\n 825000 1645.8828MB 0.2500MB\r\n 850000 1645.8828MB 0.0000MB\r\n 875000 1646.1328MB 0.2500MB\r\n 900000 1646.1328MB 0.0000MB\r\n 925000 1646.3828MB 0.2500MB\r\n 950000 1646.3828MB 0.0000MB\r\n 975000 1646.6328MB 0.2500MB\r\nTotal diff: 4.5039MB (after 4 warmup iterations)\r\n```\r\nso I'm still not sure why we get this.\r\n\r\nAs you can see I started skipping the first few iterations where memory isn't stable yet. As the actual diff is much larger if we count all iterations.\r\n\r\nWhat do you think?","@stas00 my 2 cents from having looked at a LOT of memory leaks over the years, esp in Python, .3% memory increase over that many iterations of something is difficult to say with certainty it is a leak. \r\n\r\nAlso, just looking at RSS makes it hard to analyze leaks. RSS can stay near constant while you are leaking. 
RSS is paged in mem, if you have a big leak your RSS might not increase much (leaked mem tends not to get used again so often paged out) while your virtual page allocation could be going through the roof...","yes, that's true, but unless the leak is big, I'm yet to find another measurement tool.\r\n\r\nTo prove your point here is a very simple IO in a loop program that also reads the same line all over again:\r\n\r\n```\r\n$ cat mmap-no-leak-debug.py\r\nimport gc\r\nimport mmap\r\nimport os\r\nimport psutil\r\nimport sys\r\n\r\nproc = psutil.Process(os.getpid())\r\n\r\nPATH = \".\/tmp.txt\"\r\n\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss \/ 2**20\r\n\r\n# create a large data file with a few long lines\r\nif not os.path.exists(PATH):\r\n with open(PATH, \"w\") as fh:\r\n s = 'a'* 2**27 + \"\\n\" # 128MB\r\n # write ~2GB file\r\n for i in range(16):\r\n fh.write(s)\r\n\r\nprint(f\"{'idx':>4} {'RSS':>10} {'\u0394 RSS':>12} {'\u0394 accumulated':>10}\")\r\n\r\ntotal_read = 0\r\ncontent = ''\r\nmem_after = mem_before_acc = mem_after_acc = mem_before = proc.memory_info().rss \/ 2**20\r\nprint(f\"{0:4d} {mem_after:10.2f}MB {mem_after - 0:10.2f}MB {0:10.2f}MB\")\r\n\r\nmmap_mode = True if \"--mmap\" in sys.argv else False\r\n\r\nwith open(PATH, \"r\") as fh:\r\n\r\n if mmap_mode:\r\n mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)\r\n\r\n idx = 0\r\n while True:\r\n idx += 1\r\n mem_before = mem_read()\r\n line = mm.readline() if mmap_mode else fh.readline()\r\n if not line:\r\n break\r\n\r\n #total_read += len(line)\r\n\r\n if \"--accumulate\" in sys.argv:\r\n mem_before_acc = mem_read()\r\n content += str(line)\r\n mem_after_acc = mem_read()\r\n\r\n mem_after = mem_read()\r\n\r\n print(f\"{idx:4d} {mem_after:10.2f}MB {mem_after - mem_before:10.2f}MB {mem_after_acc - mem_before_acc:10.2f}MB\")\r\n```\r\n\r\nit has some other instrumentations to do mmap and accumulate data, but let's ignore that for now.\r\n\r\nHere it is running in a simple non-mmap IO:\r\n\r\n```\r\n$ python mmap-no-leak-debug.py\r\n idx RSS \u0394 RSS \u0394 accumulated\r\n 0 12.43MB 12.43MB 0.00MB\r\n 1 269.72MB 257.29MB 0.00MB\r\n 2 269.73MB 0.02MB 0.00MB\r\n 3 269.73MB 0.00MB 0.00MB\r\n 4 269.74MB 0.01MB 0.00MB\r\n 5 269.74MB 0.00MB 0.00MB\r\n 6 269.75MB 0.01MB 0.00MB\r\n 7 269.75MB 0.00MB 0.00MB\r\n 8 269.76MB 0.01MB 0.00MB\r\n 9 269.76MB 0.00MB 0.00MB\r\n 10 269.77MB 0.01MB 0.00MB\r\n 11 269.77MB 0.00MB 0.00MB\r\n 12 269.77MB 0.00MB 0.00MB\r\n 13 269.77MB 0.00MB 0.00MB\r\n 14 269.77MB 0.00MB 0.00MB\r\n 15 269.77MB 0.00MB 0.00MB\r\n 16 146.02MB -123.75MB 0.00MB\r\n```\r\n\r\nas you can see even this super-simplistic program that just performs `readline()` slightly increases in RSS over iterations.\r\n\r\nIf you have a better tool for measurement other than RSS, I'm all ears.","@stas00 if you aren't using memory maps, you should be able to clearly see the increase in the virtual mem for the process as well. Even then, it could still be challenging to determine if it's leak vs fragmentation due to problematic allocation patterns (not uncommon with Python). Using a better mem allocator like tcmalloc via LD_PRELOAD hooks could reduce impact of fragmentation across both Python and c libs. Not sure that plays nice with any allocator that arrow might use itself though. 
"],"created_at":1661330574000,"updated_at":1663302928000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nWhen the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant. \r\n\r\n## Steps to reproduce the bug\r\nRun and observe the output of this snippet which logs RSS memory.\r\n```python\r\nimport psutil\r\nimport os\r\nfrom transformers import BertTokenizer\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nBATCH_SIZE = 32\r\nNUM_TRIES = 10\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndef transform(x):\r\n x.update(tokenizer(x[\"text\"], return_tensors=\"pt\", max_length=64, padding=\"max_length\", truncation=True))\r\n x.pop(\"text\")\r\n x.pop(\"label\")\r\n return x\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\ndataset.set_transform(transform)\r\ntrain_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\ncount = 0\r\nwhile count < NUM_TRIES:\r\n for idx, batch in enumerate(train_loader):\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss \/ (1024 * 1024)\r\n print(count, idx, mem_after - mem_before)\r\n count += 1\r\n```\r\n\r\n## Expected results\r\nMemory should not increase after initial setup and loading of the dataset\r\n\r\n## Actual results\r\nMemory continuously increases as can be seen in the log. \r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":2},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4883\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4882","id":1348913665,"node_id":"PR_kwDODunzps49sRtv","number":4882,"title":"Fix language tags resource 
file","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4882). All of your documentation changes will be reflected on that endpoint."],"created_at":1661321161000,"updated_at":1661349513000,"closed_at":1661349510000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes\/updates\/adds ALL language tags from IANA (as of 2022-08-08).\r\n\r\nThis PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639 1 or 2 codes; no script\/region\/variant suffixes). See:\r\n- #4753","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4882\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4882","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4882","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4882.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4882.patch","merged_at":1661349510000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4881","id":1348495777,"node_id":"I_kwDODunzps5QYGmh","number":4881,"title":"Language names and language codes: connecting to a big database (rather than slow enrichment of custom 
list)","user":{"login":"alexis-michaud","id":6072524,"node_id":"MDQ6VXNlcjYwNzI1MjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6072524?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexis-michaud","html_url":"https:\/\/github.com\/alexis-michaud","followers_url":"https:\/\/api.github.com\/users\/alexis-michaud\/followers","following_url":"https:\/\/api.github.com\/users\/alexis-michaud\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexis-michaud\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexis-michaud\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexis-michaud\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexis-michaud\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexis-michaud\/repos","events_url":"https:\/\/api.github.com\/users\/alexis-michaud\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexis-michaud\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface\/moon-landing ","on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https:\/\/huggingface.co\/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/utils\/resources\/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!","PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too","> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https:\/\/github.com\/glottolog\/pyglottolog) fit the bill \/ do the job? (API documentation [here](https:\/\/pyglottolog.readthedocs.io\/en\/latest\/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. 
\r\nI have opened an Issue in [their repo](https:\/\/github.com\/glottolog\/glottolog-cldf\/issues\/13). \r\n\r\nVery interested to see where it goes from there.","I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https:\/\/github.com\/huggingface\/datasets\/files\/9417456\/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n","Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https:\/\/www.iana.org\/assignments\/language-subtag-registry\/language-subtag-registry) come in and are so important. I link to the official source. If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https:\/\/iso639-3.sil.org\/code_tables\/639\/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https:\/\/www.loc.gov\/standards\/iso639-2\/php\/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. 
Please use the script tag that BCP-47 calls for from [ISO 15924](https:\/\/unicode.org\/iso15924\/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https:\/\/cldr.unicode.org\/translation\/displaynames\/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. \u2014 English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http:\/\/www.language-archives.org\/). \u2014 I can help you with that. OLAC is a search interface for language resources.\r\n","Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https:\/\/github.com\/huggingface\/hub-docs\/issues\/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https:\/\/huggingface.co\/languages) would also be relevant: https:\/\/github.com\/huggingface\/hub-docs\/issues\/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI\/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties\/regionalisms within a language (https:\/\/huggingface.co\/datasets\/AmazonScience\/massive\/discussions\/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop \ud83d\ude80.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https:\/\/huggingface.co\/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https:\/\/arxiv.org\/pdf\/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. 
It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https:\/\/www.endangeredlanguages.com\/?hl=en) and [Linguasphere](http:\/\/www.linguasphere.info\/jr\/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https:\/\/huggingface.co\/datasets\/Datatang\/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https:\/\/en.wikipedia.org\/wiki\/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally for the CNRS team, when can we expect to see all the datasets of [Pangloss](https:\/\/pangloss.cnrs.fr\/) on HF? \ud83d\udc40 And I don't know if you have a way to help to add also the datasets of [CoCoON](https:\/\/cocoon.huma-num.fr\/exist\/crdo\/).","> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI\/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties\/regionalisms within a language (https:\/\/huggingface.co\/datasets\/AmazonScience\/massive\/discussions\/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fall back system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for arabic. In some contexts arabic is considered a single language, however, Egyptian Arabic is quite different from Moroccan Arabic, which are both considered separate languages. These ambiguous codes are valid ISO 639-3 codes but they have a special status. They are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. 
However, when considering AI and MT applications with language data, the unforeseen potential applications and the potential for bias using macro codes should be avoided for new applications of language tags to resources. For historical cases where it is not clear what resources were used to create the AI tools or datasets then I understand the use of ambiguous tag uses. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoid the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)","> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign Languages present an interesting case. As I understand the situation. The identification of sign languages has been identified as a component of their endangerment. Some sign languages do exist in ISO 639-3. For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https:\/\/doi.org\/10.3390\/languages7010049\r\n* https:\/\/www.academia.edu\/35870983\/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language `und` and then apply a custom suffix indicator (as explained in BCP-47) `-x-` and a custom code, such as the ones used in https:\/\/doi.org\/10.3390\/languages7010049","> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https:\/\/en.wikipedia.org\/wiki\/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code and its name and the status of the code. Many technical metadata standards for file and computer interoperability reference it, many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases are different from indexing languages in several ways, one way is that diseases are the impact of a pathogen not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease \u2014 with many symptoms.\r\n\r\n",">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https:\/\/www.endangeredlanguages.com\/?hl=en) and [Linguasphere](http:\/\/www.linguasphere.info\/jr\/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on wikipedia, I don't know of any information system which uses these codes. I do know that glottolog did import ELP data at one time and its database does contain ELP data I'm not sure if Glottolog regularly ingests new versions of ELP data. 
I suspect that the use of Linguasphere data may be relevant to users of wikidata as a linked data attribute but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.","> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n>For example (I'm taking the case of Hebrew but this has happened for other languages) I [tag](https:\/\/huggingface.co\/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https:\/\/arxiv.org\/pdf\/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new prefered tag if there is one, is indicated. ISO 639-3 also indicates a code's status but their list is relevant only codes within their domain (ISO 639-3).","> * On the Hub, there is the following dataset where French people speak in English: https:\/\/huggingface.co\/datasets\/Datatang\/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as english as spoken in France. `fr`in this position refers to the geo-political entity not a second language. I see no reason that other linguists should have a different option after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no way explicit way to do this. One could use the sub code `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those english speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language. So to conceptualize a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For example three sub-tags exist.\r\n\r\nThere are three registered sub-tags out of a BCP-47 allowed 35. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067 ](https:\/\/www.rfc-editor.org\/rfc\/rfc6067)and [RFC6497](https:\/\/www.rfc-editor.org\/rfc\/rfc6497) . For more information see the [Unicode CLDR documentation](https:\/\/cldr.unicode.org\/index\/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http:\/\/www.google.com\/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http:\/\/www.google.com\/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. 
It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension \u2018u\u2019 for Locale Extensions, as described in [rfc6067](https:\/\/www.google.com\/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https:\/\/www.google.com\/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http:\/\/www.google.com\/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha).","Hi @lbourdois ! Many thanks for the detailed information.\r\n\r\n> Discussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: [huggingface\/hub-docs#193](https:\/\/github.com\/huggingface\/hub-docs\/issues\/193) \r\nFascinating topic! To me, the following suggestion has a lot of appeal:\r\n\"if consider that it was necessary to create an ISO 639-3 because ISO 639-1 was deficient, it would be to do the reverse and thus convert the tags from ISO 639-1 to ISO 639-2 or 3 (https:\/\/en.wikipedia.org\/wiki\/List_of_ISO_639-1_codes or https:\/\/iso639-3.sil.org\/code_tables\/639\/data).\"\r\n\r\nYes, ISO 639-1 is unsuitable because it has so few codes: less than 200. To address linguistic diversity in 'unrestricted mode', a list of all languages is wanted. \r\n\r\nThe idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47). \r\n\r\nRetaining the authors' original tags and language names would be best. \r\n* For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'. \r\n* For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those. 
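One hedged sketch of what "automatically adding the ISO 639-3 three-letter code" could look like, using the Python `langcodes` library (its `standardize_tag` and `to_alpha3` functions behave as described in its README; the helper name `to_three_letter` is just for illustration):

```python
# Sketch: keep the author's preferred tag, derive a three-letter code from it.
import langcodes

def to_three_letter(author_tag: str) -> str:
    normalized = langcodes.standardize_tag(author_tag)     # e.g. "iw" -> "he"
    return langcodes.Language.get(normalized).to_alpha3()  # e.g. "he" -> "heb"

for tag in ["en", "iw", "fr-CA", "jya"]:
    print(tag, "->", to_three_letter(tag))
# expected: en -> eng, iw -> heb, fr-CA -> fra, jya -> jya
```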
\r\n\r\nThus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost. \r\n\r\nAre industry practices so conservative that many people are happy with two-letter codes, and consider ISO 639-3 three-letter codes an unnecessary complication? That would be a pity, since there are so many advantages to using longer lists. (Somewhat like the transition to Unicode: sooo much better!) But maybe that conservative attitude _is_ widespread, and it would then need to be taken into account. In which case, one could consider offering two-letter codes as a search option. Internally, the search engine would look up the corresponding 3-letter codes, and produce the search results accordingly. \r\n\r\nNow to the other questions:\r\n\r\n> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n> For example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https:\/\/huggingface.co\/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https:\/\/arxiv.org\/pdf\/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\nI guess that the above suggestion takes care of this case. The original tag (in this example, \"iw\") is retained (facilitating cross-reference with the published paper, and respecting the real: the way the dataset was originally tagged). This old tag goes into the `BCP-47` field of the dataset, which can handle quirks & oddities like this one. And a new tag is added in the `ISO 639-3` field: the 3-letter code \"heb\". \r\n\r\n> * When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https:\/\/www.endangeredlanguages.com\/?hl=en) and [Linguasphere](http:\/\/www.linguasphere.info\/jr\/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nI'm afraid I never heard about Linguasphere. The [online register for Linguasphere (PDF)](http:\/\/www.linguasphere.info\/jr\/pdf\/index\/LS_index_n-n.pdf) seems to be from 1999-2000. It seems that the level of interoperability is not very high right now. (By contrast, Glottolog has [pyglottolog](https:\/\/github.com\/glottolog\/pyglottolog) and in my experience contacts flow well.) \r\n\r\nThe Endangered Languages Project is something Google started but initially did not 'push' very strongly, it seems. Just airing an opinion on the public Internet, it seems that the project is now solidly rooted at University of Hawai\u02bbi at M\u0101noa. It seems that they do not generate codes of their own. They refer to ISO 639-3 (Ethnologue) as a code authority when applicable, and otherwise provide comments in so many words, such as that language L currently lacks an Ethnologue code of its own (example [here](https:\/\/www.endangeredlanguages.com\/lang\/10624)). 
\r\n\r\n> * On the Hub, there is the following dataset where French people speak in English: https:\/\/huggingface.co\/datasets\/Datatang\/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n> Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n> Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\nYes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields. \r\n\r\n> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https:\/\/en.wikipedia.org\/wiki\/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nAs I understand, Ethnologue and Glottolog both try to do that, each in its own way. The simile with diseases seems interesting, to some extent: in both cases it's about human classification of phenomena that have complexity (though some diseases are simpler than others, whereas all languages have much complexity, in different ways).\r\n\r\n> * Finally, when can we expect to see all the datasets of [Pangloss](https:\/\/pangloss.cnrs.fr\/) on HF? eyes And I don't know if you have a way to help to add also the datasets of [CoCoON](https:\/\/cocoon.huma-num.fr\/exist\/crdo\/).\r\n\r\nThree concerns: (i) Technical specifications: we have not yet received feedback on the Japhug and Na datasets in HF. There may be technical considerations that we have not yet thought of and that would need to be taken into account before 'bulk upload'. (ii) Would there be a way to automate the process? The way @BenjaminGalliot did it for Japhug and Na, there was a manual component involved, and doing it by hand for all 200 datasets would not be an ideal workflow, given that the metadata are all clearly arranged. (iii) Some datasets are currently under a 'No derivatives' CreativeCommons license. 
We could go back to the depositors and argue that the 'No derivatives' mention were best omitted (see [here a similar argument about publications](https:\/\/creativecommons.org\/2020\/04\/21\/academic-publications-under-no-derivatives-licenses-is-misguided\/)): again, we'd want to be sure about the way forward before we set the process into motion.\r\n\r\nOur hope would be that some colleagues try out the [OutilsPangloss](https:\/\/gitlab.com\/lacito\/outilspangloss) download tool, assemble datasets from Pangloss\/Cocoon as they wish, then deposit them to HF.","> The idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47).\r\n> \r\n> Retaining the authors' original tags and language names would be best.\r\n> \r\n> * For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'.\r\n> * For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those.\r\n> \r\n> Thus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost.\r\n\r\n@alexis-michaud raises an excellent point. Language Resource users have varying search habits (or approaches). This includes cases where two or more language names refer to a single language. A search utility\/interface needs to be flexible and able to present results from various kinds of input in the search process. This could be like how the terms French\/Fran\u00e7ais\/Franzosisch (en\/fr\/de) are names for the same language or it could be a variety of the following: autoglottonyms (how the speakers of the language refer to their language), or exoglottonyms (how others refer to the language). Additionally, in web based searches I have also needed to implement diacritic sensitive and insensitive logic so that users can type with or without diacritics and not have results unnecessarily excluded. \r\n\r\nDepending on how detailed of a search problem HF seeks to solve. It may be better to off load complex search to search engines like OLAC which aggregate a lot of language resources. \u2014 as I mentioned above I can assist with the informatics on creating an OLAC feed.\r\n\r\nAbstracting search logic from actual metadata may prove a useful way to lower the technical debt overhead. Technical tools and library standards use ISO and BCP-47 Standards. So, from a bibliographic metadata perspective this seems to be the way forward with the widest set of use cases. ","To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https:\/\/huggingface.co\/spaces\/lbourdois\/Language-tags-demo. \r\nThe code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. 
The advantage is also that you can directly test ideas by entering things in a search bar and seeing what comes up. \r\n\r\nThis application is divided into 3 points:\r\n- The first is to enter a language in natural language to get its code, which can then be filled in the YAML file of the README.MD files of the HF datasets or models in order to be referenced and found by everyone.\r\nIn practice, enter the language (e.g. `English`) you are interested in to get its associated tag (e.g. `en`). You can enter several languages by separating them with a comma (e.g. `French,English,German`). Priority is given to the ISO 639-3 code if it exists, otherwise the Glottocode or the BCP-47 code (for varieties in particular). If none of these codes are available, it links to a page where the user can contact HF to request to add this tag. \r\nIf you enter a BCP-47 code, it must be entered as follows: `Language(Territory)`, for example `French(Canada)`. Warning: if you enter a BCP-47 language, it must be entered first, otherwise the wrong code will be displayed. I still have to fix this problem, but I am moving to a new place and don't have reliable internet access, so I preferred to push this first version so that you can already test things now rather than wait days or weeks.\r\nThis point is intended to simulate the user's side of the equation: someone wondering which tag to fill in for their language.\r\n\r\n\r\n- The second is to enter a language code to obtain the name of the language in natural language.\r\nIn practice, enter the tag (ISO 639-1\/2\/3, Glottolog or BCP-47) you are interested in (e.g. `fra`) to get its associated language (e.g. French). You can enter several languages by separating them with a comma (e.g. `fra,eng,deu`). Warning: if you enter a BCP-47 code, it must be entered first, otherwise the wrong code will be displayed (it is actually the same bug as above).\r\nThis point is intended to simulate HF's side, which for a given tag must return the correct language.\r\n\r\n\r\n\r\nTo code these two points, I tested two approaches. \r\n\r\n1. The first one (internal DB in the app) consists in querying a database that HF would host locally. To create this database, I merged the ISO 639 database (https:\/\/iso639-3.sil.org\/sites\/iso639-3\/files\/downloads\/iso-639-3.tab) and the Glottolog database (https:\/\/glottolog.org\/meta\/downloads). The result of this merge is visible in the 3rd point of the application, which is an overview of the database.\r\nIn the image below, on line 1 of the database, we can see that the Glottolog database gives an ISO 639-3 code (column ISO639P3code) but the ISO 639 database does not (column 639-3). Do you have an explanation for this phenomenon?\r\n![image](https:\/\/user-images.githubusercontent.com\/58078086\/188433217-bf7cb606-7af4-40b5-861f-ed662468f6e4.png)\r\n\r\n\r\nFor BCP-47 codes of the type `fr-CA`, I have retrieved the ISO 3166-1 codes of the territories (https:\/\/www.iso.org\/iso-3166-country-codes.html).\r\nIn practice, if `fr-CA` is entered, the letters before the `-` are looked up as a language in the `Name` column where `639-1` == `fr` (or `639-3` for `fra` or `fre`) in the database shown in my image above. Then the letters after the `-` are looked up as a territory. The result is `French (Canada)`. I used https:\/\/cldr.unicode.org\/translation\/displaynames\/languagelocale-name-patterns for the display pattern.\r\n\r\n\r\n2. 
The second approach (with langcodes lib in the app) consists in using the Python `langcodes` library (https:\/\/github.com\/rspeer\/langcodes) which offers a lot of features in ready-made functions. It manages for example deprecated codes, the validity of an entered code, gives languages from code in the language of your choice (by default in English, but also autoglottonyms), etc. I invite you to read the README of the library. The only negative point is that it hasn't been updated for 10 months so basing your tag system on an external tool that isn't necessarily up to date can cause problems in the long run. But it is certainly an interesting source.\r\n\r\nFinally, I have added some information on the number of people speaking\/reading the language(s) searched (figures provided by langcodes which are based on those given by ISO). This is not relevant for our topic but it could be figures that could be added as information on the https:\/\/huggingface.co\/languages page. \r\n\r\n\r\n\r\nWhat could be done to improve the app if I have time:\r\n- Write the text for the app's homepage to describe what it does. This could serve as a basis for a documentation that I think will be necessary to add somewhere on the HF website to explain how the language tagging system works.\r\n- Deal with the bug mentioned above\r\n- Integrate ISO 3166-1 alpha 2 territories (https:\/\/www.iso.org\/obp\/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"d\u00e9partements\" for example).\r\n- Add autoglottonyms? (I only handle English language names for the moment)\r\n- For each language indicate to which family it belongs, in practice this could help to make data augmentation, but especially to classify the languages and find them more easily on the page https:\/\/huggingface.co\/languages.","Very impressive! Using the prompt 'Japhug' (a language name), the app finds the intended language:\r\n![image](https:\/\/user-images.githubusercontent.com\/6072524\/188441805-3af3a580-951e-4150-b5f9-64e1bde0992b.png)\r\n\r\nA first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: \r\n`sino1245\/burm1265\/naqi1236\/qian1263\/rgya1241\/core1262\/jiar1240` \r\nOne need not look further than the first higher-level grouping, [`jiar1240`](https:\/\/glottolog.org\/resource\/languoid\/id\/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n\r\nThus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus.\r\nIt might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.","> on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\nThat is because the language name 'Aewa' is not found in the Ethnologue (ISO 639-3) export that you are using. [This export in table form](https:\/\/iso639-3.sil.org\/sites\/iso639-3\/files\/downloads\/iso-639-3.tab) only has one reference name (`Ref_Name`). 
For the language at issue, it is not 'Aewa' but ['Awishira'](https:\/\/www.ethnologue.com\/language\/ash).\r\n\r\nBy contrast, the language on line 0 of the database is called 'Abinomn' by both Ethnologue and Glottolog, and accordingly, columns `ISO639P3code` and `639-3` both contain the ISO 639-3 code, `bsa`.\r\n \r\nThe full Ethnologue database records alternate names for each language, and I'd bet that 'Aewa' is recorded among the alternate names for the 'Awishira' language. I can't check because the full Ethnologue database is paywalled. \r\n![image](https:\/\/user-images.githubusercontent.com\/6072524\/188461409-e8c48036-df9b-4b56-9609-41cb9c3d3c3c.png)\r\n\r\n[Glottolog](https:\/\/glottolog.org\/resource\/languoid\/id\/abis1238) does provide the corresponding ISO 639-3 code for 'Aewa', `ash`, which is an exact match (it refers to the same variety as Glottolog `abis1238`).\r\nIn this specific case, Glottolog provides all the relevant information. I'd say that Glottolog can be trusted for all the codes they provide, including ISO 639-3 codes: they only include them when the match is good. \r\n\r\nSee the previous comment about the cases where there is no exact match between Glottolog and ISO 639-3 (suggested workaround: look at a higher-level grouping to get an ISO 639-3 code).","I will add these two points to my TODO list.\r\n- Since Glottolog can be trusted, I will add a condition to the code so that if there is no ISO 639-3 code in the \"official\" database (https:\/\/iso639-3.sil.org\/sites\/iso639-3\/files\/downloads\/iso-639-3.tab), it is looked up in the \"ISO639P3code\" column of Glottolog.\r\n- For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of `Japhug`, should it be just `jya`, or `jya-japh1234` or `jya-Japhug`?","> * Integrate ISO 3166-1 alpha 2 territories (https:\/\/www.iso.org\/obp\/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1, which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"d\u00e9partements\" for example).\r\n\r\nI'm concerned about this sort of exploration, not because I am against innovation; in fact this is an interesting thought exercise. However, exploring this thought further creates cognitive dissonance between BCP-47 authorized codes and other code sets which are not BCP-47 compliant. For that reason, I think adding additional codes is a waste of time both for HF devs and for future users, who would get a confusing idea about language tagging. ","Good job for the application!\r\n\r\n> On the Hub, there is the following dataset where French people speak in English: https:\/\/huggingface.co\/datasets\/Datatang\/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n> Yes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields.\r\n\r\nTo briefly complete what I said on this subject in a private discussion group, there is a lot of (meta)data associated with each element of a corpus (which language level, according to which criteria, knowing that even among native speakers there are differences, some of which may go beyond what seems obvious to us from a linguistic point of view, such as socio-professional category, life history, environment in the broad sense, etc.), which can be placed in ad-hoc columns, or more freely in a comment\/note column. And it is the role of the researcher (in this case a linguist, in all likelihood) to do analyses (statistics...) to determine the relevant data, including criteria that may justify separating different languages (in the broad sense), making separate corpora, etc. Putting this information in the language code is in my opinion doing the job in the opposite and wrong direction, as well as bringing other problems, like where to stop in the list of multidimensional criteria to be integrated, so in my opinion, here, the minimum is the best (the important thing is in my opinion to have well-documented data, globally, by sub-corpus or by line)...\r\n\r\n> If you are going to use Glottolog codes use them after an -x- tag in the BCP-47 format to maintain BCP-47 validity.\r\n\r\nYes, for the current corpora, I have written:\r\n\r\n```\r\nlanguage:\r\n- jya\r\n- nru\r\nlanguage_bcp47:\r\n- x-japh1234\r\n- x-yong1288\r\n```\r\n\r\n> * Add autoglottonyms? (I only handle English language names for the moment)\r\n\r\nAutoglossonyms are useful (I use them prior to other glossonyms), but I'm not sure there is an easy way to retrieve them. We can find some of them in the \"Alternative Names\" panel of Glottolog, but even if we have an API to retrieve them easily, their associated language code will often not be the one we are in (hence the need to do several cycles to find one, which might not be the right one...). Maybe this problem needs more investigation...\r\n\r\n> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? 
Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\nI strongly insist not to add **a** language name after the code, it would restart a spiral of problems, notably the choice of the language in question:\r\n* the autoglossonym: in my opinion the best choice, but you have to know it\u2026\r\n* the English name: iniquitous,\r\n* the name in the administratively\/politically dominant language of the target language if it is relevant (strictly localized without overlapping, for example): iniquitous and tendentious (and in a way a special case of the previous one)...\r\n* etc.\r\n","> To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https:\/\/huggingface.co\/spaces\/lbourdois\/Language-tags-demo.\r\n> The code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up.\r\n\r\nThis is really great. You're doing a fantastic job. I love watching the creative process evolve. It is exciting. Let me provide some links to some search interfaces for further inspiration. I always find it helpful to know how others have approached a problem when figuring out my approach. I will link to three examples Glottolog, r12a's language sub-tag chooser, and the FLEx project builder wizard. The first two are online, but the last one is in an application which must be downloaded and works only on windows or linux. I have placed some notes on each of the screenshots.\r\n\r\n* **[Glottolog](https:\/\/glottolog.org\/)** | [Search Query](https:\/\/glottolog.org\/glottolog?name=en&namequerytype=part&multilingual=on#2\/20.9\/150.0) \r\n\r\n![Glottolog1](https:\/\/user-images.githubusercontent.com\/40230\/188494425-84ee6ecf-6868-4684-a4ae-008973f3b367.png)\r\n![Glottolog2](https:\/\/user-images.githubusercontent.com\/40230\/188494426-fc1c225c-f99a-46b5-a1aa-950cf7912ce3.png)\r\n\r\n\r\n* **[r12a language sub-tag chooser](https:\/\/r12a.github.io\/app-subtags\/)** | [Code on github](https:\/\/github.com\/r12a\/app-subtags)\r\n\r\n![r12a1](https:\/\/user-images.githubusercontent.com\/40230\/188495349-8e53be68-8433-46ff-a0c7-c2f6e25458b6.png)\r\n\r\n\r\n* **FLEx Language Chooser** | [application page](https:\/\/software.sil.org\/fieldworks\/)\r\n![FLEx1](https:\/\/user-images.githubusercontent.com\/40230\/188499742-82c5601e-7e37-4863-bd63-8bff8c0694e3.png)\r\n\r\n","> In practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https:\/\/cldr.unicode.org\/translation\/displaynames\/languagelocale-name-patterns for the pattern that came up.\r\n\r\nWhat you are doing is looking at the algorithm for Locale generation rather than BCP-47's original documentation. I'm not sure there are difference, there might be. I know that locale IDs generally follow BCP-47 But I think there are some differences such as the use of `_` vs. `-`. ","> A first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? 
In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: `sino1245\/burm1265\/naqi1236\/qian1263\/rgya1241\/core1262\/jiar1240` One need not look further than the first higher-level grouping, [`jiar1240`](https:\/\/glottolog.org\/resource\/languoid\/id\/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n> \r\n> Thus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus. It might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.\r\n\r\nThis is logical, but the fine grained assertions are not the same. That is just because they are in a hierarchical structure today doesn't mean they will be tomorrow. In some cases the glottolog is clearly referring to sub-language variants which will never receive full language status, whereas in other cases glottolog is referencing to unequal entities one or more of which should be a language. Many of the codes in glottolog have no associated documentation indicating what sort of speech variety they are. ","@lbourdois \r\n> * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https:\/\/iso639-3.sil.org\/sites\/iso639-3\/files\/downloads\/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n\r\nI'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?","> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\n(answer edited in view of [Benjamin Galliot's comment](https:\/\/github.com\/huggingface\/datasets\/issues\/4881#issuecomment-1237420600) \r\nEasy part of the answer first: jya-Japhug is out, because, as @BenjaminGalliot pointed out above, mixing language names with language codes will make trouble. For Japhug, `jya-Japhug` looks rather good: the pair looks nice, the one (`jya`) packed together, the other (`Japhug`) good and complete while still pretty compact. But think about languages like 'Yongning Na' or 'Yucat\u00e1n Maya': a code with a space in the middle, like `nru-Yongning Na`, is really unsightly and unwieldy, not?\r\n\r\nSome [principles for language naming in English](http:\/\/hdl.handle.net\/10125\/24725) have been put forward, with some linguistic arguments, but always supposing that such standardization is desirable, actual standardization of language names in English may well never happen.\r\n\r\nAs for `jya-japh1234`: again, at first sight it seems cute, combining two fierce competitors (Ethnologue and Glottolog) into something that gets the best of both worlds. \r\nBut @HughP has a point: _adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging_ Strong wording, for an important comment: better stick with BCP 47. 
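(Whatever convention is chosen, deriving the closest ISO 639-3 code for a Glottocode looks automatable. A hedged sketch with pyglottolog, assuming a local glottolog clone and assuming `ancestors` is ordered from the family root down to the parent:)

```python
# Sketch: closest ISO 639-3 code for a Glottocode, walking up the hierarchy.
from pyglottolog import Glottolog

glottolog = Glottolog("path/to/glottolog")  # local clone of the glottolog repo

def closest_iso639_3(glottocode: str):
    languoid = glottolog.languoid(glottocode)
    if languoid.iso:                               # exact ISO 639-3 match exists
        return languoid.iso
    for ancestor in reversed(languoid.ancestors):  # nearest grouping first
        if ancestor.iso:
            return ancestor.iso
    return None

print(closest_iso639_3("japh1234"))  # expected: jya (via jiar1240)
```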
\r\n\r\nSo the solution pointed out by Benjamin, from Frances Gillis-Webber and Sabine Tittel, looks attractive: \r\njya-x-japh1234\r\n\r\nOn the other hand, if the idea for HF Datasets is simply to add the closest ISO 639-3 code for a Glottolog code, maybe it could be provided simply in three letters: providing the 'raw' ISO 639-3 code `jya`. Availability of 'straight' ISO 639-3 codes could save trouble for some users, and those who want more detail could look at the rest of the metadata and general information associated with datasets.","The problem seems to have already been raised here: https:\/\/drops.dagstuhl.de\/opus\/volltexte\/2019\/10368\/pdf\/OASIcs-LDK-2019-4.pdf\r\n\r\nAn example can be seen here :\r\n\r\n> 3.1.2 The use of privateuse sub-tag\r\nIn light of unambiguous language codes being available for the two Khoisan varieties, we\r\npropose to combine the ISO 639-3 code for the parent language N\u2016ng, i.e., \u2018ngh\u2019, with the\r\nprivateuse sub-tag \u2018x-\u2019 and the respective Glottocodes stated above.\r\nThe language tags for N|uu and \u2016\u2019Au can then be defined accordingly:\r\nN|uu: ngh-x-nuuu1242\r\n\u2016\u2019Au: ngh-x-auni1243\r\n\r\nBy the way, while searching for this, I came across this application: https:\/\/huggingface.co\/spaces\/cdleong\/langcode-search","> > * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https:\/\/iso639-3.sil.org\/sites\/iso639-3\/files\/downloads\/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n> \r\n> I'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?\r\n\r\nHi @HughP, I'm happy to clear what confusion may exist here :innocent: Here is the use case. \r\nGuillaume Jacques (@rgyalrong) put together a sizeable corpus of the Japhug language. It is up on HF Datasets ([here](https:\/\/huggingface.co\/datasets\/Lacito\/pangloss\/viewer\/japh1234)) as well as on Zenodo. \r\n\r\nZenodo is an all-purpose repository without adequate domain-specific metadata (\"[m\u00e9tadonn\u00e9es m\u00e9tier](https:\/\/www.cines.fr\/archivage\/des-expertises\/les-metadonnees\/metadonnees-metier\/)\"), and the deposits in there are not easy to locate. The Zenodo deposit is intended for a highly specific user case: someone reads about the dataset in a paper, goes to the address on Zenodo and grabs the dataset at one go. \r\n\r\nHF Datasets, on the other hand, allows users to look around among corpora. The Japhug corpus needs proper tagging so that HF Datasets users can find out about it. \r\nJaphug has an entry of its own in Glottolog, whereas it lacks an entry of its own in Ethnologue. Hence the practical usefulness of Glottolog. Ethnologue pools together, under the code `jya`, three different languages (Japhug, Tshobdun `tsho1240` and Zbu `zbua1234`). \r\n\r\nI hope that this helps.","> By the way, while searching for this, I came across this application: https:\/\/huggingface.co\/spaces\/cdleong\/langcode-search\r\n\r\nReally relevant Space, so tagging its author @cdleong, just in case!","@cdleong A one-stop shop for language codes: terrific!\r\nHow do you feel about the use of Glottocodes? 
When searching the language names 'Japhug' and 'Yongning Na' (real examples, related to a HF Datasets deposit & various research projects), the relevant Glottocodes are retrieved, and that is great (and not that easy, notably with the space in the middle of 'Yongning Na'). But this positive result is 'hidden' in the results page. Specifically: \r\n\r\n- for Japhug: when searching by language name ('Japhug'), the result in big print is 'Failure', even though there is an available Glottocode (at bottom).\r\n![image](https:\/\/user-images.githubusercontent.com\/6072524\/188604619-a5032f53-6d2c-4751-b83b-bf70a5bf3b22.png)\r\nWhen searching by Glottocode (japh1234), same outcome: 'Result: failure!' (even though this _is_ the right Glottocode\r\nWhen searching by x-japh1234 (Glottocode encapsulated in BCP 47 syntax), one gets the message \r\n\r\n> ''x-japh1234' parses meaningfully as a language tag according to IANA\"\r\n\r\nbut there is paradoxically no link provided to Glottolog: the 'Glottolog' part of the results page is empty\r\n![image](https:\/\/user-images.githubusercontent.com\/6072524\/188605698-91a39982-ae70-4c48-94ae-cceeb06c25f5.png)\r\n\r\n- Yongning Na: the correct code is identified (yong1288) but instead of foregrounding this exact match, the first result that comes up is a completely different language, called 'Yong'. \r\n\r\nTrying to formulate a conclusion (admittedly, this note is not based on intensive testing, it is just feedback on initial contact): from a user perspective, it seems that the tool could make more extensive use of Glottolog. `langcode-search` does a great job querying Glottolog, why not make more extensive use of that information? (including: to arrive at the nearest ISO 639-3 code)"],"created_at":1661285664000,"updated_at":1663140750000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**The problem:** \r\nLanguage diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.\r\n\r\nCurrently the list of language codes is [here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/utils\/resources\/languages.json), right? At about 1,500 entries, it is roughly at 1\/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)\r\n\r\nLooking forward to ever increasing coverage, how will the list of language names and language codes improve over time?\r\nEnrichment of the custom list by HFT contributors (like [here](https:\/\/github.com\/huggingface\/datasets\/pull\/4880)) has several issues: \r\n* progress is likely to be slow:\r\n![image](https:\/\/user-images.githubusercontent.com\/6072524\/186253353-62f42168-3d31-4105-be1c-5eb1f818d528.png)\r\n(input required from reviewers, etc.)\r\n* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.\r\n* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. 
Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.\r\n\r\n**A solution that seems desirable:**\r\nConnecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc. \r\nIt takes a lot of hard work to build such databases. Two important initiatives are [Ethnologue](https:\/\/www.ethnologue.com\/) (ISO standard) and [Glottolog](https:\/\/glottolog.org\/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes. \r\n\r\nBoth seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https:\/\/github.com\/glottolog\/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https:\/\/github.com\/lyy1994\/ethnologue); I did not try it out).\r\n\r\nIn case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as needed, to help this useful development happen.\r\nWith appreciation of HFT,","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4881\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4880","id":1348452776,"node_id":"PR_kwDODunzps49qyJr","number":4880,"title":"Added names of less-studied languages","user":{"login":"BenjaminGalliot","id":23100612,"node_id":"MDQ6VXNlcjIzMTAwNjEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23100612?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BenjaminGalliot","html_url":"https:\/\/github.com\/BenjaminGalliot","followers_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/followers","following_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/orgs","repos_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/repos","events_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BenjaminGalliot\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["OK, I removed Glottolog codes and only added ISO 639-3 ones. 
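The lookup order proposed earlier in this thread (the registrar's table first, then Glottolog's `ISO639P3code` column) could be prototyped as follows. This is a sketch assuming local copies of SIL's `iso-639-3.tab` and of a CSV export of Glottolog's languoid table; the column names are assumptions based on the public dumps and should be double-checked:

```python
import csv

def official_codes(path="iso-639-3.tab"):
    # SIL's registry file is tab-separated; column "Id" holds the 3-letter code
    with open(path, encoding="utf-8") as f:
        return {row["Id"] for row in csv.DictReader(f, delimiter="\t")}

def glottolog_iso(path="languoid.csv"):
    # Glottolog's languoid table maps each glottocode to an ISO 639-3
    # code in its "iso639P3code" column (empty when there is none)
    with open(path, encoding="utf-8") as f:
        return {row["id"]: row["iso639P3code"] for row in csv.DictReader(f)}

def nearest_iso(glottocode, registry, glottolog):
    """Nearest ISO 639-3 code for a glottocode, validated against the registry."""
    code = glottolog.get(glottocode, "")
    return code if code in registry else None
```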
The former are for the moment in corpus card description, language details, and in subcorpora names.","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4880). All of your documentation changes will be reflected on that endpoint."],"created_at":1661283158000,"updated_at":1661345566000,"closed_at":1661345566000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Added names of less-studied languages (nru \u2013 Narua and jya \u2013 Japhug) for existing datasets.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4880\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4880","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4880","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4880.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4880.patch","merged_at":1661345566000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4879","id":1348346407,"node_id":"PR_kwDODunzps49qbOl","number":4879,"title":"Fix Citation Information section in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4879). All of your documentation changes will be reflected on that endpoint."],"created_at":1661278003000,"updated_at":1661314148000,"closed_at":1661314147000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix Citation Information section in dataset cards.\r\n\r\nThis PR partially fixes the Citation Information section in dataset cards. 
Subsequent PRs will follow to complete this task.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4879\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4879","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4879","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4879.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4879.patch","merged_at":1661314147000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4878","id":1348270141,"node_id":"I_kwDODunzps5QXPg9","number":4878,"title":"[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892884,"node_id":"MDU6TGFiZWwxOTM1ODkyODg0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/help%20wanted","name":"help wanted","color":"008672","default":true,"description":"Extra attention is needed"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Resolved via https:\/\/github.com\/huggingface\/datasets\/pull\/4937."],"created_at":1661274595000,"updated_at":1663077606000,"closed_at":1663077605000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)\r\n\r\nSee\r\n\r\nhttps:\/\/github.com\/huggingface\/huggingface_hub\/blob\/43499582b19df1ed081a5b2bd7a364e9cacdc91d\/src\/huggingface_hub\/hf_api.py#L2164-L2169\r\n\r\nIt's used 
here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fcfcc951a73efbc677f9def9a8707d0af93d5890\/src\/datasets\/dataset_dict.py#L1373-L1381\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fdcb8b144ce3ef241410281e125bd03e87b8caa1\/src\/datasets\/arrow_dataset.py#L4354-L4362\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fdcb8b144ce3ef241410281e125bd03e87b8caa1\/src\/datasets\/arrow_dataset.py#L4197-L4213\r\n\r\nWe should remove it.\r\n\r\nMaybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4878\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4877","id":1348246755,"node_id":"PR_kwDODunzps49qF-w","number":4877,"title":"Fix documentation card of covid_qa_castorini dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4877). 
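On the `identical_ok` cleanup discussed above: since `huggingface_hub` now ignores the argument (re-uploading unchanged content is a no-op server-side), the fix amounts to dropping it from the call sites. A hedged sketch of the resulting call, with placeholder file and repo values:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="data/train.parquet",  # placeholder local file
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",         # placeholder repo
    repo_type="dataset",
    # no identical_ok: uploading identical content is simply harmless
)
```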
All of your documentation changes will be reflected on that endpoint."],"created_at":1661273553000,"updated_at":1661277901000,"closed_at":1661277900000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix documentation card of covid_qa_castorini dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4877\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4877","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4877","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4877.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4877.patch","merged_at":1661277900000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4876","id":1348202678,"node_id":"I_kwDODunzps5QW_C2","number":4876,"title":"Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\
/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["also @osanseviero @Pierrci @SBrandeis potentially","Love this in principle \ud83d\ude80 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution; wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config? i.e., always having the `configs` field. This makes parsing the metadata easier IMO.\r\n\r\nMight also be good to wrap the tags under a `datasets_info` tag as follows:\r\n\r\n```yaml\r\ndescription: ...\r\ncitation: ...\r\ndataset_infos:\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n configs:\r\n - ...\r\n[...]\r\n```\r\n\r\nLet's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.","> Let's keep in mind users might rely on dataset_infos.json already.\r\n\r\nYeah, we'll keep full backward compatibility.\r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\nThe main things that may use or ingest these data IMO are:\r\n- users in the UI or IDE\r\n- `datasets` to populate the `DatasetInfo` Python object\r\n- moon landing which is already parsing YAML\r\n\r\nAm I missing something? 
If not I think it's ok to use YAML\r\n\r\n> Might also be good to wrap the tags under a datasets_info tag as follows:\r\n\r\nMaybe one single syntax like this then?\r\n```yaml\r\ndataset_infos:\r\n- config: unlabeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nand when you have only one config\r\n```yaml\r\ndataset_infos:\r\n- config: default\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n```","love the idea, and the trend in general to move more things (like tasks) to a single place (YAML).\r\n\r\nalso, if you browse files on a dataset's page (in \"Files and versions\"), raw `README.md` files look nice and readable, while `.json` files are just one long line that users need to scroll. \r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\ndo users often parse the `datasets_infos.json` file themselves? ","> do users often parse datasets_infos.json file themselves?\r\n\r\nNot AFAIK, but I'm sure there are a few users.\r\nUsers who access this info via the `DatasetInfo` from `datasets` won't see the change though, e.g.\r\n```python\r\n>>> from datasets import get_dataset_infos\r\n>>> get_dataset_infos(\"squad\")\r\n{'plain_text': DatasetInfo(description='Stanford Question Answering Dataset...\r\n```","> Maybe one single syntax like this then?\r\n\r\nLGTM!\r\n\r\n> The main things that may use or ingest these data IMO are:\r\n> - users in the UI or IDE\r\n> - datasets to populate the DatasetInfo Python object\r\n> - moon landing which is already parsing YAML\r\n\r\nFair point!\r\n\r\nHaving dataset info in the README's YAML is great for API \/ `huggingface_hub` consumers as well, as it will be inserted in the `cardData` field out of the box \ud83d\udd25 \r\n","Very supportive of this!\r\n\r\nNesting an array of configs inside `dataset_infos: ` sounds good to me. One small tweak is that `config: default` can be optional for the default config (which can be the first one by convention)\r\n\r\nWe'll be able to implement metadata validation on the Hub side so we ensure that those metadata are always in the right format (maybe for @coyotte508 ? cc @Pierrci). From a quick glance the `features` might be the harder part to validate here, any doc will be welcome.\r\n\r\n### Other high-level points:\r\n- as we move from mostly academic datasets to *all* datasets (which include the data inside the repos), my intuition is that more and more datasets (Hub-stored) are going to be **single-config**\r\n- similarly, fewer and fewer datasets will have a loading script, **just the data + some metadata**\r\n- to lower the barrier to entry to contribution, in the long term users shouldn't need to compute\/update this data via a command line. 
It could be filled automatically on the Hub through a \"bot\" inside Discussions & Pull requests for instance.","re: `config: default`\r\n\r\nNote also that the default config is not named `default`, afaiu, but created from the repo name, e.g.: https:\/\/huggingface.co\/datasets\/nbtpj\/bionlp2021SAS default config is `nbtpj--bionlp2021SAS` (which is awful)","> Note also that the default config is not named default, afaiu, but created from the repo name, e.g.: https:\/\/huggingface.co\/datasets\/nbtpj\/bionlp2021SAS default config is nbtpj--bionlp2021SAS (which is awful)\r\n\r\nWe can change this to `default` I think, or something else","> From a quick glance the features might be the harder part to validate here, any doc will be welcome.\r\n\r\nI dug into features validation, see:\r\n\r\n- the OpenAPI spec: https:\/\/github.com\/huggingface\/datasets-server\/blob\/main\/chart\/static-files\/openapi.json#L460-L697\r\n- the node.js code: https:\/\/github.com\/huggingface\/moon-landing\/blob\/upgrade-datasets-server-client\/server\/lib\/datasets\/FeatureType.ts","> We can change this to default I think, or something else\r\n\r\nI created https:\/\/github.com\/huggingface\/datasets\/issues\/4902 to discuss that","> Note also that the default config is not named `default`, afaiu, but created from the repo name\r\n\r\nin case of single-config you can even hide the config name from the UI IMO\r\n\r\n> I dug into features validation, see: the OpenAPI spec\r\n\r\nin moon-landing we use [Joi](https:\/\/joi.dev\/api\/) to validate metadata so we would need to generate the Joi code from the OpenAPI spec (or from somewhere else) but I guess that's doable \u2013 or just rewrite it manually, as it won't change often","I remember there was an ongoing discussion on this topic:\r\n- #3507\r\n\r\nI recall some of the concerns raised in that discussion:\r\n- @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: [#3507 (comment)](https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1056997627)\r\n- @severo: [#3507 (comment)](https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1042779776)\r\n - the metadata header might be very long, before reaching the start of the README\/dataset card. \r\n - It also somewhat prevents including large strings like the checksums\r\n - two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file. \r\n- @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: [#3507 (comment)](https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1033752157)","Thanks for bringing these points up!\r\n\r\n> @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1056997627\r\n\r\nThe TFDS implementation is not super advanced, so it's ok IMO as long as we don't break all the dataset scripts. 
Note that users can still use `to_tf_dataset`.\r\n\r\nWe had a chance to discuss the two nexts points with @julien-c as well:\r\n\r\n> @severo: https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1042779776\r\nthe metadata header might be very long, before reaching the start of the README\/dataset card.\r\n\r\nIf we don't add the checksums we should be fine. We can also set a maximum number of supported configs in the README to keep it readable.\r\n\r\n> @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: https:\/\/github.com\/huggingface\/datasets\/issues\/3507#issuecomment-1033752157\r\n\r\nI guess the \"HF Hub actions\" could open PRs to do the same in the YAML directly\r\n","Thanks for linking that similar discussion for context, @albertvillanova!"],"created_at":1661271401000,"updated_at":1661782709000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently there are two places to find metadata for datasets:\r\n- datasets_infos.json, which contains **per dataset config**\r\n - description\r\n - citation\r\n - license\r\n - splits and sizes\r\n - checksums of the data files\r\n - feature types\r\n - and more\r\n- YAML tags, which contain\r\n - license\r\n - language\r\n - train-eval-index\r\n - and more\r\n\r\nIt would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have. \r\n\r\nOne way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description\/citation is already in the dataset card so we probably don't need to have them in the YAML card, it would be redundant.\r\n\r\nHere is an example for SQuAD\r\n```yaml\r\n\r\ndownload_size: 35142551\r\ndataset_size: 89789763\r\nversion: 1.0.0\r\nsplits:\r\n- name: train\r\n num_examples: 87599\r\n num_bytes: 79317110\r\n- name: validation\r\n num_examples: 10570\r\n num_bytes: 10472653\r\nfeatures:\r\n- name: id\r\n dtype: string\r\n- name: title\r\n dtype: string\r\n- name: context\r\n dtype: string\r\n- name: question\r\n dtype: string\r\n- name: answers\r\n struct:\r\n - name: text\r\n list:\r\n dtype: string\r\n - name: answer_start\r\n list:\r\n dtype: int32\r\n```\r\n\r\nSince there is only one configuration for SQuAD, this structure is ok. 
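Since a recurring concern in this thread is that YAML in README.md is more fastidious to consume than a JSON file, here is a minimal sketch of the extraction step (standard `---` front matter delimiters, parsed with PyYAML; this is not an official `datasets` helper):

```python
import yaml  # pip install pyyaml

def card_metadata(readme_path="README.md"):
    """Parse the YAML block between the two leading '---' lines of a card."""
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        return {}  # this card has no YAML front matter
    _, front_matter, _body = text.split("---", 2)
    return yaml.safe_load(front_matter)

meta = card_metadata()
print(meta.get("splits"))  # e.g. the split sizes from the example above
```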
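And to give an idea of what the Hub-side validation mentioned above has to cover, an illustrative Python check over `features` entries as they appear in the YAML examples of this thread (the real check is a Joi schema kept in sync with the datasets-server OpenAPI spec, not this code):

```python
def validate_features(features):
    # Illustrative only; mirrors the shape of the YAML examples above.
    assert isinstance(features, list) and features, "features must be a non-empty list"
    for feat in features:
        assert isinstance(feat, dict) and "name" in feat, f"bad entry: {feat}"
        kinds = [k for k in ("dtype", "struct", "list") if k in feat]
        assert len(kinds) == 1, f"need exactly one of dtype/struct/list: {feat}"
        if feat.get("dtype") == "ClassLabel":
            assert feat.get("names"), "a ClassLabel must list its label names"

validate_features([
    {"name": "text", "dtype": "string"},
    {"name": "label", "dtype": "ClassLabel", "names": ["negative", "positive"]},
])
```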
For datasets with several configs we can decide in a second step, but IMO it would be ok to have these fields per config using another syntax\r\n```yaml\r\nconfigs:\r\n- config: unlabeled\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nSo in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field\r\n\r\nAlternatively we could keep config-specific stuff in the `dataset_infos.json` as it is today\r\n\r\nNot sure yet what's the best approach here but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876\/reactions","total_count":7,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":3,"rocket":0,"eyes":4},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4876\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4875","id":1348095686,"node_id":"I_kwDODunzps5QWk7G","number":4875,"title":"`_resolve_features` ignores the 
token","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Your HF_ENDPOINT seems wrong because of the extra \"\/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https:\/\/hub-ci.huggingface.co\/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https:\/\/hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"\/\" ?","Oh, yes, sorry, but it's not the issue.\r\n\r\nIn my code, I set `HF_ENDPOINT=https:\/\/hub-ci.huggingface.co`. I added `os.environ[\"HF_ENDPOINT\"] = \"https:\/\/hub-ci.huggingface.co\/\"` afterward just to indicate that we had to have this env var and made a mistake there","I can't reproduce on my side. 
I tried using a private dataset repo with a CSV file on hub-ci\r\n\r\nWhat's your version of `huggingface_hub`?","I can't reproduce either... Not sure what has occurred, very sorry to have made you lose your time on that "],"created_at":1661266656000,"updated_at":1661358821000,"closed_at":1661358810000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen calling [`_resolve_features()`](https:\/\/github.com\/huggingface\/datasets\/blob\/54b532a8a2f5353fdb0207578162153f7b2da2ec\/src\/datasets\/iterable_dataset.py#L1255) on a gated dataset, i.e., a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\nimport os\r\n\r\nos.environ[\"HF_ENDPOINT\"] = \"https:\/\/hub-ci.huggingface.co\/\"\r\nhf_token = \"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\"\r\n\r\nfrom datasets import load_dataset\r\n\r\n# public\r\ndataset_name = \"__DUMMY_DATASETS_SERVER_USER__\/repo_csv_data-16612654226756\"\r\nconfig_name = \"__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756\"\r\nsplit_name = \"train\"\r\n\r\niterable_dataset = load_dataset(\r\n dataset_name,\r\n name=config_name,\r\n split=split_name,\r\n streaming=True,\r\n use_auth_token=hf_token,\r\n)\r\niterable_dataset = iterable_dataset._resolve_features()\r\nprint(iterable_dataset.features)\r\n\r\n# gated\r\ndataset_name = \"__DUMMY_DATASETS_SERVER_USER__\/repo_csv_data-16612654317644\"\r\nconfig_name = \"__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644\"\r\nsplit_name = \"train\"\r\n\r\n\r\niterable_dataset = load_dataset(\r\n dataset_name,\r\n name=config_name,\r\n split=split_name,\r\n streaming=True,\r\n use_auth_token=hf_token,\r\n)\r\ntry:\r\n iterable_dataset = iterable_dataset._resolve_features()\r\nexcept FileNotFoundError as e:\r\n print(\"FAILS\")\r\n```\r\n\r\n## Expected results\r\n\r\nI expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided.\r\n\r\n## Actual results\r\n\r\nAn exception is thrown on gated datasets.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35\r\n- Python version: 3.9.6\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4875\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4874","id":1347618197,"node_id":"PR_kwDODunzps49n_nI","number":4874,"title":"[docs] Some tiny doc 
tweaks","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4874). All of your documentation changes will be reflected on that endpoint."],"created_at":1661246380000,"updated_at":1661362077000,"closed_at":1661362076000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4874\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4874","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4874","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4874.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4874.patch","merged_at":1661362076000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4873","id":1347592022,"node_id":"I_kwDODunzps5QUp9W","number":4873,"title":"Multiple dataloader memory 
error","user":{"login":"cyk1337","id":13767887,"node_id":"MDQ6VXNlcjEzNzY3ODg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13767887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cyk1337","html_url":"https:\/\/github.com\/cyk1337","followers_url":"https:\/\/api.github.com\/users\/cyk1337\/followers","following_url":"https:\/\/api.github.com\/users\/cyk1337\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cyk1337\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cyk1337\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cyk1337\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cyk1337\/orgs","repos_url":"https:\/\/api.github.com\/users\/cyk1337\/repos","events_url":"https:\/\/api.github.com\/users\/cyk1337\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cyk1337\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating\/interleaving the ones with the same structure\/task (the API is `{concatenate_datasets\/interleave_datasets}([dset1, ..., dset_N])`)?","Hi @mariosasko, thank you for your reply. I tried pre-concatenating different datasets into one, but one key need is to keep each batch the same data type. Considering that the concatenate-then-segment operation for prefetched samples may span across different data types after concatenating\/interleaving (cuz different data sources are mixed), any solution to remain the same data source for each batch?"],"created_at":1661245190000,"updated_at":1662692577000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"For the use of multiple datasets and tasks, we use around more than 200+ dataloaders, then pass it into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`\r\nIt causes the memory error when generating batches. 
Any solutions to it?\r\n\r\n```bash\r\n File \"\/home\/xxx\/my_code\/src\/utils\/data_utils.py\", line 54, in generate_batch\r\n x = next(iterator)\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/accelerate\/data_loader.py\", line 301, in __iter__\r\n for batch in super().__iter__():\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 28, in fetch\r\n data.append(next(self.dataset_iter))\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/accelerate\/data_loader.py\", line 249, in __iter__\r\n for element in self.dataset:\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/datasets\/iterable_dataset.py\", line 503, in __iter__\r\n for key, example in self._iter():\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/datasets\/iterable_dataset.py\", line 500, in _iter\r\n yield from ex_iterable\r\n File \"\/home\/xxx\/anaconda3\/envs\/pt1.7\/lib\/python3.7\/site-packages\/datasets\/iterable_dataset.py\", line 231, in __iter__\r\n new_key = \"_\".join(str(key) for key in keys)\r\nMemoryError\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4873\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4872","id":1347180765,"node_id":"PR_kwDODunzps49mjU9","number":4872,"title":"[WIP] Docs for creating an audio 
dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4872). All of your documentation changes will be reflected on that endpoint.","Awesome thanks ! I think we can also encourage TAR archives as for image dataset scripts (feel free to copy paste some parts from there lol)","Thanks for all the great feedback @polinaeterna and @lhoestq! \ud83e\udd70\r\n\r\nI added all the other feedback, and I'll look into the `librivox-indonesia` script now!"],"created_at":1661216829000,"updated_at":1662925914000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 
\ud83d\ude42","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4872\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4872","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4872","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4872.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4872.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4871","id":1346703568,"node_id":"PR_kwDODunzps49k9Rm","number":4871,"title":"Fix: wmt datasets - fix CWMT zh subsets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4871). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1661186529000,"updated_at":1661248820000,"closed_at":1661248819000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix https:\/\/github.com\/huggingface\/datasets\/issues\/4575\r\n\r\nTODO: run `datasets-cli test`:\r\n- [x] wmt17\r\n- [x] wmt18\r\n- [x] wmt19","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4871\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4871","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4871","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4871.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4871.patch","merged_at":1661248819000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4870","id":1346160498,"node_id":"PR_kwDODunzps49jGxD","number":4870,"title":"audio folder check CI","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1661163353000,"updated_at":1661171672000,"closed_at":1661170780000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4870\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4870","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4870","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4870.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4870.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4869","id":1345513758,"node_id":"PR_kwDODunzps49hBGY","number":4869,"title":"Fix typos in documentation","user":{"login":"fl-lo","id":85993954,"node_id":"MDQ6VXNlcjg1OTkzOTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/85993954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fl-lo","html_url":"https:\/\/github.com\/fl-lo","followers_url":"https:\/\/api.github.com\/users\/fl-lo\/followers","following_url":"https:\/\/api.github.com\/users\/fl-lo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fl-lo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fl-lo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fl-lo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fl-lo\/orgs","repos_url":"https:\/\/api.github.com\/users\/fl-lo\/repos","events_url":"https:\/\/api.github.com\/users\/fl-lo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fl-lo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1661094603000,"updated_at":1661160339000,"closed_at":1661159398000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4869\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4869","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4869","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4869.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4869.patch","merged_at":1661159398000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4868","id":1345191322,"node_id":"PR_kwDODunzps49gBk0","number":4868,"title":"adding mafand to datasets","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi @dadelani, thanks for your awesome contribution!!! :heart: \r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under your Hub organization namespace: [Masakhane NLP](https:\/\/huggingface.co\/masakhane). This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"masakhane\/mafand\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https:\/\/huggingface.co\/docs\/datasets\/dataset_script)\r\n- [Share](https:\/\/huggingface.co\/docs\/datasets\/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance\/support.","thank you for the comment. I have moved it to the Hub https:\/\/huggingface.co\/datasets\/masakhane\/mafand","Great job, @dadelani!!\r\n\r\nPlease, note that in the README.md file, the YAML tags should be preceded and followed by three dashes `---`, so that they are properly parsed. See, e.g.: https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/templates\/README.md","Also you could replace the line:\r\n```\r\n# Dataset Card for [Needs More Information]\r\n```\r\nwith\r\n```\r\n# Dataset Card for MAFAND-MT\r\n```","Great, thank you for the feedback. 
I have fixed both issues."],"created_at":1661009174000,"updated_at":1661166050000,"closed_at":1661158343000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I'm adding the MAFAND dataset by Masakhane based on the paper\/repository below:\r\n\r\nPaper: https:\/\/aclanthology.org\/2022.naacl-main.223\/\r\nCode: https:\/\/github.com\/masakhane-io\/lafand-mt\r\n\r\nPlease, help merge this.\r\n\r\nEverything works except for creating the dummy data file","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4868\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4868","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4868","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4868.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4868.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4867","id":1344982646,"node_id":"PR_kwDODunzps49fZle","number":4867,"title":"Complete tags of superglue dataset card","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660952679000,"updated_at":1661159643000,"closed_at":1661158711000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Related to #4479 
.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4867\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4867","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4867","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4867.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4867.patch","merged_at":1661158711000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4866","id":1344809132,"node_id":"PR_kwDODunzps49e1CP","number":4866,"title":"amend docstring for dunder","user":{"login":"schafsam","id":37704298,"node_id":"MDQ6VXNlcjM3NzA0Mjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37704298?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/schafsam","html_url":"https:\/\/github.com\/schafsam","followers_url":"https:\/\/api.github.com\/users\/schafsam\/followers","following_url":"https:\/\/api.github.com\/users\/schafsam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/schafsam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/schafsam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/schafsam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/schafsam\/orgs","repos_url":"https:\/\/api.github.com\/users\/schafsam\/repos","events_url":"https:\/\/api.github.com\/users\/schafsam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/schafsam\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4866). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1660936155000,"updated_at":1662741191000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"Display dunder methods in docstrings with underscores and not bold markdown.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4866\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4866","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4866","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4866.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4866.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4865","id":1344552626,"node_id":"I_kwDODunzps5QJD6y","number":4865,"title":"Dataset Viewer issue for MoritzLaurer\/multilingual_nli","user":{"login":"MoritzLaurer","id":41862082,"node_id":"MDQ6VXNlcjQxODYyMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41862082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MoritzLaurer","html_url":"https:\/\/github.com\/MoritzLaurer","followers_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/followers","following_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/orgs","repos_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/repos","events_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MoritzLaurer\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https:\/\/huggingface.co\/datasets\/MoritzLaurer\/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?","Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. 
","I'm closing this issue then.","> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version"],"created_at":1660920920000,"updated_at":1661179634000,"closed_at":1661148800000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\n_No response_\r\n\r\n### Description\r\n\r\nI've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https:\/\/huggingface.co\/datasets\/MoritzLaurer\/multilingual_nli\r\n\r\nIt displays the error: \r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The dataset does not exist.\r\n```\r\n\r\nWeirdly enough the dataviewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https:\/\/huggingface.co\/datasets\/MoritzLaurer\/multilingual_nli_test\r\n\r\nDo you know why the dataviewer is not working?\r\n\r\n### Owner\r\n\r\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4865\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4864","id":1344410043,"node_id":"I_kwDODunzps5QIhG7","number":4864,"title":"Allow pathlib PoxisPath in Dataset.read_json","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660913957000,"updated_at":1660913957000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your 
feature request related to a problem? Please describe.**\r\n```\r\nfrom pathlib import Path\r\nfrom datasets import Dataset\r\nds = Dataset.read_json(Path('data.json'))\r\n```\r\ncauses an error\r\n```\r\nAttributeError: 'PosixPath' object has no attribute 'decode'\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nIt should be able to accept a PosixPath and read the JSON from it.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4864\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4863","id":1343737668,"node_id":"I_kwDODunzps5QF89E","number":4863,"title":"TFDS wiki_dialog dataset to Huggingface dataset","user":{"login":"djaym7","id":12378820,"node_id":"MDQ6VXNlcjEyMzc4ODIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12378820?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/djaym7","html_url":"https:\/\/github.com\/djaym7","followers_url":"https:\/\/api.github.com\/users\/djaym7\/followers","following_url":"https:\/\/api.github.com\/users\/djaym7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/djaym7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/djaym7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/djaym7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/djaym7\/orgs","repos_url":"https:\/\/api.github.com\/users\/djaym7\/repos","events_url":"https:\/\/api.github.com\/users\/djaym7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/djaym7\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@albertvillanova any help? 
The linked dataset is in beam format, which is similar to the wikipedia dataset in huggingface that you scripted.","Nvm, I was able to port it to huggingface datasets, will upload to the hub soon","https:\/\/huggingface.co\/datasets\/djaym7\/wiki_dialog","Thanks for the addition, @djaym7."],"created_at":1660863990000,"updated_at":1661161305000,"closed_at":1661145533000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Wiki_dialog*\r\n- **Description:** https:\/\/github.com\/google-research\/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2205.09073\r\n- **Data:** https:\/\/github.com\/google-research\/dialog-inpainting\r\n- **Motivation:** *Research and development on the biggest corpus of dialog data*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4863\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4862","id":1343464699,"node_id":"I_kwDODunzps5QE6T7","number":4862,"title":"Got \"AttributeError: 'xPath' object has no attribute 'read'\" when loading an excel dataset with my own code","user":{"login":"yana-xuyan","id":38536635,"node_id":"MDQ6VXNlcjM4NTM2NjM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38536635?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yana-xuyan","html_url":"https:\/\/github.com\/yana-xuyan","followers_url":"https:\/\/api.github.com\/users\/yana-xuyan\/followers","following_url":"https:\/\/api.github.com\/users\/yana-xuyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yana-xuyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yana-xuyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yana-xuyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yana-xuyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/yana-xuyan\/repos","events_url":"https:\/\/api.github.com\/users\/yana-xuyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yana-xuyan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["What's more, the downloaded data is actually a folder instead of an excel file.","Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets==2.4.0`. ","Hi @yana-xuyan, thanks for reporting.\r\n\r\nIndeed you already found the answer: an Excel file should be just downloaded and not downloaded-and-extracted.\r\n\r\nThe reason why is that if you call also extract, our library will try to infer the compression format (and extract it). And Excel files are viewed as ZIP files and extracted as so (into a directory). This is because the Office Open XML is indeed a zipped file under the hood): https:\/\/en.wikipedia.org\/wiki\/Office_Open_XML\r\n> Office Open XML (also informally known as OOXML) is a **zipped**, XML-based file format\r\n```python\r\nimport zipfile\r\n\r\nzipfile.is_zipfile(\"filename.xlsx\")\r\n```\r\nreturns `True`.","Hi @albertvillanova, thank you for your reply! Do you have any clue on why the same error still exists with `datasets==2.4.0` even after I don't extract the downloaded file? 
FYI, if I downgrade to `datasets==2.2.2`, the code works fine.","I guess this has to do with the cache: you should remove the previously-wrongly generated directory from the cache; otherwise `datasets` tries to re-use it."],"created_at":1660847774000,"updated_at":1661937908000,"closed_at":1661937908000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\n# The dataset function is as follows\uff1a\r\nfrom pathlib import Path\r\nfrom typing import Dict, List, Tuple\r\n\r\nimport datasets\r\nimport pandas as pd\r\n\r\n_CITATION = \"\"\"\\\r\n\"\"\"\r\n\r\n_DATASETNAME = \"jadi_ide\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"\"\r\n_LICENSE = \"Unknown\"\r\n_URLS = {\r\n _DATASETNAME: \"https:\/\/github.com\/fathanick\/Javanese-Dialect-Identification-from-Twitter-Data\/raw\/main\/Update 16K_Dataset.xlsx\",\r\n}\r\n_SOURCE_VERSION = \"1.0.0\"\r\n\r\n\r\nclass JaDi_Ide(datasets.GeneratorBasedBuilder):\r\n\r\n SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)\r\n\r\n BUILDER_CONFIGS = [\r\n NusantaraConfig(\r\n name=\"jadi_ide_source\",\r\n version=SOURCE_VERSION,\r\n description=\"JaDi-Ide source schema\",\r\n schema=\"source\",\r\n subset_id=\"jadi_ide\",\r\n ),\r\n ]\r\n\r\n DEFAULT_CONFIG_NAME = \"source\"\r\n\r\n def _info(self) -> datasets.DatasetInfo:\r\n if self.config.schema == \"source\":\r\n features = datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"), \r\n \"text\": datasets.Value(\"string\"), \r\n \"label\": datasets.Value(\"string\")\r\n }\r\n )\r\n\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=features,\r\n homepage=_HOMEPAGE,\r\n license=_LICENSE,\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n # Dataset does not have predetermined split, putting all as TRAIN\r\n urls = _URLS[_DATASETNAME]\r\n base_dir = Path(dl_manager.download_and_extract(urls))\r\n data_files = {\"train\": base_dir}\r\n\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": data_files[\"train\"],\r\n \"split\": \"train\",\r\n },\r\n ),\r\n ]\r\n\r\n def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:\r\n \"\"\"Yields examples as (key, example) tuples.\"\"\"\r\n df = pd.read_excel(filepath, engine='openpyxl')\r\n df.columns = [\"id\", \"text\", \"label\"]\r\n\r\n if self.config.schema == \"source\":\r\n for row in df.itertuples():\r\n ex = {\r\n \"id\": str(row.id),\r\n \"text\": row.text,\r\n \"label\": row.label,\r\n }\r\n yield row.id, ex\r\n\r\n```\r\n\r\n## Expected results\r\nExpecting to load the dataset smoothly.\r\n\r\n## Actual results\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 1751, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 705, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1227, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File 
\"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 793, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1216, in _prepare_split\r\n desc=f\"Generating {split_info.name} split\",\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/xuyan\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/jadi_ide\/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519\/jadi_ide.py\", line 107, in _generate_examples\r\n df = pd.read_excel(filepath, engine='openpyxl')\r\n File \"\/home\/xuyan\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 701, in xpandas_read_excel\r\n return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)\r\nAttributeError: 'xPath' object has no attribute 'read'\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid\r\n- Python version: 3.7.4\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 0.25.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4862\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4861","id":1343260220,"node_id":"I_kwDODunzps5QEIY8","number":4861,"title":"Using disk for memory with the method `from_dict`","user":{"login":"HugoLaurencon","id":44556846,"node_id":"MDQ6VXNlcjQ0NTU2ODQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44556846?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/HugoLaurencon","html_url":"https:\/\/github.com\/HugoLaurencon","followers_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/followers","following_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/orgs","repos_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/repos","events_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/HugoLaurencon\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature 
or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660835898000,"updated_at":1660835898000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM error.\r\n\r\n**Describe the solution you'd like**\r\nThe method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead.\r\n\r\n**Describe alternatives you've considered**\r\nTo solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4861\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4860","id":1342311540,"node_id":"PR_kwDODunzps49WjEu","number":4860,"title":"Add collection3 dataset","user":{"login":"pefimov","id":16446994,"node_id":"MDQ6VXNlcjE2NDQ2OTk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16446994?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pefimov","html_url":"https:\/\/github.com\/pefimov","followers_url":"https:\/\/api.github.com\/users\/pefimov\/followers","following_url":"https:\/\/api.github.com\/users\/pefimov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pefimov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pefimov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pefimov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pefimov\/orgs","repos_url":"https:\/\/api.github.com\/users\/pefimov\/repos","events_url":"https:\/\/api.github.com\/users\/pefimov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pefimov\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @pefimov. Thanks for you awesome work on this dataset contribution.\r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under the appropriate Hub organization namespace. 
This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"\/collection3\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https:\/\/huggingface.co\/docs\/datasets\/dataset_script)\r\n- [Share](https:\/\/huggingface.co\/docs\/datasets\/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance\/support. ","> However, now we are using the Hub to add new datasets, instead of this GitHub repo.\r\n> \r\n> You could share this dataset under the appropriate Hub organization namespace. This way the dataset will be accessible using:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"\/collection3\")\r\n> ```\r\n> \r\nHi @albertvillanova. Thank you for your response.\r\n\r\nI thought that Collection3 is a large and important dataset in Russian, presented in 2016, but not represented on huggingface.\r\n\r\nAlso, I am not related to the authors or the organisation behind the dataset","The current policy of sharing datasets on the Hub instead of in this GitHub repo has no relation to the importance of the dataset: https:\/\/huggingface.co\/docs\/datasets\/share#datasets-on-github-legacy \r\n> The distinction between a Hub dataset and a dataset from GitHub only comes from the legacy sharing workflow. It does not involve any ranking, decisioning, or opinion regarding the contents of the dataset itself.\r\n\r\nIt is not required to be an author\/owner (or belong to the organization that owns it) of the dataset in order to share it on the Hub (as it was not the case when sharing them on this GitHub repo). \r\n\r\nIt is recommended to share it under an organization namespace that makes sense though. For this specific dataset, do you know of a clear organization under which it could be shared on the Hub? Maybe \"labinform\", or \"Information Research Laboratory\" or \"Lomonosov Moscow State University\"?\r\n\r\nIn cases like this, where the org is not evident, one possibility could be to contact the dataset owners\/creators and ask them. According to the publication paper, the authors are:\r\n- V.A. Mozharova\r\n- N.V. 
Loukachevitch\r\n\r\nI think maybe it would be worth contacting them.","@pefimov I have contacted the authors (and put you in CC).","Reply from the authors:\r\n> It is better to use name: Research Computing Center of Lomonosov Moscow State University (short name RCC-MSU)\r\n> https:\/\/rcc.msu.ru\/en","I have created the corresponding org namespace and dataset empty repository: https:\/\/huggingface.co\/datasets\/RCC-MSU\/collection3\r\n\r\n@pefimov feel free to open a PR on the Hub if you are willing to do so: \r\n- Go to the *Community* tab on the repo: https:\/\/huggingface.co\/datasets\/RCC-MSU\/collection3\/discussions\r\n- And click: *New pull request* button\r\n\r\nDocs: [Pull requests and Discussions](https:\/\/huggingface.co\/docs\/hub\/repositories-pull-requests-discussions) on the Hub","Thanks"],"created_at":1660771902000,"updated_at":1661284965000,"closed_at":1661159339000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4860\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4860","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4860","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4860.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4860.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4859","id":1342231016,"node_id":"I_kwDODunzps5QANHo","number":4859,"title":"can't install using conda on Windows 10","user":{"login":"xoffey","id":22627691,"node_id":"MDQ6VXNlcjIyNjI3Njkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22627691?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xoffey","html_url":"https:\/\/github.com\/xoffey","followers_url":"https:\/\/api.github.com\/users\/xoffey\/followers","following_url":"https:\/\/api.github.com\/users\/xoffey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xoffey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xoffey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xoffey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xoffey\/orgs","repos_url":"https:\/\/api.github.com\/users\/xoffey\/repos","events_url":"https:\/\/api.github.com\/users\/xoffey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xoffey\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660766257000,"updated_at":1660766257000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip.\r\n\r\n## Steps to reproduce the bug\r\nconda install -c huggingface -c conda-forge datasets\r\n\r\n## Expected results\r\nShould have indicated successful installation.\r\n\r\n## Actual results\r\nSolving environment: failed with initial frozen solve. Retrying with flexible solve.\r\nSolving environment: failed with repodata from current_repodata.json, will retry with next repodata source.\r\n... took forever, so I cancelled it with ctrl-c\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0 # after installing with pip\r\n- Platform: Windows-10-10.0.19044-SP0\r\n- Python version: 3.9.12\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.2\r\n- conda version: 4.13.0\r\n\r\nconda info\r\n\r\n active environment : base\r\n active env location : G:\\anaconda2022\r\n shell level : 1\r\n user config file : C:\\Users\\michael\\.condarc\r\n populated config files : C:\\Users\\michael\\.condarc\r\n conda version : 4.13.0\r\n conda-build version : 3.21.8\r\n python version : 3.9.12.final.0\r\n virtual packages : __cuda=11.1=0\r\n __win=0=0\r\n __archspec=1=x86_64\r\n base environment : G:\\anaconda2022 (writable)\r\n conda av data dir : G:\\anaconda2022\\etc\\conda\r\n conda av metadata url : None\r\n channel URLs : https:\/\/conda.anaconda.org\/pytorch\/win-64\r\n https:\/\/conda.anaconda.org\/pytorch\/noarch\r\n https:\/\/conda.anaconda.org\/huggingface\/win-64\r\n https:\/\/conda.anaconda.org\/huggingface\/noarch\r\n https:\/\/conda.anaconda.org\/conda-forge\/win-64\r\n https:\/\/conda.anaconda.org\/conda-forge\/noarch\r\n https:\/\/conda.anaconda.org\/anaconda-fusion\/win-64\r\n https:\/\/conda.anaconda.org\/anaconda-fusion\/noarch\r\n https:\/\/repo.anaconda.com\/pkgs\/main\/win-64\r\n https:\/\/repo.anaconda.com\/pkgs\/main\/noarch\r\n https:\/\/repo.anaconda.com\/pkgs\/r\/win-64\r\n https:\/\/repo.anaconda.com\/pkgs\/r\/noarch\r\n https:\/\/repo.anaconda.com\/pkgs\/msys2\/win-64\r\n https:\/\/repo.anaconda.com\/pkgs\/msys2\/noarch\r\n package cache : G:\\anaconda2022\\pkgs\r\n C:\\Users\\michael\\.conda\\pkgs\r\n C:\\Users\\michael\\AppData\\Local\\conda\\conda\\pkgs\r\n envs directories : G:\\anaconda2022\\envs\r\n C:\\Users\\michael\\.conda\\envs\r\n C:\\Users\\michael\\AppData\\Local\\conda\\conda\\envs\r\n platform : win-64\r\n user-agent : conda\/4.13.0 requests\/2.27.1 CPython\/3.9.12 Windows\/10 Windows\/10.0.19044\r\n administrator : False\r\n netrc file : None\r\n offline mode : False\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4859\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4858","id":1340859853,"node_id":"I_kwDODunzps5P6-XN","number":4858,"title":"map() function removes columns when input_columns is not None","user":{"login":"pramodith","id":16939722,"node_id":"MDQ6VXNlcjE2OTM5NzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16939722?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pramodith","html_url":"https:\/\/github.com\/pramodith","followers_url":"https:\/\/api.github.com\/users\/pramodith\/followers","following_url":"https:\/\/api.github.com\/users\/pramodith\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pramodith\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pramodith\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pramodith\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pramodith\/orgs","repos_url":"https:\/\/api.github.com\/users\/pramodith\/repos","events_url":"https:\/\/api.github.com\/users\/pramodith\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pramodith\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Thanks for reporting! This looks like a bug. I've just opened a PR with the fix.","Awesome! Thank you. I'll close the issue once the PR gets merged. 
:-)"],"created_at":1660682550000,"updated_at":1663076928000,"closed_at":1663076928000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe map function, removes features from the dataset that are not present in the _input_columns_ list of columns, despite the columns being removed not mentioned in the _remove_columns_ argument.\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import Dataset\r\nds = Dataset.from_dict({\"a\" : [1,2,3],\"b\" : [0,1,0], \"c\" : [2,4,5]})\r\n\r\ndef double(x,y):\r\n x = x*2\r\n y = y*2\r\n return {\"d\" : x, \"e\" : y}\r\n\r\nds.map(double, input_columns=[\"a\",\"c\"])\r\n```\r\n\r\n## Expected results\r\n```\r\nDataset({\r\n features: ['a', 'b', 'c', 'd', 'e'],\r\n num_rows: 3\r\n})\r\n```\r\n## Actual results\r\n```\r\nDataset({\r\n features: ['a', 'c', 'd', 'e'],\r\n num_rows: 3\r\n})\r\n```\r\n\r\nIn this specific example feature **b** should not be removed.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: linux (colab)\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4858\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4857","id":1340397153,"node_id":"I_kwDODunzps5P5NZh","number":4857,"title":"No preprocessed wikipedia is working on huggingface\/datasets","user":{"login":"aninrusimha","id":30733039,"node_id":"MDQ6VXNlcjMwNzMzMDM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30733039?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aninrusimha","html_url":"https:\/\/github.com\/aninrusimha","followers_url":"https:\/\/api.github.com\/users\/aninrusimha\/followers","following_url":"https:\/\/api.github.com\/users\/aninrusimha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aninrusimha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aninrusimha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aninrusimha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aninrusimha\/orgs","repos_url":"https:\/\/api.github.com\/users\/aninrusimha\/repos","events_url":"https:\/\/api.github.com\/users\/aninrusimha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aninrusimha\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the 
preprocessed datasets are still available, as described in the dataset card, e.g.: https:\/\/huggingface.co\/datasets\/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ","This is working now, but I was getting an error a few days ago when running an existing script. Unfortunately I did not do a proper bug report, but for some reason I was unable to load the dataset due to a request being made to the wikimedia website. However, it's working now. Thanks for the reply!"],"created_at":1660658133000,"updated_at":1660743308000,"closed_at":1660743308000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe 20220301 wikipedia dump has been deprecated, so now there is no working wikipedia dump on huggingface\r\n\r\nhttps:\/\/huggingface.co\/datasets\/wikipedia\r\n\r\nhttps:\/\/dumps.wikimedia.org\/enwiki\/\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4857\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4856","id":1339779957,"node_id":"I_kwDODunzps5P22t1","number":4856,"title":"file missing when load_dataset with openwebtext on windows","user":{"login":"kingstarcraft","id":10361976,"node_id":"MDQ6VXNlcjEwMzYxOTc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10361976?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kingstarcraft","html_url":"https:\/\/github.com\/kingstarcraft","followers_url":"https:\/\/api.github.com\/users\/kingstarcraft\/followers","following_url":"https:\/\/api.github.com\/users\/kingstarcraft\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kingstarcraft\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kingstarcraft\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kingstarcraft\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kingstarcraft\/orgs","repos_url":"https:\/\/api.github.com\/users\/kingstarcraft\/repos","events_url":"https:\/\/api.github.com\/users\/kingstarcraft\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kingstarcraft\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3\/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into the cache_path 
```F:\/\/huggingface\/datasets\/downloads\/extracted\/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```,\r\nbut the same error is still raised, and I found that the file was removed from the cache_path after I ran run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:\/model\/roberta-base```."],"created_at":1660622662000,"updated_at":1660640792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and could not find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file in the 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3\/urlsf_subset00-16_data.xz with 7-zip.\r\n\r\n## Steps to reproduce the bug\r\n```sh\r\npython run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:\/model\/roberta-base\r\n```\r\nor \r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"openwebtext\", None, cache_dir=None, use_auth_token=None)\r\n```\r\n\r\n## Expected results\r\nLoading is successful.\r\n## Actual results\r\nTraceback (most recent call last):\r\n File \"D:\\Python\\v3.8.5\\lib\\site-packages\\datasets\\builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"D:\\Python\\v3.8.5\\lib\\site-packages\\datasets\\builder.py\", line 1227, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"D:\\Python\\v3.8.5\\lib\\site-packages\\datasets\\builder.py\", line 795, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\n[Errno 22] Invalid argument: 'F:\/\/huggingface\/datasets\/downloads\/extracted\/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa\/0015896-b1054262f7da52a0518521e29c8e352c.txt'\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: windows\r\n- Python version: 3.8.5 \r\n- PyArrow version: 9.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4856\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4855","id":1339699975,"node_id":"I_kwDODunzps5P2jMH","number":4855,"title":"Dataset Viewer issue for super_glue","user":{"login":"wzsxxa","id":54366859,"node_id":"MDQ6VXNlcjU0MzY2ODU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54366859?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wzsxxa","html_url":"https:\/\/github.com\/wzsxxa","followers_url":"https:\/\/api.github.com\/users\/wzsxxa\/followers","following_url":"https:\/\/api.github.com\/users\/wzsxxa\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wzsxxa\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wzsxxa\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wzsxxa\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wzsxxa\/orgs","repos_url":"https:\/\/api.github.com\/users\/wzsxxa\/repos","events_url":"https:\/\/api.github.com\/users\/wzsxxa\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wzsxxa\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https:\/\/huggingface.co\/datasets\/super_glue"],"created_at":1660613696000,"updated_at":1661162881000,"closed_at":1661162865000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/super_glue\n\n### Description\n\ncan't view super_glue dataset on the web page\n\n### Owner\n\n_No 
response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4855\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4853","id":1339456490,"node_id":"PR_kwDODunzps49NFNL","number":4853,"title":"Fix bug and checksums in exams dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660594677000,"updated_at":1660632237000,"closed_at":1660631346000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #4852.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4853\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4853","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4853","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4853.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4853.patch","merged_at":1660631346000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4852","id":1339450991,"node_id":"I_kwDODunzps5P1mZv","number":4852,"title":"Bug in multilingual_with_para config of exams dataset and checksums error","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? 
Is there a way to track the release?","Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you can use the version of the `exams` on our main branch by passing `revision` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"exams\", revision=\"main\")\r\n```"],"created_at":1660594492000,"updated_at":1663321855000,"closed_at":1660631347000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nThere is a bug for \"multilingual_with_para\" config in exams dataset:\r\n```python\r\nds = load_dataset(\".\/datasets\/exams\", split=\"train\")\r\n```\r\nraises:\r\n```\r\nKeyError: 'choices'\r\n```\r\n\r\nMoreover, there is a NonMatchingChecksumError:\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/multilingual\/with_paragraphs\/train_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/multilingual\/with_paragraphs\/dev_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/multilingual\/with_paragraphs\/test_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/test_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_bg_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_bg_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_hr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_hr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_hu_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_hu_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_it_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_it_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_mk_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_mk_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_pl_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_pl_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_pt_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_pt_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_sq_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_sq_with_para.jsonl.tar.gz', 
'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_sr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_sr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_tr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_tr_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/train_vi_with_para.jsonl.tar.gz', 'https:\/\/github.com\/mhardalov\/exams-qa\/raw\/main\/data\/exams\/cross-lingual\/with_paragraphs\/dev_vi_with_para.jsonl.tar.gz']\r\n```\r\n\r\nCC: @thesofakillers","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4852\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4851","id":1339085917,"node_id":"PR_kwDODunzps49L6ee","number":4851,"title":"Fix license tag and Source Data section in billsum dataset card","user":{"login":"kashif","id":8100,"node_id":"MDQ6VXNlcjgxMDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8100?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kashif","html_url":"https:\/\/github.com\/kashif","followers_url":"https:\/\/api.github.com\/users\/kashif\/followers","following_url":"https:\/\/api.github.com\/users\/kashif\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kashif\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kashif\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kashif\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kashif\/orgs","repos_url":"https:\/\/api.github.com\/users\/kashif\/repos","events_url":"https:\/\/api.github.com\/users\/kashif\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kashif\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","thanks @albertvillanova done thank you!"],"created_at":1660574220000,"updated_at":1661176584000,"closed_at":1661175659000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixed the data source and license 
fields","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4851\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4851","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4851","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4851.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4851.patch","merged_at":1661175659000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4850","id":1338702306,"node_id":"PR_kwDODunzps49KnZ8","number":4850,"title":"Fix test of _get_extraction_protocol for TAR files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660552678000,"updated_at":1660556576000,"closed_at":1660555726000,"author_association":"MEMBER","active_lock_reason":null,"body":"While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https:\/\/github.com\/huggingface\/datasets\/runs\/7818845285?check_suite_focus=true\r\n```\r\nXPASS tests\/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https:\/\/foo.bar\/train.tar] \r\n```\r\n\r\nThis PR:\r\n- refactors the test so that it tests the raise of the exceptions instead of xfailing\r\n- fixes the test for TAR files: it does not raise an exception, but returns \"tar\"\r\n- fixes some tests wrongly named: exchange `test_streaming_dl_manager_get_extraction_protocol` with 
`test_streaming_dl_manager_get_extraction_protocol_gg_drive`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4850\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4850","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4850","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4850.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4850.patch","merged_at":1660555726000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4849","id":1338273900,"node_id":"PR_kwDODunzps49JN8d","number":4849,"title":"1.18.x","user":{"login":"Mr-Robot-001","id":49282718,"node_id":"MDQ6VXNlcjQ5MjgyNzE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49282718?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mr-Robot-001","html_url":"https:\/\/github.com\/Mr-Robot-001","followers_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/followers","following_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/repos","events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660489759000,"updated_at":1660489802000,"closed_at":1660489802000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4849\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4849","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4849","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4849.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4849.patch","merged_at":null},"is_pull_request":true} 
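A rough sketch of the test refactor PR 4850 above describes, asserting raises explicitly instead of xfail-marking them. The test names are illustrative, and the import path follows the 2.x-era module layout, so it may differ in other versions:

```python
import pytest
from datasets.utils.streaming_download_manager import _get_extraction_protocol

def test_get_extraction_protocol_tar():
    # per the PR: a TAR url does not raise, it resolves to the "tar" protocol
    assert _get_extraction_protocol("https://foo.bar/train.tar") == "tar"

def test_get_extraction_protocol_throws():
    # assert the raise explicitly rather than via an xfail marker;
    # the exact exception type pinned by the real test may differ
    with pytest.raises(NotImplementedError):
        _get_extraction_protocol("https://foo.bar/train.UNSUPPORTED")
```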
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4848","id":1338271833,"node_id":"PR_kwDODunzps49JNj_","number":4848,"title":"a","user":{"login":"Mr-Robot-001","id":49282718,"node_id":"MDQ6VXNlcjQ5MjgyNzE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49282718?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mr-Robot-001","html_url":"https:\/\/github.com\/Mr-Robot-001","followers_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/followers","following_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/repos","events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660489276000,"updated_at":1660489799000,"closed_at":1660489799000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4848\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4848","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4848","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4848.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4848.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4847","id":1338270636,"node_id":"PR_kwDODunzps49JNWX","number":4847,"title":"Test win 
ci","user":{"login":"Mr-Robot-001","id":49282718,"node_id":"MDQ6VXNlcjQ5MjgyNzE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49282718?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mr-Robot-001","html_url":"https:\/\/github.com\/Mr-Robot-001","followers_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/followers","following_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/repos","events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mr-Robot-001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660489020000,"updated_at":1660489065000,"closed_at":1660489065000,"author_association":"NONE","active_lock_reason":null,"body":"aa","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4847\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4847","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4847","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4847.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4847.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4846","id":1337979897,"node_id":"PR_kwDODunzps49IYSC","number":4846,"title":"Update documentation card of miam 
dataset","user":{"login":"PierreColombo","id":22492839,"node_id":"MDQ6VXNlcjIyNDkyODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22492839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PierreColombo","html_url":"https:\/\/github.com\/PierreColombo","followers_url":"https:\/\/api.github.com\/users\/PierreColombo\/followers","following_url":"https:\/\/api.github.com\/users\/PierreColombo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PierreColombo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PierreColombo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PierreColombo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PierreColombo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PierreColombo\/repos","events_url":"https:\/\/api.github.com\/users\/PierreColombo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PierreColombo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Ahahah :D not sur how i broke something by updating the README :D ","Thanks for the fix @PierreColombo. \r\n\r\nOnce a README is modified, our CI runs tests on it, requiring additional quality fixes, so that all READMEs are progressively improved and have some minimal tags\/sections\/information.\r\n\r\nFor this specific README file, the additional quality requirements of the CI are: https:\/\/github.com\/huggingface\/datasets\/runs\/7819924428?check_suite_focus=true\r\n```\r\nE The following issues were found for the README at `\/home\/runner\/work\/datasets\/datasets\/datasets\/miam\/README.md`:\r\nE -\tSection `Additional Information` is missing subsection: `Dataset Curators`.\r\nE -\tSection `Additional Information` is missing subsection: `Contributions`.\r\nE -\t`Additional Information` has an extra subsection: `Benchmark Curators`. 
Skipping further validation checks for this subsection as expected structure is unknown.\r\n```","Thanks a lot Albert :)))"],"created_at":1660401535000,"updated_at":1660697404000,"closed_at":1660472768000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi !\r\nPaper has been published at EMNLP.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4846\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4846","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4846","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4846.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4846.patch","merged_at":1660472768000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4845","id":1337928283,"node_id":"PR_kwDODunzps49IOjf","number":4845,"title":"Mark CI tests as xfail if Hub HTTP error","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660387511000,"updated_at":1661230632000,"closed_at":1661229746000,"author_association":"MEMBER","active_lock_reason":null,"body":"In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.\r\n\r\nThis PR:\r\n- marks tests as xfailed only if the Hub raises a 500 error for:\r\n - test_upstream_hub\r\n- makes pytest report the xfailed\/xpassed tests.\r\n\r\nMore tests could also be marked if needed.\r\n\r\nExamples of CI failures due to temporary Hub HTTP errors:\r\n- FAILED 
tests\/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n - https:\/\/github.com\/huggingface\/datasets\/runs\/7806855399?check_suite_focus=true\r\n `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https:\/\/hub-ci.huggingface.co\/api\/datasets\/__DUMMY_TRANSFORMERS_USER__\/test-16603108028233\/commit\/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)`\r\n- FAILED tests\/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token\r\n - https:\/\/github.com\/huggingface\/datasets\/runs\/7840022996?check_suite_focus=true\r\n `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https:\/\/s3.us-east-1.amazonaws.com\/lfs-staging.huggingface.co\/repos\/81\/e3\/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089\/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject`\r\n- FAILED tests\/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private\r\n - https:\/\/github.com\/huggingface\/datasets\/runs\/7835921082?check_suite_focus=true\r\n `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https:\/\/hub-ci.huggingface.co\/api\/repos\/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!`\r\n- FAILED tests\/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list\r\n - https:\/\/github.com\/huggingface\/datasets\/runs\/7835920900?check_suite_focus=true\r\n - This is not 500, but 404:\r\n `requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https:\/\/hub-ci.huggingface.co\/datasets\/__DUMMY_TRANSFORMERS_USER__\/test-16605586458339.git\/info\/lfs\/objects](https:\/\/hub-ci.huggingface.co\/datasets\/__DUMMY_TRANSFORMERS_USER__\/test-16605586458339.git\/info\/lfs\/objects\/batch)`\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4845\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4845","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4845","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4845.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4845.patch","merged_at":1661229746000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4844","id":1337878249,"node_id":"PR_kwDODunzps49IFLa","number":4844,"title":"Add 'val' to VALIDATION_KEYWORDS. 
","user":{"login":"akt42","id":98386959,"node_id":"U_kgDOBd1EDw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/98386959?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akt42","html_url":"https:\/\/github.com\/akt42","followers_url":"https:\/\/api.github.com\/users\/akt42\/followers","following_url":"https:\/\/api.github.com\/users\/akt42\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akt42\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akt42\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akt42\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akt42\/orgs","repos_url":"https:\/\/api.github.com\/users\/akt42\/repos","events_url":"https:\/\/api.github.com\/users\/akt42\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akt42\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?","Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/b88a656cf94c4ad972154371c83c1af759fde522\/tests\/test_data_files.py#L598","_The documentation is not available anymore as the PR was closed or merged._","@akt42 note that there is some info about splits keywords in the docs: https:\/\/huggingface.co\/docs\/datasets\/main\/en\/repository_structure#split-names-keywords. I agree it's not clear that it applies not only to filenames, but to directories as well.\r\n\r\nI think \"val\" should be now added to the documentation source file here: https:\/\/github.com\/huggingface\/datasets\/blob\/main\/docs\/source\/repository_structure.mdx?plain=1#L98","@polinaeterna Thanks for notifying us that there is a list of supported keywords\r\n\r\nI've added \"val\" to that list and a test."],"created_at":1660373381000,"updated_at":1661854655000,"closed_at":1661854494000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR fixes #4839 by adding the word `\"val\"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `\"val\"` as well.\r\n\r\nI think the supported keywords have to be mentioned in the documentation as well, but I couldn't think of a proper place to add that.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4844\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4844","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4844","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4844.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4844.patch","merged_at":1661854494000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4843","id":1337668699,"node_id":"PR_kwDODunzps49HaWT","number":4843,"title":"Fix typo in streaming docs","user":{"login":"flozi00","id":47894090,"node_id":"MDQ6VXNlcjQ3ODk0MDkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47894090?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flozi00","html_url":"https:\/\/github.com\/flozi00","followers_url":"https:\/\/api.github.com\/users\/flozi00\/followers","following_url":"https:\/\/api.github.com\/users\/flozi00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flozi00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flozi00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flozi00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flozi00\/orgs","repos_url":"https:\/\/api.github.com\/users\/flozi00\/repos","events_url":"https:\/\/api.github.com\/users\/flozi00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flozi00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660335501000,"updated_at":1660477410000,"closed_at":1660474929000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4843\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4843","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4843","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4843.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4843.patch","merged_at":1660474929000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4842","id":1337527764,"node_id":"PR_kwDODunzps49G8CC","number":4842,"title":"Update stackexchange 
license","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660325946000,"updated_at":1660473798000,"closed_at":1660472929000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can for example be seen here: https:\/\/stackoverflow.com\/help\/licensing","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4842\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4842","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4842","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4842.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4842.patch","merged_at":1660472929000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4841","id":1337401243,"node_id":"PR_kwDODunzps49Gf0I","number":4841,"title":"Update ted_talks_iwslt license to include 
ND","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660320892000,"updated_at":1660475722000,"closed_at":1660474822000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Excerpt from the paper's abstract: \"Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community\"","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4841\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4841","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4841","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4841.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4841.patch","merged_at":1660474822000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4840","id":1337342672,"node_id":"I_kwDODunzps5PtjrQ","number":4840,"title":"Dataset Viewer issue for 
darragh\/demo_data_raw3","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["do you have an idea of why it can occur @huggingface\/datasets? The dataset consists of a single parquet file.","Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix\/data\/ChiSig\/\u5510\u5408\u4e50-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ","OK, I now get your error when not streaming.","OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets-server\/blob\/487c39d87998f8d5a35972f1027d6c8e588e622d\/services\/worker\/poetry.lock#L1537-L1543","Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension>\r\n```\r\n\r\nI have forced the right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n  child 0, bytes: binary\r\n  child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nIn [27]: ds\r\nOut[27]: \r\nDataset({\r\n    features: ['images'],\r\n    num_rows: 20\r\n})\r\n```"],"created_at":1660317778000,"updated_at":1662623744000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/darragh\/demo_data_raw3\n\n### Description\n\n```\r\nException: ValueError\r\nMessage: Arrow type extension> does not have a datasets dtype equivalent.\r\n```\r\nreported by @NielsRogge \n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4840\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
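Before forcing features as in the last comment on issue 4840 above, the stored schema can be checked directly; a small diagnostic sketch with pyarrow, using the file name from the thread:

```python
# Read only the schema (no row data) to see whether the `images` column
# is a plain struct of {bytes, path} that `datasets` can map to Image(),
# or an opaque extension type like the one in the reported error.
import pyarrow.parquet as pq

schema = pq.read_schema("train-00000-of-00001.parquet")
print(schema)
print(schema.metadata)  # embedded feature metadata, if any
```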
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4839","id":1337206377,"node_id":"I_kwDODunzps5PtCZp","number":4839,"title":"ImageFolder dataset builder does not read the validation data set if it is named as \"val\"","user":{"login":"akt42","id":98386959,"node_id":"U_kgDOBd1EDw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/98386959?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akt42","html_url":"https:\/\/github.com\/akt42","followers_url":"https:\/\/api.github.com\/users\/akt42\/followers","following_url":"https:\/\/api.github.com\/users\/akt42\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akt42\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akt42\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akt42\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akt42\/orgs","repos_url":"https:\/\/api.github.com\/users\/akt42\/repos","events_url":"https:\/\/api.github.com\/users\/akt42\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akt42\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"akt42","id":98386959,"node_id":"U_kgDOBd1EDw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/98386959?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akt42","html_url":"https:\/\/github.com\/akt42","followers_url":"https:\/\/api.github.com\/users\/akt42\/followers","following_url":"https:\/\/api.github.com\/users\/akt42\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akt42\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akt42\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akt42\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akt42\/orgs","repos_url":"https:\/\/api.github.com\/users\/akt42\/repos","events_url":"https:\/\/api.github.com\/users\/akt42\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akt42\/received_events","type":"User","site_admin":false},"assignees":[{"login":"akt42","id":98386959,"node_id":"U_kgDOBd1EDw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/98386959?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akt42","html_url":"https:\/\/github.com\/akt42","followers_url":"https:\/\/api.github.com\/users\/akt42\/followers","following_url":"https:\/\/api.github.com\/users\/akt42\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akt42\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akt42\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akt42\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akt42\/orgs","repos_url":"https:\/\/api.github.com\/users\/akt42\/repos","events_url":"https:\/\/api.github.com\/users\/akt42\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akt42\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#take"],"created_at":1660310760000,"updated_at":1661854495000,"closed_at":1661854495000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nCurrently, the `'imagefolder'` data set builder in [`load_dataset()`](https:\/\/github.com\/huggingface\/datasets\/blob\/2.4.0\/src\/datasets\/load.py#L1541] ) only [supports](https:\/\/github.com\/huggingface\/datasets\/blob\/6c609a322da994de149b2c938f19439bca99408e\/src\/datasets\/data_files.py#L31) the following names as the validation data set directory name: `[\"validation\", \"valid\", \"dev\"]`. When the validation directory is named as `'val'`, the Data set will not have a validation split. 
I expected this to be a trivial task but ended up spending a lot of time before knowing that only the above names are supported.\r\n\r\nHere's a minimal example of `val` not being recognized:\r\n\r\n```python\r\nimport os\r\nimport numpy as np\r\nimport cv2 \r\nfrom datasets import load_dataset\r\n\r\n# creating a dummy data set with the following structure:\r\n\r\n# ROOT\r\n# | -- train\r\n# | ---- class_1\r\n# | ---- class_2\r\n# | -- val\r\n# | ---- class_1\r\n# | ---- class_2\r\n\r\n\r\nROOT = \"data\"\r\n\r\n\r\nfor which in [\"train\", \"val\"]:\r\n for class_name in [\"class_1\", \"class_2\"]:\r\n dir_name = os.path.join(ROOT, which, class_name)\r\n if not os.path.exists(dir_name):\r\n os.makedirs(dir_name)\r\n for i in range(10):\r\n cv2.imwrite(\r\n os.path.join(dir_name, f\"{i}.png\"),\r\n np.random.random((224, 224))\r\n )\r\n\r\n# trying to create a data set\r\ndataset = load_dataset(\r\n \"imagefolder\", \r\n data_dir=ROOT\r\n)\r\n\r\n>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20\r\n })\r\n})\r\n\r\n# ^ note how the dataset only has a 'train' subset\r\n\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nThe suggestion is to include `\"val\"` to [that list ](https:\/\/github.com\/huggingface\/datasets\/blob\/6c609a322da994de149b2c938f19439bca99408e\/src\/datasets\/data_files.py#L31) as that's a commonly used phrase to name the validation directory. \r\n\r\nAlso, In the documentation, explicitly mention that only such directory names are supported as train\/val\/test directories to avoid confusion.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nIn the documentation, explicitly mention that only such directory names are supported as train\/val\/test directories without adding `val` to the above list.\r\n\r\n\r\n**Additional context**\r\n\r\nA question asked in the forum: [\r\nLoading an imagenet-style image dataset with train\/val directories](https:\/\/discuss.huggingface.co\/t\/loading-an-imagenet-style-image-dataset-with-train-val-directories\/21554)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4839\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4838","id":1337194918,"node_id":"PR_kwDODunzps49F08R","number":4838,"title":"Fix documentation card of adv_glue 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The failing test has nothing to do with this PR:\r\n```\r\nFAILED tests\/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n```"],"created_at":1660310126000,"updated_at":1660558634000,"closed_at":1660557731000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix documentation card of adv_glue dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4838\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4838","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4838","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4838.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4838.patch","merged_at":1660557731000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4837","id":1337079723,"node_id":"PR_kwDODunzps49Fb6l","number":4837,"title":"Add support for CSV metadata files to 
ImageFolder","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?","@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata now). Let me know what you think.\r\n","@lhoestq Thanks for the suggestion! Indeed it makes more sense to use CSV as the default format in the folder-based builders."],"created_at":1660303158000,"updated_at":1661947287000,"closed_at":1661947147000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4814","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4837\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4837","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4837","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4837.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4837.patch","merged_at":1661947147000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4836","id":1337067632,"node_id":"I_kwDODunzps5Psghw","number":4836,"title":"Is it possible to pass multiple links to a split in load 
script?","user":{"login":"sadrasabouri","id":43045767,"node_id":"MDQ6VXNlcjQzMDQ1NzY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43045767?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sadrasabouri","html_url":"https:\/\/github.com\/sadrasabouri","followers_url":"https:\/\/api.github.com\/users\/sadrasabouri\/followers","following_url":"https:\/\/api.github.com\/users\/sadrasabouri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sadrasabouri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sadrasabouri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sadrasabouri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sadrasabouri\/orgs","repos_url":"https:\/\/api.github.com\/users\/sadrasabouri\/repos","events_url":"https:\/\/api.github.com\/users\/sadrasabouri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sadrasabouri\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660302371000,"updated_at":1660302371000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI wanted to use a python loading script in hugging face datasets that use different sources of text (it's somehow a compilation of multiple datasets + my own dataset) based on how `load_dataset` [works](https:\/\/huggingface.co\/docs\/datasets\/loading) I assumed I could do something like bellow in my loading script:\r\n\r\n```python\r\n...\r\n_URL = \"MY_DATASET_URL\/resolve\/main\/data\/\"\r\n_URLS = {\r\n \"train\": [\r\n \"FIRST_URL_TO.txt\",\r\n _URL + \"train-00000-of-00001-676bfebbc8742592.parquet\"\r\n ]\r\n}\r\n...\r\n```\r\nbut when loading the dataset it raises the following error:\r\n```python\r\nFile ~\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 702 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 703 if not downloaded_from_gcs:\r\n--> 704 self._download_and_prepare(\r\n 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n...\r\n 668 if isinstance(a, str):\r\n 669 # Force-cast str subclasses to str (issue #21127)\r\n 670 parts.append(str(a))\r\n\r\nTypeError: expected str, bytes or os.PathLike object, not list\r\n```\r\n\r\n**Describe the solution you'd like**\r\nI believe since it's possible for `load_dataset` to get list of URLs instead of just a URL for `train` split it can be possible here too.\r\n\r\n**Describe alternatives you've considered**\r\nAn alternative solution would be to download all needed datasets locally and `push_to_hub` them all, but since the datasets I'm talking about are huge it's not among my options. 
\r\n\r\n**Additional context**\r\nI think loading `text` beside the `parquet` is a completely different issue, but I believe I can figure it out by proposing a config for my dataset to load each entry of `_URLS['train']` separately, either by `load_dataset(\"text\", ...` or `load_dataset(\"parquet\", ...`.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4836\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4835","id":1336994835,"node_id":"PR_kwDODunzps49FJg9","number":4835,"title":"Fix documentation card of ethos dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660297866000,"updated_at":1660310035000,"closed_at":1660309179000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix documentation card of ethos dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4835\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4835","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4835","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4835.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4835.patch","merged_at":1660309179000},"is_pull_request":true} 
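A note on issue 4836 above: the following is a minimal sketch of the kind of loading script the request implies, assuming plain-text sources only (the builder name, feature schema, and URLs are illustrative, not the reporter's actual dataset; mixing `text` with `parquet` sources is the separate concern the issue itself points out):

```python
import datasets

# Hypothetical URLs standing in for the issue's mix of sources.
_URLS = {
    "train": [
        "https://example.com/first_source.txt",
        "https://example.com/second_source.txt",
    ]
}

class MultiSourceText(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # dl_manager.download() preserves nested structures, so a list of
        # URLs comes back as a list of local file paths for the split.
        paths = dl_manager.download(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepaths": paths["train"]},
            )
        ]

    def _generate_examples(self, filepaths):
        # Concatenate the examples from every downloaded file of the split.
        key = 0
        for path in filepaths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    yield key, {"text": line.rstrip("\n")}
                    key += 1
```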
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4834","id":1336993511,"node_id":"PR_kwDODunzps49FJOu","number":4834,"title":"Fix documentation card of recipe_nlg dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660297779000,"updated_at":1660303698000,"closed_at":1660302820000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix documentation card of recipe_nlg dataset","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4834\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4834","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4834","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4834.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4834.patch","merged_at":1660302820000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4833","id":1336946965,"node_id":"PR_kwDODunzps49E_Nk","number":4833,"title":"Fix missing tags in dataset 
cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660295092000,"updated_at":1660298427000,"closed_at":1660297555000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix missing tags in dataset cards.\r\n\r\nThis PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4833\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4833","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4833","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4833.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4833.patch","merged_at":1660297555000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4832","id":1336727389,"node_id":"PR_kwDODunzps49EQav","number":4832,"title":"Fix tags in dataset 
cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The non-passing tests are caused by other missing information in the dataset cards."],"created_at":1660277483000,"updated_at":1660279315000,"closed_at":1660278444000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix wrong tags in dataset cards.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4832\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4832","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4832","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4832.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4832.patch","merged_at":1660278444000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4831","id":1336199643,"node_id":"PR_kwDODunzps49Cibf","number":4831,"title":"Add oversampling strategies to interleave 
datasets","user":{"login":"ylacombe","id":52246514,"node_id":"MDQ6VXNlcjUyMjQ2NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52246514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ylacombe","html_url":"https:\/\/github.com\/ylacombe","followers_url":"https:\/\/api.github.com\/users\/ylacombe\/followers","following_url":"https:\/\/api.github.com\/users\/ylacombe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ylacombe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ylacombe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ylacombe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ylacombe\/orgs","repos_url":"https:\/\/api.github.com\/users\/ylacombe\/repos","events_url":"https:\/\/api.github.com\/users\/ylacombe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ylacombe\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4831). All of your documentation changes will be reflected on that endpoint.","Hi @lhoestq, \r\nThanks for your review! I've added the requested mention in the documentation and corrected the Error type in `interleave_datasets`. \r\nI've also added test cases in `test_arrow_dataset.py`, which was useful since it allow me to detect an error in the case of an oversampling strategy with no sampling probabilities. \r\nCould you double check this part ? I've commented the code to explain the approach.\r\nThanks!\r\n"],"created_at":1660235091000,"updated_at":1661415669000,"closed_at":1661359567000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hello everyone,\r\nHere is a proposal to improve `interleave_datasets` function.\r\nFollowing Issue #3064, and @lhoestq [comment](https:\/\/github.com\/huggingface\/datasets\/issues\/3064#issuecomment-1022333385), I propose here a code that performs oversampling when interleaving a `Dataset` list. \r\n\r\nI have myself encountered this problem while trying to implement training on a multilingual dataset following a training strategy similar to that of [XLSUM paper](https:\/\/arxiv.org\/pdf\/2106.13822.pdf), a multilingual abstract summary dataset where the multilingual training dataset is created by sampling from the languages following a smoothing strategy. The main idea is to sample languages that have a low number of samples more frequently than other languages.\r\n\r\nAs in Issue #3064, the current default strategy is a undersampling strategy, which stops as soon as a dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once. \r\n\r\nHow does it work in practice:\r\n- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.\r\n- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps trace of the datasets which were out of samples but continues to add them to the new dataset, and stops as soons as every dataset runs out of samples at least once.\r\n- In the other cases, it is supposed to keep the same behaviour as before. 
Except that this time, when probabilities are specified, it really stops AS SOON AS a dataset is out of samples. \r\n\r\nMore on the last sentence:\r\nThe previous example of `interleave_datasets` was:\r\n\r\n    >>> from datasets import Dataset, interleave_datasets\r\n    >>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n    >>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n    >>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n    >>> dataset = interleave_datasets([d1, d2, d3])\r\n    >>> dataset[\"a\"]\r\n    [0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2, 20, 12]\r\n\r\nWith my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives:\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2]\r\nbecause `d1` is already out of samples just after `2` is added.\r\n\r\nExample of the results of applying the different strategies:\r\n\r\n    >>> from datasets import Dataset, interleave_datasets\r\n    >>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n    >>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n    >>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n    >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy=\"all_exhausted\")\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2]\r\n    >>> dataset = interleave_datasets([d1, d2, d3])\r\n    >>> dataset[\"a\"]\r\n    [0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy=\"all_exhausted\")\r\n    >>> dataset[\"a\"]\r\n    [0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n    >>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n    >>> d2 = Dataset.from_dict({\"a\": [10, 11, 12, 13]})\r\n    >>> d3 = Dataset.from_dict({\"a\": [20, 21, 22, 23, 24]})\r\n    >>> dataset = interleave_datasets([d1, d2, d3])\r\n    >>> dataset[\"a\"]\r\n    [0, 10, 20, 1, 11, 21, 2, 12, 22]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy=\"all_exhausted\")\r\n    >>> dataset[\"a\"]\r\n    [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2]\r\n    >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy=\"all_exhausted\")\r\n    >>> dataset[\"a\"]\r\n    [10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]\r\n\r\n**Final note:** I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. 
For example, if a very large dataset has a low probability of being sampled, the final dataset may be several times the size of that large dataset.\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4831\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4831","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4831","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4831.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4831.patch","merged_at":1661359567000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4830","id":1336177937,"node_id":"PR_kwDODunzps49Cdro","number":4830,"title":"Fix task tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The non-passing tests are caused by other missing information in the dataset 
cards."],"created_at":1660233966000,"updated_at":1660235847000,"closed_at":1660234980000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4830\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4830","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4830","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4830.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4830.patch","merged_at":1660234980000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4829","id":1336068068,"node_id":"I_kwDODunzps5Posfk","number":4829,"title":"Misalignment between card tag validation and docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)"],"created_at":1660229085000,"updated_at":1660229195000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nAs pointed out in other issue: https:\/\/github.com\/huggingface\/datasets\/pull\/4827#discussion_r943536284\r\nthe validation of the dataset card tags is not aligned with its documentation: e.g.\r\n- implementation: `license: List[str]`\r\n- docs: `license: Union[str, List[str]]`\r\n\r\nThey should be aligned.\r\n\r\nCC: 
@julien-c \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4829\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4828","id":1336040168,"node_id":"PR_kwDODunzps49B_vb","number":4828,"title":"Support PIL Image objects in `add_item`\/`add_column`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4828). All of your documentation changes will be reflected on that endpoint."],"created_at":1660227945000,"updated_at":1661182703000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4796 \r\n\r\nPS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer the complex types (only `Image` currently) in nested arrays (e.g. 
`[[pil_image], [pil_image, pil_image]]` or `[{\"img\": pil_image}]`), but I plan to address this in a separate PR.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4828\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4828","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4828","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4828.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4828.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4827","id":1335994312,"node_id":"PR_kwDODunzps49B1zi","number":4827,"title":"Add license metadata to pg19","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660225940000,"updated_at":1660230063000,"closed_at":1660229198000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported over email by Roy Rijkers","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4827\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4827","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4827","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4827.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4827.patch","merged_at":1660229198000},"is_pull_request":true} 
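For context on PR 4828 above (support for PIL Image objects in `add_item`\/`add_column`): the PR is still open in this snapshot, so the following is a sketch of the intended usage once it lands, not a released API; the expected type inference is taken from the PR description.

```python
from datasets import Dataset
from PIL import Image

ds = Dataset.from_dict({"label": [0, 1]})
imgs = [Image.new("RGB", (8, 8), color) for color in ("red", "blue")]

# Per the PR, passing PIL objects directly should produce a column
# typed as an Image feature rather than raw encoded bytes.
ds = ds.add_column("img", imgs)
print(ds.features)  # expected to show an Image feature for 'img'
```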
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4826","id":1335987583,"node_id":"PR_kwDODunzps49B0V3","number":4826,"title":"Fix language tags in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The non-passing tests are caused by other missing information in the dataset cards."],"created_at":1660225634000,"updated_at":1660227468000,"closed_at":1660226592000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4826\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4826","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4826","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4826.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4826.patch","merged_at":1660226592000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4825","id":1335856882,"node_id":"PR_kwDODunzps49BYWL","number":4825,"title":"[Windows] Fix Access Denied when using 
os.rename()","user":{"login":"DougTrajano","id":8703022,"node_id":"MDQ6VXNlcjg3MDMwMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8703022?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DougTrajano","html_url":"https:\/\/github.com\/DougTrajano","followers_url":"https:\/\/api.github.com\/users\/DougTrajano\/followers","following_url":"https:\/\/api.github.com\/users\/DougTrajano\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DougTrajano\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DougTrajano\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DougTrajano\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DougTrajano\/orgs","repos_url":"https:\/\/api.github.com\/users\/DougTrajano\/repos","events_url":"https:\/\/api.github.com\/users\/DougTrajano\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DougTrajano\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?","> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be completely replaced by `shutil.move()`.","AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)","> AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)\r\n\r\nalright, let me change the PR then.","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4825). 
All of your documentation changes will be reflected on that endpoint.","Hi @lhoestq, looks like one of the tests failed, but it's not related to this change; do I need to do something on my side?"],"created_at":1660219035000,"updated_at":1661346547000,"closed_at":1661346547000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"In this PR, we are including an additional step when `os.rename()` raises a PermissionError.\r\n\r\nBasically, we will use `shutil.move()` on the temp files.\r\n\r\nFix #2937 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4825\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4825","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4825","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4825.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4825.patch","merged_at":1661346547000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4824","id":1335826639,"node_id":"PR_kwDODunzps49BR5H","number":4824,"title":"Fix titles in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The non-passing tests are caused by other missing information in the dataset cards."],"created_at":1660217268000,"updated_at":1660225571000,"closed_at":1660222609000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix all the titles in the dataset cards, so that they conform to the required 
format.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4824\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4824","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4824","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4824.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4824.patch","merged_at":1660222609000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4823","id":1335687033,"node_id":"PR_kwDODunzps49A0O_","number":4823,"title":"Update data URL in mkqa dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660209373000,"updated_at":1660211510000,"closed_at":1660210672000,"author_association":"MEMBER","active_lock_reason":null,"body":"Update data URL in mkqa dataset.\r\n\r\nFix #4817.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4823\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4823","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4823","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4823.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4823.patch","merged_at":1660210671000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4822","id":1335675352,"node_id":"I_kwDODunzps5PnMnY","number":4822,"title":"Moving dataset between namespaces breaks dataset viewer","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Let's keep open for now. We should try to reproduce"],"created_at":1660208730000,"updated_at":1663358589000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI moved a dataset from my own namespace to an org and that broke the dataset viewer. 
To fix it I had to manually edit the `dataset_info.json` file and change the first key in the json from `username--datasetname` to `orgname--datasetname`\r\n\r\n## Steps to reproduce the bug\r\nWhat I did was: \r\n1- Upload a dataset to my own namespace using `push_to_hub`\r\n2- Move the dataset from my namespace to an org using the web interface.\r\n\r\n## Expected results\r\nFor the file to be changed accordingly.\r\n\r\n## Actual results\r\nBroken dataset viewer.\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-4.15.0-189-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.5\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4822\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4821","id":1335664588,"node_id":"PR_kwDODunzps49AvaE","number":4821,"title":"Fix train_test_split docs","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660208145000,"updated_at":1660211969000,"closed_at":1660211140000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be 
updated.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4821\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4821","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4821","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4821.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4821.patch","merged_at":1660211140000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4820","id":1335117132,"node_id":"I_kwDODunzps5PlEVM","number":4820,"title":"Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.","user":{"login":"talhaanwarch","id":37379131,"node_id":"MDQ6VXNlcjM3Mzc5MTMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37379131?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/talhaanwarch","html_url":"https:\/\/github.com\/talhaanwarch","followers_url":"https:\/\/api.github.com\/users\/talhaanwarch\/followers","following_url":"https:\/\/api.github.com\/users\/talhaanwarch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/talhaanwarch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/talhaanwarch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/talhaanwarch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/talhaanwarch\/orgs","repos_url":"https:\/\/api.github.com\/users\/talhaanwarch\/repos","events_url":"https:\/\/api.github.com\/users\/talhaanwarch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/talhaanwarch\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fixed by installing either resampy<3 or resampy>=4"],"created_at":1660160553000,"updated_at":1660161190000,"closed_at":1660161190000,"author_association":"NONE","active_lock_reason":null,"body":"Hi, when i try to run prepare_dataset function in [fine tuning ASR tutorial 4](https:\/\/colab.research.google.com\/github\/patrickvonplaten\/notebooks\/blob\/master\/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) , i got this error.\r\nI got this error\r\nTerminating: fork() called from a process already using GNU OpenMP, this is unsafe.\r\nThere is no other logs available, so i have no clue what is the cause of it.\r\n```\r\n\r\ndef prepare_dataset(batch):\r\n audio = batch[\"path\"]\r\n # batched output is \"un-batched\"\r\n batch[\"input_values\"] = processor(audio[\"array\"], 
sampling_rate=audio[\"sampling_rate\"]).input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n with processor.as_target_processor():\r\n batch[\"labels\"] = processor(batch[\"text\"]).input_ids\r\n return batch\r\n\r\ndata = data.map(prepare_dataset, remove_columns=data.column_names[\"train\"],\r\n num_proc=4)\r\n```\r\n\r\n\r\nSpecify the actual results or traceback.\r\nThere is no traceback except\r\n`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4820\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4819","id":1335064449,"node_id":"PR_kwDODunzps48-xc6","number":4819,"title":"Add missing language tags to resources","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660158402000,"updated_at":1660160749000,"closed_at":1660159935000,"author_association":"MEMBER","active_lock_reason":null,"body":"Add missing language tags to resources, required by existing datasets on 
GitHub.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4819\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4819","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4819","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4819.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4819.patch","merged_at":1660159935000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4818","id":1334941810,"node_id":"PR_kwDODunzps48-W7a","number":4818,"title":"Add add cc-by-sa-2.5 license tag","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4818). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1660151919000,"updated_at":1660154101000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"- [ ] add it to moon-landing\r\n- [ ] add it to hub-docs ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4818\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4818","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4818","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4818.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4818.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4817","id":1334572163,"node_id":"I_kwDODunzps5Pi_SD","number":4817,"title":"Outdated Link for mkqa Dataset","user":{"login":"liaeh","id":52380283,"node_id":"MDQ6VXNlcjUyMzgwMjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52380283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/liaeh","html_url":"https:\/\/github.com\/liaeh","followers_url":"https:\/\/api.github.com\/users\/liaeh\/followers","following_url":"https:\/\/api.github.com\/users\/liaeh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/liaeh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/liaeh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/liaeh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/liaeh\/orgs","repos_url":"https:\/\/api.github.com\/users\/liaeh\/repos","events_url":"https:\/\/api.github.com\/users\/liaeh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/liaeh\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @liaeh, we are investigating this. "],"created_at":1660135545000,"updated_at":1660210672000,"closed_at":1660210672000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https:\/\/github.com\/apple\/ml-mkqa\/blob\/main\/dataset\/mkqa.jsonl.gz instead of https:\/\/github.com\/apple\/ml-mkqa\/raw\/master\/dataset\/mkqa.jsonl.gz (master branch has been renamed to main).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"mkqa\")\r\n```\r\n\r\n## Expected results\r\ndownloads the dataset\r\n\r\n## Actual results\r\n```python\r\nDownloading builder script:\r\n4.79k\/? [00:00<00:00, 201kB\/s]\r\nDownloading metadata:\r\n13.2k\/? 
[00:00<00:00, 504kB\/s]\r\n\r\nDownloading and preparing dataset mkqa\/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to \/home\/lhr\/.cache\/huggingface\/datasets\/mkqa\/mkqa\/1.0.0\/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...\r\n\r\nDownloading data files: 0%\r\n0\/1 [00:00<?, ?it\/s]\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\n 1 from datasets import load_dataset\r\n----> 3 dataset = load_dataset(\"mkqa\")\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1745 # Download and prepare data\r\n-> 1746 builder_instance.download_and_prepare(\r\n 1747 download_config=download_config,\r\n 1748 download_mode=download_mode,\r\n 1749 ignore_verifications=ignore_verifications,\r\n 1750 try_from_hf_gcs=try_from_hf_gcs,\r\n 1751 use_auth_token=use_auth_token,\r\n 1752 )\r\n 1754 # Build dataset for splits\r\n 1755 keep_in_memory = (\r\n 1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1757 )\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 702 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 703 if not downloaded_from_gcs:\r\n--> 704 self._download_and_prepare(\r\n 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n 707 # Sync info\r\n 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)\r\n 1226 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 769 split_dict = SplitDict(dataset_name=self.name)\r\n 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 773 # Checksums verification\r\n 774 if verify_infos and dl_manager.record_checksums:\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mkqa\/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d\/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)\r\n 128 # download and extract URLs\r\n 129 urls_to_download = _URLS\r\n--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n 132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]})]\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)\r\n 415 def download_and_extract(self, url_or_urls):\r\n 416 
\"\"\"Download and extract given url_or_urls.\r\n 417 \r\n 418 Is roughly equivalent to:\r\n (...)\r\n 429 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 430 \"\"\"\r\n--> 431 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py:309, in DownloadManager.download(self, url_or_urls)\r\n 306 download_func = partial(self._download, download_config=download_config)\r\n 308 start_time = datetime.now()\r\n--> 309 downloaded_path_or_paths = map_nested(\r\n 310 download_func,\r\n 311 url_or_urls,\r\n 312 map_tuple=True,\r\n 313 num_proc=download_config.num_proc,\r\n 314 disable_tqdm=not is_progress_bar_enabled(),\r\n 315 desc=\"Downloading data files\",\r\n 316 )\r\n 317 duration = datetime.now() - start_time\r\n 318 logger.info(f\"Downloading took {duration.total_seconds() \/\/ 60} min\")\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)\r\n 391 num_proc = 1\r\n 392 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 393 mapped = [\r\n 394 _single_map_nested((function, obj, types, None, True, None))\r\n 395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 396 ]\r\n 397 else:\r\n 398 split_kwds = [] # We organize the splits ourselve (contiguous splits)\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py:394, in (.0)\r\n 391 num_proc = 1\r\n 392 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 393 mapped = [\r\n--> 394 _single_map_nested((function, obj, types, None, True, None))\r\n 395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 396 ]\r\n 397 else:\r\n 398 split_kwds = [] # We organize the splits ourselve (contiguous splits)\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py:330, in _single_map_nested(args)\r\n 328 # Singleton first to spare some computation\r\n 329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 330 return function(data_struct)\r\n 332 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 333 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/download\/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)\r\n 332 if is_relative_path(url_or_filename):\r\n 333 # append the relative path to the base_path\r\n 334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 335 return cached_path(url_or_filename, download_config=download_config)\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 181 url_or_filename = str(url_or_filename)\r\n 183 if is_remote_url(url_or_filename):\r\n 184 # URL, so get it from the cache (downloading if necessary)\r\n--> 185 output_path = get_from_cache(\r\n 186 url_or_filename,\r\n 187 cache_dir=cache_dir,\r\n 188 force_download=download_config.force_download,\r\n 189 proxies=download_config.proxies,\r\n 190 resume_download=download_config.resume_download,\r\n 191 user_agent=download_config.user_agent,\r\n 192 local_files_only=download_config.local_files_only,\r\n 193 
use_etag=download_config.use_etag,\r\n 194 max_retries=download_config.max_retries,\r\n 195 use_auth_token=download_config.use_auth_token,\r\n 196 ignore_url_params=download_config.ignore_url_params,\r\n 197 download_desc=download_config.download_desc,\r\n 198 )\r\n 199 elif os.path.exists(url_or_filename):\r\n 200 # File, and it exists.\r\n 201 output_path = url_or_filename\r\n\r\nFile ~\/repos\/punc-cap\/venv\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)\r\n 525 raise FileNotFoundError(\r\n 526 f\"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been\"\r\n 527 \" disabled. To enable file online look-ups, set 'local_files_only' to False.\"\r\n 528 )\r\n 529 elif response is not None and response.status_code == 404:\r\n--> 530 raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\n 531 _raise_if_offline_mode_is_enabled(f\"Tried to reach {url}\")\r\n 532 if head_error is not None:\r\n\r\nFileNotFoundError: Couldn't find file at https:\/\/github.com\/apple\/ml-mkqa\/raw\/master\/dataset\/mkqa.jsonl.gz\r\n\r\n\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4817\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4816","id":1334099454,"node_id":"PR_kwDODunzps487kpq","number":4816,"title":"Update version of opus_paracrawl 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660109984000,"updated_at":1660314749000,"closed_at":1660313876000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates OPUS ParaCrawl from 7.1 to 9 version.\r\n\r\n\r\nFix #4815.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4816\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4816","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4816","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4816.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4816.patch","merged_at":1660313876000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4815","id":1334078303,"node_id":"I_kwDODunzps5PhGtf","number":4815,"title":"Outdated loading script for OPUS ParaCrawl 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1660108354000
,"updated_at":1660313877000,"closed_at":1660313877000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nOur loading script for OPUS ParaCrawl loads its 7.1 version. Current existing version is 9.\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4815\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4814","id":1333356230,"node_id":"I_kwDODunzps5PeWbG","number":4814,"title":"Support CSV as metadata file format in AudioFolder\/ImageFolder","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1660055809000,"updated_at":1661947148000,"closed_at":1661947148000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Requested here: https:\/\/discuss.huggingface.co\/t\/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach\/21004. 
CSV is also used in AutoTrain for specifying metadata in image datasets.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4814\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4813","id":1333287756,"node_id":"PR_kwDODunzps48446r","number":4813,"title":"Fix loading example in opus dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660052858000,"updated_at":1660067535000,"closed_at":1660066698000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- fixes the examples to load the datasets, with the corrected dataset name, in their dataset cards for:\r\n - opus_dgt\r\n - opus_paracrawl\r\n - opus_wikipedia\r\n- fixes their dataset cards with the missing required information: title, data instances\/fields\/splits\r\n- enumerates the supported languages\r\n- adds a missing citation reference for opus_wikipedia\r\n\r\nRelated to:\r\n- 
#4806","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4813\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4813","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4813","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4813.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4813.patch","merged_at":1660066698000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4812","id":1333051730,"node_id":"PR_kwDODunzps484Fzq","number":4812,"title":"Fix bug in function validate_type for Python >= 3.9","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1660041162000,"updated_at":1660311683000,"closed_at":1660310824000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix `validate_type` function, so that it uses `get_origin` instead. 
This makes the function forward compatible.\r\n\r\nThis fixes #4811 because:\r\n```python\r\nIn [4]: typing.Optional[str]\r\nOut[4]: typing.Optional[str]\r\n\r\nIn [5]: get_origin(typing.Optional[str])\r\nOut[5]: typing.Union\r\n```\r\n\r\nFix #4811.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4812\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4812","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4812","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4812.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4812.patch","merged_at":1660310824000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4811","id":1333043421,"node_id":"I_kwDODunzps5PdKDd","number":4811,"title":"Bug in function validate_type for Python >= 3.9","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1660040721000,"updated_at":1660310825000,"closed_at":1660310825000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nThe function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.\r\n```python\r\nIn [4]: typing.Optional[str]\r\nOut[4]: typing.Union[str, NoneType]\r\n```\r\n\r\nHowever, this is not the case for Python 3.9:\r\n```python\r\nIn [3]: typing.Optional[str]\r\nOut[3]: typing.Optional[str]\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4811\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4810","id":1333038702,"node_id":"PR_kwDODunzps484C9l","number":4810,"title":"hellaswag: add non-empty description to fix metadata issue","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4810). All of your documentation changes will be reflected on that endpoint.","Are the `metadata JSON file` not on their way to deprecation? 
\ud83d\ude06\ud83d\ude07\r\n\r\nIMO, more generally than this particular PR, the contribution process should be simplified now that many validation checks happen on the hub side.\r\n\r\nKeeping this open in the meantime to get more potential feedback!"],"created_at":1660040474000,"updated_at":1660227062000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4810\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4810","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4810","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4810.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4810.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4809","id":1332842747,"node_id":"PR_kwDODunzps483Y4h","number":4809,"title":"Complete the mlqa dataset card","user":{"login":"eldhoittangeorge","id":7940237,"node_id":"MDQ6VXNlcjc5NDAyMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7940237?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eldhoittangeorge","html_url":"https:\/\/github.com\/eldhoittangeorge","followers_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/followers","following_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/orgs","repos_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/repos","events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https:\/\/github.com\/huggingface\/datasets\/runs\/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.","@eldhoittangeorge, thanks again for all the fixes. 
Just a minor one before we can merge this PR: https:\/\/github.com\/huggingface\/datasets\/runs\/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https:\/\/github.com\/huggingface\/datasets\/tree\/main\/src\/datasets\/utils\/resources\/creators.json\r\n```","> \r\n\r\nThanks, I updated the file. \r\nA small suggestion: can you mention this link https:\/\/github.com\/huggingface\/datasets\/tree\/main\/src\/datasets\/utils\/resources\/ on the contribution page, so that others will know the acceptable values for the tags."],"created_at":1660030686000,"updated_at":1660062381000,"closed_at":1660051603000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I fixed issue #4808.\r\n\r\nDetails of the PR:\r\n- Added languages included in the dataset. \r\n- Added task id and task category. \r\n- Updated the citation information. \r\n\r\nFix #4808.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4809\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4809","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4809","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4809.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4809.patch","merged_at":1660051603000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4808","id":1332840217,"node_id":"I_kwDODunzps5PcYcZ","number":4808,"title":"Add more information to the dataset card of mlqa dataset 
","user":{"login":"eldhoittangeorge","id":7940237,"node_id":"MDQ6VXNlcjc5NDAyMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7940237?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eldhoittangeorge","html_url":"https:\/\/github.com\/eldhoittangeorge","followers_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/followers","following_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/orgs","repos_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/repos","events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"eldhoittangeorge","id":7940237,"node_id":"MDQ6VXNlcjc5NDAyMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7940237?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eldhoittangeorge","html_url":"https:\/\/github.com\/eldhoittangeorge","followers_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/followers","following_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/orgs","repos_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/repos","events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/received_events","type":"User","site_admin":false},"assignees":[{"login":"eldhoittangeorge","id":7940237,"node_id":"MDQ6VXNlcjc5NDAyMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7940237?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eldhoittangeorge","html_url":"https:\/\/github.com\/eldhoittangeorge","followers_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/followers","following_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/orgs","repos_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/repos","events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eldhoittangeorge\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#self-assign","Fixed by:\r\n- 
#4809"],"created_at":1660030542000,"updated_at":1660052003000,"closed_at":1660052003000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4808\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4807","id":1332784110,"node_id":"PR_kwDODunzps483MSH","number":4807,"title":"document fix in opus_gnome dataset","user":{"login":"gojiteji","id":38291975,"node_id":"MDQ6VXNlcjM4MjkxOTc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38291975?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gojiteji","html_url":"https:\/\/github.com\/gojiteji","followers_url":"https:\/\/api.github.com\/users\/gojiteji\/followers","following_url":"https:\/\/api.github.com\/users\/gojiteji\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gojiteji\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gojiteji\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gojiteji\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gojiteji\/orgs","repos_url":"https:\/\/api.github.com\/users\/gojiteji\/repos","events_url":"https:\/\/api.github.com\/users\/gojiteji\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gojiteji\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Duplicate:\r\n- #4806 "],"created_at":1660027093000,"updated_at":1660030083000,"closed_at":1660030083000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I fixed a issue #4805.\r\n\r\nI changed `\"gnome\"` to `\"opus_gnome\"` in[ README.md](https:\/\/github.com\/huggingface\/datasets\/tree\/main\/datasets\/opus_gnome#dataset-summary).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4807\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4807","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4807","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4807.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4807.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4806","id":1332664038,"node_id":"PR_kwDODunzps482yiS","number":4806,"title":"Fix opus_gnome dataset card","user":{"login":"gojiteji","id":38291975,"node_id":"MDQ6VXNlcjM4MjkxOTc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38291975?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gojiteji","html_url":"https:\/\/github.com\/gojiteji","followers_url":"https:\/\/api.github.com\/users\/gojiteji\/followers","following_url":"https:\/\/api.github.com\/users\/gojiteji\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gojiteji\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gojiteji\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gojiteji\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gojiteji\/orgs","repos_url":"https:\/\/api.github.com\/users\/gojiteji\/repos","events_url":"https:\/\/api.github.com\/users\/gojiteji\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gojiteji\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ","@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.","Both are identical. And you can push additional commits to this branch.","I see. Thank you for your comment.","Anyway, @gojiteji thanks for your contribution and this fix.","Once you have modified the `opus_gnome` dataset card, our Continuous Integration test suite performs some tests on it that make some additional requirements: the errors that appear have nothing to do with your contribution, but with these additional quality requirements.","> the errors that appear have nothing to do with your contribution, but with these additional quality requirements.\r\n\r\nIs there anything I should do?","If you would like to address them as well in this PR, it would be awesome: https:\/\/github.com\/huggingface\/datasets\/runs\/7741104780?check_suite_focus=true\r\n","These are the 2 error messages:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `\/home\/runner\/work\/datasets\/datasets\/datasets\/opus_gnome\/README.md`:\r\nE -\tNo first-level heading starting with `Dataset Card for` found in README. 
Skipping further validation for this README.\r\n\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language':\r\nE \t['ara', 'cat', 'foo', 'gr', 'nqo', 'tmp'] are not registered tags for 'language', reference at https:\/\/github.com\/huggingface\/datasets\/tree\/main\/src\/datasets\/utils\/resources\/languages.json\r\n```","In principle there are 2 errors:\r\n\r\nThe first one says, the title of the README does not start with `Dataset Card for`:\r\n- The README title is: `# Dataset Card Creation Guide`\r\n- According to the [template here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/templates\/README.md), it should be: `# Dataset Card for [Dataset Name]`","In relation with the languages:\r\n- you should check whether the language codes are properly spelled\r\n- and if so, adding them to our `languages.json` file, so that they are properly validated","Thank you for the detailed information. I'm checking it now.","```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `\/home\/runner\/work\/datasets\/datasets\/datasets\/opus_gnome\/README.md`:\r\nE -\tExpected some content in section `Data Instances` but it is empty.\r\nE -\tExpected some content in section `Data Fields` but it is empty.\r\nE -\tExpected some content in section `Data Splits` but it is empty.\r\n```","I added `ara`, `cat`, `gr`, and `nqo` to `languages.json` and removed `foo` and `tmp` from `README.md`.\r\nI also write Data Instances, Data Fields, and Data Splits in `README.md`.","Thanks for your investigation and fixes to the dataset card structure! I'm just making some suggestions before merging this PR: see below.","Should I create PR for `config.json` to add ` ara cat gr nqo` first?\r\nI think I can pass this failing after that.\r\n\r\nOr removing `ara, cat, gr, nqo, foo, tmp` from `README.md`. 
","Once you address these issues, all the CI tests will pass.","Once the remaining changes are addressed (see unresolved above), we will be able to merge this:\r\n- [ ] Remove \"ara\" from README\r\n- [ ] Remove \"cat\" from README\r\n- [ ] Remove \"gr\" from README\r\n- [ ] Replace \"tmp\" with \"tyj\" in README\r\n- [ ] Add \"tyj\" to `languages.json`:\r\n ```\r\n \"tyj\": \"Tai Do; Tai Yo\",","I did the five changes."],"created_at":1660016415000,"updated_at":1660046806000,"closed_at":1660045924000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I fixed a issue #4805.\r\n\r\nI changed `\"gnome\"` to `\"opus_gnome\"` in[ README.md](https:\/\/github.com\/huggingface\/datasets\/tree\/main\/datasets\/opus_gnome#dataset-summary).\r\n\r\nFix #4805","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":1,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4806\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4806","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4806","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4806.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4806.patch","merged_at":1660045924000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4805","id":1332653531,"node_id":"I_kwDODunzps5Pbq3b","number":4805,"title":"Wrong example in opus_gnome dataset card","user":{"login":"gojiteji","id":38291975,"node_id":"MDQ6VXNlcjM4MjkxOTc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38291975?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gojiteji","html_url":"https:\/\/github.com\/gojiteji","followers_url":"https:\/\/api.github.com\/users\/gojiteji\/followers","following_url":"https:\/\/api.github.com\/users\/gojiteji\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gojiteji\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gojiteji\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gojiteji\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gojiteji\/orgs","repos_url":"https:\/\/api.github.com\/users\/gojiteji\/repos","events_url":"https:\/\/api.github.com\/users\/gojiteji\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gojiteji\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1660015287000,"updated_at":1660045925000,"closed_at":1660045925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI found that [the example on opus_gone dataset ](https:\/\/github.com\/huggingface\/datasets\/tree\/main\/datasets\/opus_gnome#dataset-summary) doesn't work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n load_dataset(\"gnome\", lang1=\"it\", lang2=\"pl\")\r\n```\r\n`\"gnome\"` should be `\"opus_gnome\"`\r\n\r\n## Expected results\r\n```bash\r\n100%\r\n1\/1 [00:00<00:00, 42.09it\/s]\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'translation'],\r\n num_rows: 8368\r\n })\r\n})\r\n```\r\n\r\n## Actual results\r\n```bash\r\n Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/datasets\/gnome\/gnome.py\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4805\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4804","id":1332630358,"node_id":"I_kwDODunzps5PblNW","number":4804,"title":"streaming dataset with concatenating splits raises an error","user":{"login":"Bing-su","id":37621276,"node_id":"MDQ6VXNlcjM3NjIxMjc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37621276?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bing-su","html_url":"https:\/\/github.com\/Bing-su","followers_url":"https:\/\/api.github.com\/users\/Bing-su\/followers","following_url":"https:\/\/api.github.com\/users\/Bing-su\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bing-su\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bing-su\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bing-su\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bing-su\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bing-su\/repos","events_url":"https:\/\/api.github.com\/users\/Bing-su\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bing-su\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Only the name of a particular split (\"train\", \"test\", ...) is supported as a split pattern if `streaming=True`. We plan to address this limitation soon."],"created_at":1660012916000,"updated_at":1660740156000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nstreaming dataset with concatenating splits raises an error\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# no error\r\nrepo = \"nateraw\/ade20k-tiny\"\r\ndataset = load_dataset(repo, split=\"train+validation\")\r\n```\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# error\r\nrepo = \"nateraw\/ade20k-tiny\"\r\ndataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n```\r\n\r\n```sh\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 3 # error\r\n 4 repo = \"nateraw\/ade20k-tiny\"\r\n----> 5 dataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n\r\n1 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py](https:\/\/localhost:8080\/#) in as_streaming_dataset(self, split, base_path)\r\n 1030 splits_generator = splits_generators[split]\r\n 1031 else:\r\n-> 1032 raise ValueError(f\"Bad split: {split}. Available splits: {list(splits_generators)}\")\r\n 1033 \r\n 1034 # Create a dataset for each of the given splits\r\n\r\nValueError: Bad split: train+validation. Available splits: ['validation', 'train']\r\n```\r\n\r\n[Colab](https:\/\/colab.research.google.com\/drive\/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)\r\n\r\n## Expected results\r\nload successfully or throws an error saying it is not supported.\r\n\r\n## Actual results\r\nabove\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.0\r\n- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)\r\n- Python version: 3.9.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4804\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4803","id":1332079562,"node_id":"I_kwDODunzps5PZevK","number":4803,"title":"Support `pipeline` argument in inspect.py 
functions","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1659974484000,"updated_at":1659974484000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nThe `wikipedia` dataset requires a `pipeline` argument to build the list of splits:\r\n\r\nhttps:\/\/huggingface.co\/datasets\/wikipedia\/blob\/main\/wikipedia.py#L937\r\n\r\nBut this is currently not supported in `get_dataset_config_info`:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/main\/src\/datasets\/inspect.py#L373-L375\r\n\r\nwhich is called by other functions, e.g. 
`get_dataset_split_names`.\r\n\r\n**Additional context**\r\n\r\nThe dataset viewer is not working out-of-the-box on `wikipedia` for this reason:\r\n\r\nhttps:\/\/huggingface.co\/datasets\/wikipedia\/viewer\r\n\r\n\"Capture\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4803\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4802","id":1331676691,"node_id":"I_kwDODunzps5PX8YT","number":4802,"title":"`with_format` behavior is inconsistent on different datasets","user":{"login":"fxmarty","id":9808326,"node_id":"MDQ6VXNlcjk4MDgzMjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9808326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fxmarty","html_url":"https:\/\/github.com\/fxmarty","followers_url":"https:\/\/api.github.com\/users\/fxmarty\/followers","following_url":"https:\/\/api.github.com\/users\/fxmarty\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fxmarty\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fxmarty\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fxmarty\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fxmarty\/orgs","repos_url":"https:\/\/api.github.com\/users\/fxmarty\/repos","events_url":"https:\/\/api.github.com\/users\/fxmarty\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fxmarty\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! You can get a `torch.Tensor` if you do the following:\r\n```python\r\nraw = load_dataset(\"beans\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\npreprocessor = AutoFeatureExtractor.from_pretrained(\"nateraw\/vit-base-beans\")\r\n\r\nfrom datasets import Array3D\r\nfeatures = raw.features.copy()\r\nfeatures[\"pixel_values\"] = datasets.Array3D(shape=(3, 224, 224), dtype=\"float32\")\r\n\r\ndef preprocess_func(examples):\r\n imgs = [img.convert(\"RGB\") for img in examples[\"image\"]]\r\n return preprocessor(imgs)\r\n\r\ndata = raw.map(preprocess_func, batched=True, features=features)\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"pixel_values\"])\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n```\r\n\r\nThe reason for this \"inconsistency\" in the default case is the way PyArrow infers the type of multi-dim arrays (in this case, the `pixel_values` column). 
If the type is not specified manually, PyArrow assumes it is a dynamic-length sequence (it needs to know the type before writing the first batch to a cache file, and it can't be sure the array is fixed ahead of time; `ArrayXD` is our way of telling that the dims are fixed), so it already fails to convert the corresponding array to NumPy properly (you get an array of `np.object` arrays). And `with_format(\"torch\")` replaces NumPy arrays with Torch tensors, so this bad formatting propagates."],"created_at":1659955294000,"updated_at":1660063749000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI found a case where `with_format` does not transform the dataset to the requested format.\r\n\r\n## Steps to reproduce the bug\r\nRun:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoFeatureExtractor\r\nfrom datasets import load_dataset\r\n\r\nraw = load_dataset(\"glue\", \"sst2\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"philschmid\/tiny-bert-sst2-distilled\")\r\n\r\ndef preprocess_func(examples):\r\n return tokenizer(examples[\"sentence\"], padding=True, max_length=256, truncation=True)\r\n\r\ndata = raw.map(preprocess_func, batched=True)\r\n\r\nprint(type(data[0][\"input_ids\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"input_ids\"])\r\n\r\nprint(type(data[0][\"input_ids\"]))\r\n```\r\n\r\nprinting as expected:\r\n\r\n```python\r\n\r\n\r\n```\r\n\r\nThen run:\r\n\r\n```python\r\nraw = load_dataset(\"beans\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\npreprocessor = AutoFeatureExtractor.from_pretrained(\"nateraw\/vit-base-beans\")\r\n\r\ndef preprocess_func(examples):\r\n imgs = [img.convert(\"RGB\") for img in examples[\"image\"]]\r\n return preprocessor(imgs)\r\n\r\ndata = raw.map(preprocess_func, batched=True)\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"pixel_values\"])\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n```\r\n\r\nPrinting, unexpectedly\r\n\r\n```python\r\n\r\n\r\n```\r\n\r\n## Expected results\r\n`with_format` should transform into the requested format; it's not the case.\r\n\r\n## Actual results\r\n`type(data[0][\"pixel_values\"])` should be `torch.Tensor` in the example above\r\n\r\n## Environment info\r\n\r\n- `datasets` version: dev version, commit 44af3fafb527302282f6b6507b952de7435f0979\r\n- Platform: Linux\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4802\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4801","id":1331337418,"node_id":"PR_kwDODunzps48yTYu","number":4801,"title":"Fix fine classes in trec 
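A small self-contained illustration of this inference behaviour, with a made-up column name and shape (the exact formatted type may vary by `datasets` version):

```python
# Minimal sketch of the inference issue: without an explicit ArrayXD
# feature, a fixed-shape array column is stored as nested variable-
# length lists, so the torch formatter cannot return one tensor per
# row; declaring Array2D fixes the dims and restores tensors.
import numpy as np
from datasets import Array2D, Dataset, Features

data = {"x": [np.ones((2, 3), dtype="float32") for _ in range(4)]}

plain = Dataset.from_dict(data).with_format("torch", columns=["x"])
print(type(plain[0]["x"]))  # typically a list, per the report above

features = Features({"x": Array2D(shape=(2, 3), dtype="float32")})
typed = Dataset.from_dict(data, features=features).with_format("torch", columns=["x"])
print(type(typed[0]["x"]))  # torch.Tensor of shape (2, 3)
```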
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659935462000,"updated_at":1661185754000,"closed_at":1661184855000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- replaces the fine labels, so that there are 50 instead of 47\r\n- once more labels are added, all they (fine and coarse) have been re-ordered, so that they align with the order in: https:\/\/cogcomp.seas.upenn.edu\/Data\/QA\/QC\/definition.html\r\n- the feature names have been fixed: `fine_label` instead of `label-fine`\r\n - to sneak-case (underscores instead of hyphens)\r\n - words have been reordered\r\n\r\nFix #4790.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4801\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4801","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4801","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4801.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4801.patch","merged_at":1661184855000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4800","id":1331288128,"node_id":"PR_kwDODunzps48yIss","number":4800,"title":"support LargeListArray in 
pyarrow","user":{"login":"xwwwwww","id":48146603,"node_id":"MDQ6VXNlcjQ4MTQ2NjAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48146603?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xwwwwww","html_url":"https:\/\/github.com\/xwwwwww","followers_url":"https:\/\/api.github.com\/users\/xwwwwww\/followers","following_url":"https:\/\/api.github.com\/users\/xwwwwww\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xwwwwww\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xwwwwww\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xwwwwww\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xwwwwww\/orgs","repos_url":"https:\/\/api.github.com\/users\/xwwwwww\/repos","events_url":"https:\/\/api.github.com\/users\/xwwwwww\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xwwwwww\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4800). All of your documentation changes will be reflected on that endpoint.","Hi, thanks for working on this! Can you run `make style` at the repo root to fix the code quality error in CI and add a test?","Hi, I have fixed the code quality error and added a test","It seems that CI fails due to the lack of memory for allocating a large array, while I pass the test locally.","Also, the current implementation of the NumPy-to-PyArrow conversion creates a lot of copies, which is not ideal for large arrays.\r\n\r\nWe can improve performance significantly if we rewrite this part:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/83f695c14507a3a38e9f4d84612cf49e5f50c153\/src\/datasets\/features\/features.py#L1322-L1323\r\n\r\nas\r\n```python\r\n values = pa.array(arr.ravel(), type=type) \r\n```","@xwwwwww Feel free to ignore https:\/\/github.com\/huggingface\/datasets\/pull\/4800#issuecomment-1212280549 and revert the changes you've made to address it. \r\n\r\nWithout copying the array, this would be possible:\r\n```python\r\narr = np.array([\r\n [1, 2, 3],\r\n [4, 5, 6]\r\n])\r\n\r\ndset = Dataset.from_dict({\"data\": [arr]})\r\n\r\narr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n```","> @xwwwwww Feel free to ignore [#4800 (comment)](https:\/\/github.com\/huggingface\/datasets\/pull\/4800#issuecomment-1212280549) and revert the changes you've made to address it.\r\n> \r\n> Without copying the array, this would be possible:\r\n> \r\n> ```python\r\n> arr = np.array([\r\n> [1, 2, 3],\r\n> [4, 5, 6]\r\n> ])\r\n> \r\n> dset = Dataset.from_dict({\"data\": [arr]})\r\n> \r\n> arr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n> ```\r\n\r\nOh, that makes sense.","passed tests in ubuntu while failed in windows","@mariosasko Hi, do you have any clue about this failure in windows?","Perhaps we can skip the added test on Windows then.\r\n\r\nNot sure if this can help, but the ERR tool available on Windows outputs the following for the returned error code `-1073741819`:\r\n```\r\n# for decimal -1073741819 \/ hex 0xc0000005\r\n ISCSI_ERR_SETUP_NETWORK_NODE iscsilog.h\r\n# Failed to setup initiator portal. 
Error status is given in\r\n# the dump data.\r\n STATUS_ACCESS_VIOLATION ntstatus.h\r\n# The instruction at 0x%p referenced memory at 0x%p. The\r\n# memory could not be %s.\r\n USBD_STATUS_DEV_NOT_RESPONDING usb.h\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NONE (0x0), Code 0x5\r\n# for decimal 5 \/ hex 0x5\r\n WINBIO_FP_TOO_FAST winbio_err.h\r\n# Move your finger more slowly on the fingerprint reader.\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NULL (0x0), Code 0x5\r\n ERROR_ACCESS_DENIED winerror.h\r\n# Access is denied.\r\n# 5 matches found for \"-1073741819\"\r\n```","What's the proper way to skip the added test in windows?\r\nI tried `if platform.system() == 'Linux'`, but the CI test seems stuck","@mariosasko Hi, any idea about this :)","Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so: \r\n```python\r\n@pytest.mark.skipif(os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\n@pytest.mark.parametrize(...)\r\ndef test_large_array_xd_with_np(...):\r\n ...\r\n```","> Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so:\r\n> \r\n> ```python\r\n> @pytest.mark.skipif(os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\n> @pytest.mark.parametrize(...)\r\n> def test_large_array_xd_with_np(...):\r\n> ...\r\n> ```\r\n\r\nCI on windows still stucks :(","@mariosasko Hi, could you please take a look at this issue","@mariosasko Hi, all checks have passed, and we are finally ready to merge this PR :)","@lhoestq @albertvillanova Perhaps other maintainers can take a look and merge this PR :)"],"created_at":1659931126000,"updated_at":1663347270000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"```python\r\nimport numpy as np\r\nimport datasets\r\na = np.zeros((5000000, 768))\r\nres = datasets.Dataset.from_dict({\"embedding\": a})\r\n\r\n'''\r\n File \"\/home\/wenjiaxin\/anaconda3\/envs\/data\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 178, in __arrow_array__\r\n out = numpy_to_pyarrow_listarray(data)\r\n File \"\/home\/wenjiaxin\/anaconda3\/envs\/data\/lib\/python3.8\/site-packages\/datasets\/features\/features.py\", line 1173, in numpy_to_pyarrow_listarray\r\n offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32())\r\n File \"pyarrow\/array.pxi\", line 312, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 83, in pyarrow.lib._ndarray_to_array\r\n File \"pyarrow\/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Integer value 2147483904 not in range: -2147483648 to 2147483647\r\n'''\r\n```\r\n\r\n\r\nLoading a large numpy array currently raises the error above as the type of offsets is `int32`. 
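The overflow comes from the int32 offsets noted above; pyarrow's large list type uses int64 offsets instead. A downscaled sketch of constructing such a column directly:

```python
# Downscaled sketch of the fix direction: build the list column with
# int64 offsets via pyarrow.LargeListArray so row offsets past the
# int32 limit cannot overflow. The real shape from the report would
# not fit in a quick demo, so a tiny array stands in.
import numpy as np
import pyarrow as pa

arr = np.zeros((5, 768))
n_rows, row_len = arr.shape

values = pa.array(arr.ravel())
offsets = pa.array(np.arange(n_rows + 1) * row_len, type=pa.int64())
column = pa.LargeListArray.from_arrays(offsets, values)
print(column.type)  # large_list<item: double>
```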
\r\nAnd pyarrow has supported [LargeListArray](https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.LargeListArray.html) for this case.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4800\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4800","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4800","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4800.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4800.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4799","id":1330889854,"node_id":"I_kwDODunzps5PU8R-","number":4799,"title":"video dataset loader\/parser","user":{"login":"nollied","id":26421036,"node_id":"MDQ6VXNlcjI2NDIxMDM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26421036?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nollied","html_url":"https:\/\/github.com\/nollied","followers_url":"https:\/\/api.github.com\/users\/nollied\/followers","following_url":"https:\/\/api.github.com\/users\/nollied\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nollied\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nollied\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nollied\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nollied\/orgs","repos_url":"https:\/\/api.github.com\/users\/nollied\/repos","events_url":"https:\/\/api.github.com\/users\/nollied\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nollied\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! We've just started discussing the video support in `datasets` (decoding backends, video feature type, etc.), so I believe we should have something tangible by the end of this year.\r\n\r\nAlso, if you have additional video features in mind that you would like to see, feel free to let us know","Coool thanks @mariosasko "],"created_at":1659837252000,"updated_at":1660063371000,"closed_at":1660063371000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"you know how you can [use `load_dataset` with any arbitrary csv file](https:\/\/huggingface.co\/docs\/datasets\/loading#csv)? 
and you can also [use it to load a local image dataset](https:\/\/huggingface.co\/docs\/datasets\/image_load#local-files)?\r\n\r\ncould you please add functionality to load a video dataset? it would be really cool if i could point it to a bunch of video files and use pytorch to start looping through batches of videos. like if my batch size is 16, each sample in the batch is a frame from a video. i'm competing in the [minerl challenge](https:\/\/www.aicrowd.com\/challenges\/neurips-2022-minerl-basalt-competition), and it would be awesome to use the HF ecosystem.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4799\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4798","id":1330699942,"node_id":"PR_kwDODunzps48wVEG","number":4798,"title":"Shard generator","user":{"login":"marianna13","id":43296932,"node_id":"MDQ6VXNlcjQzMjk2OTMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43296932?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/marianna13","html_url":"https:\/\/github.com\/marianna13","followers_url":"https:\/\/api.github.com\/users\/marianna13\/followers","following_url":"https:\/\/api.github.com\/users\/marianna13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/marianna13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/marianna13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/marianna13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/marianna13\/orgs","repos_url":"https:\/\/api.github.com\/users\/marianna13\/repos","events_url":"https:\/\/api.github.com\/users\/marianna13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/marianna13\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, thanks!\r\n\r\n> I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size\r\n\r\n`map`, the method we use for processing in `datasets`, already does that if `batched=True`. 
And you can control the batch size with `batch_size`.\r\n\r\n> Even better - be able to run through these chunks one by one in simple and convenient way\r\n\r\nIt's not hard to do this \"manually\" with the existing API:\r\n```python\r\nbatch_size = \r\nfor i in range(len(dset) \/\/ batch_size)\r\n shard = dset[i * batch_size:(i+1) * batch_size] # a dict of lists\r\n shard = Dataset.from_dict(shard)\r\n```\r\n(should be of similar performance to your implementation)\r\n\r\nStill, I think an API like that could be useful if implemented efficiently (see [this](https:\/\/discuss.huggingface.co\/t\/why-is-it-so-slow-to-access-data-through-iteration-with-hugginface-dataset\/20385) discussion to understand what's the issue with `select`\/`__getitem__` on which your implementation relies on), which can be done with `pa.Table.to_reader` in PyArrow 8.0.0+, .\r\n\r\n@lhoestq @albertvillanova wdyt? We could use such API to efficiently iterate over the batches in `map` before processing them.","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4798). All of your documentation changes will be reflected on that endpoint.","This is more efficient since it doesn't bring the data in memory:\r\n```python\r\nfor i in range(len(dset) \/\/ batch_size)\r\n start = i * batch_size\r\n end = min((i+1) * batch_size, len(dset))\r\n shard = dset.select(range(start, end))\r\n```\r\n\r\n@marianna13 can you give more details on when it would be handy to have this shard generator ?","> This is more efficient since it doesn't bring the data in memory:\r\n> \r\n> ```python\r\n> for i in range(len(dset) \/\/ batch_size)\r\n> start = i * batch_size\r\n> end = min((i+1) * batch_size, len(dset))\r\n> shard = dset.select(range(start, end))\r\n> ```\r\n> \r\n> @marianna13 can you give more details on when it would be handy to have this shard generator ?\r\n\r\nSure! I used such generator when I needed to process a very large dataset (>1TB) in parallel, I've found out empirically that it's much more efficient to do that by processing only one part of the dataset with the shard generator. I tried to use a map with batching but it causesd oom errors, I tried to use the normal shard and here's what I came up with. So I thought it might be helpful to someone else!","I see thanks ! `map` should work just fine even at this scale, feel free to open an issue if you'd like to discuss your OOM issue.\r\n\r\nRegarding `shard_generator`, since it is pretty straightforward to get shards I'm not sure we need that extra Dataset method"],"created_at":1659777246000,"updated_at":1660906235000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size. Even better - be able to run through these chunks one by one in simple and convenient way. So I decided to add the method called shard_generator() to the main Dataset class. 
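A sketch of what such a generator can look like today on top of the public API, in the spirit of the `select`-based suggestion above (`iter_shards` is a name invented for this sketch; `shard_size` mirrors the attribute named in the proposal):

```python
# Sketch of the proposed behaviour using only the existing public
# API: yield equal-sized contiguous shards via select(), which keeps
# the data memory-mapped instead of loading it into memory.
from datasets import Dataset, load_dataset

def iter_shards(dset: Dataset, shard_size: int):
    for start in range(0, len(dset), shard_size):
        yield dset.select(range(start, min(start + shard_size, len(dset))))

ds = load_dataset("rotten_tomatoes", split="validation")
print(next(iter_shards(ds, 300)).num_rows)  # 300
```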
It works similar to shard method but it returns a generator of datasets with equal size (defined by shard_size attribute).\r\nExample:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"rotten_tomatoes\", split=\"validation\")\r\n>>> ds\r\nDataset({\r\n features: ['text', 'label'],\r\n num_rows: 1066\r\n})\r\n>>> next(ds.shard_generator(300))\r\nDataset({\r\n features: ['text', 'label'],\r\n num_rows: 300\r\n})\r\n```\r\nI hope it can be helpful to someone. Thanks!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4798\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4798","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4798","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4798.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4798.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4797","id":1330000998,"node_id":"PR_kwDODunzps48uL-t","number":4797,"title":"Torgo dataset creation","user":{"login":"YingLi001","id":75192317,"node_id":"MDQ6VXNlcjc1MTkyMzE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75192317?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YingLi001","html_url":"https:\/\/github.com\/YingLi001","followers_url":"https:\/\/api.github.com\/users\/YingLi001\/followers","following_url":"https:\/\/api.github.com\/users\/YingLi001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YingLi001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YingLi001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YingLi001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YingLi001\/orgs","repos_url":"https:\/\/api.github.com\/users\/YingLi001\/repos","events_url":"https:\/\/api.github.com\/users\/YingLi001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YingLi001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). 
You have the instructions in our docs: \r\n- [Create a dataset loading script](https:\/\/huggingface.co\/docs\/datasets\/dataset_script)\r\n- [Create a dataset card](https:\/\/huggingface.co\/docs\/datasets\/dataset_card)\r\n- [Share](https:\/\/huggingface.co\/docs\/datasets\/share)\r\n\r\nFeel free to ask if you need any additional support\/help."],"created_at":1659709106000,"updated_at":1660070760000,"closed_at":1660070760000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4797\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4797","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4797","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4797.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4797.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4796","id":1329887810,"node_id":"I_kwDODunzps5PRHpC","number":4796,"title":"ArrowInvalid: Could not convert with type JpegImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column b with type object')\r\n```\r\n\r\nWill the PR linked above also fix that?","I would expect this to work, but it doesn't. Shouldn't be too hard to fix tho (in a subsequent PR).","Hi @mariosasko just wanted to check in if there is a PR to follow for this. I was looking to create a demo app using this. If it's not working I can just use byte encoded images in the dataset which are not displayed. ","Hi @darraghdog! No PR yet, but I plan to fix this before the next release."],"created_at":1659703279000,"updated_at":1660912890000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nfrom PIL import Image\r\n\r\ndataset = load_dataset(\"hf-internal-testing\/example-documents\")\r\n\r\n# load any random Pillow image\r\nimage = Image.open(\"\/content\/cord_example.png\").convert(\"RGB\")\r\n\r\nnew_image = {'image': image}\r\ndataset['test'] = dataset['test'].add_item(new_image)\r\n```\r\n\r\n## Expected results\r\nThe image should be automatically casted to the Image feature when using `add_item`. 
For now, this can be fixed by using `encode_example`:\r\n\r\n```\r\nimport datasets\r\n\r\nfeature = datasets.Image(decode=False)\r\nnew_image = {'image': feature.encode_example(image)}\r\ndataset['test'] = dataset['test'].add_item(new_image)\r\n```\r\n\r\n## Actual results\r\n\r\n```\r\nArrowInvalid: Could not convert with type Image: did not recognize Python value type when inferring an Arrow data type\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4796\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4795","id":1329525732,"node_id":"I_kwDODunzps5PPvPk","number":4795,"title":"Missing MBPP splits","user":{"login":"stadlerb","id":2452384,"node_id":"MDQ6VXNlcjI0NTIzODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2452384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stadlerb","html_url":"https:\/\/github.com\/stadlerb","followers_url":"https:\/\/api.github.com\/users\/stadlerb\/followers","following_url":"https:\/\/api.github.com\/users\/stadlerb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stadlerb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stadlerb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stadlerb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stadlerb\/orgs","repos_url":"https:\/\/api.github.com\/users\/stadlerb\/repos","events_url":"https:\/\/api.github.com\/users\/stadlerb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stadlerb\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting this as well, @stadlerb.\r\n\r\nI suggest waiting for the answer of the data owners... ","@albertvillanova The first author of the paper responded to the upstream issue:\r\n> Task IDs 11-510 are the 500 test problems. We use 90 problems (511-600) for validation and then remaining 374 for fine-tuning (601-974). The other problems can be used as desired, either for training or few-shot prompting (although this should be specified).","Thanks for the follow-up, @stadlerb.\r\n\r\nWould you be willing to open a Pull Request to address this issue? 
:wink: ","Opened a [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/4943) to implement this--lmk if you have any feedback"],"created_at":1659682261000,"updated_at":1663072044000,"closed_at":1663072044000,"author_association":"NONE","active_lock_reason":null,"body":"(@albertvillanova)\r\nThe [MBPP dataset on the Hub](https:\/\/huggingface.co\/datasets\/mbpp) has only a test split for both its \"full\" and its \"sanitized\" subset, while the [paper](https:\/\/arxiv.org\/abs\/2108.07732) states in subsection 2.1 regarding the full split:\r\n> In the experiments described later in the paper, we hold out 10 problems for **few-shot prompting**, another 500 as our **test** dataset (which is used to evaluate both few-shot inference and fine-tuned models), 374 problems for **fine-tuning**, and the rest for **validation**.\r\n\r\nIf the dataset on the Hub should reproduce most closely what the original authors use, I guess this four-way split should be reflected. \r\n\r\nThe paper doesn't explicitly state the task_id ranges of the splits, but the [GitHub readme](https:\/\/github.com\/google-research\/google-research\/tree\/master\/mbpp) referenced in the paper specifies exact task_id ranges, although it misstates the total number of samples:\r\n> We specify a train and test split to use for evaluation. Specifically:\r\n> \r\n> * Task IDs 11-510 are used for evaluation.\r\n> * Task IDs 1-10 and 511-1000 are used for training and\/or prompting. We typically used 1-10 for few-shot prompting, although you can feel free to use any of the training examples.\r\n\r\nI.e. the few-shot, train and validation splits are combined into one split, with a soft suggestion of using the first ten for few-shot prompting. It is not explicitly stated whether the 374 fine-tuning samples mentioned in the paper have task_id 511 to 784 or 601 to 974 or are randomly sampled from task_id 511 to 974.\r\n\r\nRegarding the \"sanitized\" split the paper states the following:\r\n> For evaluations involving the edited dataset, we perform comparisons with 100 problems that appear in both the original and edited dataset, using the same held out 10 problems for few-shot prompting and 374 problems for fine-tuning. \r\n\r\nThe statement doesn't appear to be very precise, as among the 10 few-shot problems, those with task_id 1, 5 and 10 are not even part of the sanitized variant, and many from the task_id range from 511 to 974 are missing (e.g. task_id 511 to 553). I suppose the idea the task_id ranges for each split remain the same, even if some of the task_ids are not present. 
That would result in 7 few-shot, 257 test, 141 train and 22 validation examples in the sanitized split.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4795\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4792","id":1328593929,"node_id":"I_kwDODunzps5PMLwJ","number":4792,"title":"Add DocVQA","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```"],"created_at":1659618446000,"updated_at":1659936680000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** DocVQA\r\n- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a \u201cpurpose-driven\u201d point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information. \r\n- **Paper:** https:\/\/arxiv.org\/abs\/2007.00398\r\n- **Data:** https:\/\/www.docvqa.org\/datasets\/docvqa\r\n- **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. 
Would be very handy to directly load this dataset from the hub.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4792\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4791","id":1328571064,"node_id":"I_kwDODunzps5PMGK4","number":4791,"title":"Dataset Viewer issue for Team-PIXEL\/rendered-wikipedia-english","user":{"login":"xplip","id":25847814,"node_id":"MDQ6VXNlcjI1ODQ3ODE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25847814?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xplip","html_url":"https:\/\/github.com\/xplip","followers_url":"https:\/\/api.github.com\/users\/xplip\/followers","following_url":"https:\/\/api.github.com\/users\/xplip\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xplip\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xplip\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xplip\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xplip\/orgs","repos_url":"https:\/\/api.github.com\/users\/xplip\/repos","events_url":"https:\/\/api.github.com\/users\/xplip\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xplip\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. It's a known issue that should be fixed soon. Meanwhile, I had to manually trigger the dataset viewer. It's OK now.\r\nNote that the extreme aspect ratio of the images generates another issue, that we're inspecting."],"created_at":1659617356000,"updated_at":1659620596000,"closed_at":1659620596000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/Team-PIXEL\/rendered-wikipedia-english\/viewer\/rendered-wikipedia-en\/train\n\n### Description\n\nThe dataset can be loaded fine but the viewer shows this error:\r\n\r\n```\r\nServer Error\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The dataset does not exist.\r\n```\r\n\r\nI'm guessing this is because I recently renamed the dataset. Based on related issues (e.g. 
https:\/\/github.com\/huggingface\/datasets\/issues\/4759) , is there something server-side that needs to be refreshed?\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4791\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4790","id":1328546904,"node_id":"I_kwDODunzps5PMARY","number":4790,"title":"Issue with fine classes in trec dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1659616131000,"updated_at":1661184856000,"closed_at":1661184856000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nAccording to their paper, the TREC dataset contains 2 kinds of classes:\r\n- 6 coarse classes: TREC-6\r\n- 50 fine classes: TREC-50\r\n\r\nHowever, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:\r\n- We have one `desc` fine label instead of 2:\r\n - `DESC:desc`\r\n - `HUM:desc`\r\n- We have one `other` fine label instead of 3:\r\n - `ENTY:other`\r\n - `LOC:other`\r\n - `NUM:other`\r\n\r\nFrom their paper:\r\n> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. 
The hierarchy contains 6 coarse classes and 50 fine classes,\r\n\r\n> Each coarse class contains a non-overlapping set of fine classes.\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4790\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4789","id":1328409253,"node_id":"PR_kwDODunzps48o3Kk","number":4789,"title":"Update doc upload_dataset.mdx","user":{"login":"mishig25","id":11827707,"node_id":"MDQ6VXNlcjExODI3NzA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11827707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mishig25","html_url":"https:\/\/github.com\/mishig25","followers_url":"https:\/\/api.github.com\/users\/mishig25\/followers","following_url":"https:\/\/api.github.com\/users\/mishig25\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mishig25\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mishig25\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mishig25\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mishig25\/orgs","repos_url":"https:\/\/api.github.com\/users\/mishig25\/repos","events_url":"https:\/\/api.github.com\/users\/mishig25\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mishig25\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659608640000,"updated_at":1662741430000,"closed_at":1662741298000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4789\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4789","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4789","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4789.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4789.patch","merged_at":1662741298000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4788","id":1328246021,"node_id":"PR_kwDODunzps48oUNx","number":4788,"title":"Fix NonMatchingChecksumError in mbpp dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thank you for the quick response! Before noticing that you already had implemented the fix, I already had implemened my own version. I'd also suggest bumping the major version because the contents of the dataset changed, even if only slightly.\r\nI'll attach my version of the affected files: [mbpp-checksum-changes.zip](https:\/\/github.com\/huggingface\/datasets\/files\/9258161\/mbpp-checksum-changes.zip).","Hi @stadlerb, thanks for your feedback.\r\n\r\nWe normally update the major version whenever there is a new dataset release, usually with a breaking change in schema. The patch version is updated whenever there is a small correction in the dataset that does not change its schema.\r\n\r\nAs a side note for future contributions, please note that this dataset is hosted in our library GitHub repository. Therefore, the PRs to GitHub-hosted datasets needs being done through GitHub.\r\n\r\nCurrently added datasets are hosted on the Hub and for them, PRs can be done through the Hub.","I just noticed another problem with the dataset: The [GitHub page](https:\/\/github.com\/google-research\/google-research\/tree\/master\/mbpp) and the [paper](http:\/\/arxiv.org\/abs\/2108.07732) mention a train-test split, which is not reflected in the dataloader. I'll open a new issue regarding this later."],"created_at":1659601060000,"updated_at":1659634440000,"closed_at":1659633661000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix issue reported on the Hub: https:\/\/huggingface.co\/datasets\/mbpp\/discussions\/1\r\n\r\nFix #4787. 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4788\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4788","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4788","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4788.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4788.patch","merged_at":1659633661000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4787","id":1328243911,"node_id":"I_kwDODunzps5PK2TH","number":4787,"title":"NonMatchingChecksumError in mbpp dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1659600951000,"updated_at":1659633661000,"closed_at":1659633661000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nAs reported on the Hub [Fix Checksum Mismatch](https:\/\/huggingface.co\/datasets\/mbpp\/discussions\/1), there is a `NonMatchingChecksumError` when loading mbpp dataset\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nds = load_dataset(\"mbpp\", \"full\")\r\n```\r\n\r\n## Expected results\r\nLoading of the dataset without any exception raised.\r\n\r\n## Actual results\r\n```\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 ds = load_dataset(\"mbpp\", \"full\")\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1791 \r\n 1792 # Download and prepare data\r\n-> 1793 builder_instance.download_and_prepare(\r\n 1794 download_config=download_config,\r\n 1795 download_mode=download_mode,\r\n...\/huggingface\/datasets\/src\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, 
**download_and_prepare_kwargs)\r\n 702 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 703 if not downloaded_from_gcs:\r\n--> 704 self._download_and_prepare(\r\n 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1225 \r\n 1226 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1228 \r\n 1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 773 # Checksums verification\r\n 774 if verify_infos and dl_manager.record_checksums:\r\n--> 775 verify_checksums(\r\n 776 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 777 )\r\n\r\n...\/huggingface\/datasets\/src\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/raw.githubusercontent.com\/google-research\/google-research\/master\/mbpp\/mbpp.jsonl']\r\n```\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4787\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4786","id":1327340828,"node_id":"I_kwDODunzps5PHZ0c","number":4786,"title":".save_to_disk('path', fs=s3) TypeError 
","user":{"login":"hongknop","id":110547763,"node_id":"U_kgDOBpbTMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/110547763?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hongknop","html_url":"https:\/\/github.com\/hongknop","followers_url":"https:\/\/api.github.com\/users\/hongknop\/followers","following_url":"https:\/\/api.github.com\/users\/hongknop\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hongknop\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hongknop\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hongknop\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hongknop\/orgs","repos_url":"https:\/\/api.github.com\/users\/hongknop\/repos","events_url":"https:\/\/api.github.com\/users\/hongknop\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hongknop\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1659538169000,"updated_at":1659540180000,"closed_at":1659540180000,"author_association":"NONE","active_lock_reason":null,"body":"The following code:\r\n```python\r\nimport datasets\r\n\r\ntrain_dataset, test_dataset = load_dataset(\"imdb\", split=[\"train\", \"test\"])\r\ns3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)\r\ntrain_dataset.save_to_disk(\"s3:\/\/datasets\/\", fs=s3)\r\n\r\n```\r\nproduces following traceback:\r\n\r\n```shell\r\nFile \"C:\\Users\\Hong Knop\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\botocore\\auth.py\", line 374, in scope\r\n return '\/'.join(scope)\r\n\r\n```\r\nI invoke print(scope) in (line 373) and find this:\r\n\r\n```python\r\n[('4VA08VLL3VTKQJKCAI8M',), '20220803', 'us-east-1', 's3', 'aws4_request']\r\n\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4786\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4785","id":1327225826,"node_id":"PR_kwDODunzps48k8y4","number":4785,"title":"Require torchaudio<0.12.0 in 
docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659533520000,"updated_at":1659539263000,"closed_at":1659538336000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds to docs the requirement of torchaudio<0.12.0 to avoid RuntimeError.\r\n\r\nSubsequent to PR:\r\n- #4777","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4785\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4785","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4785","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4785.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4785.patch","merged_at":1659538336000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4784","id":1326395280,"node_id":"I_kwDODunzps5PDy-Q","number":4784,"title":"Add Multiface 
dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":3608941089,"node_id":"LA_kwDODunzps7XHBIh","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/vision","name":"vision","color":"bfdadc","default":false,"description":"Vision datasets"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @osanseviero I would like to add this dataset.","Hey @nandwalritik! Thanks for offering to help!\r\n\r\nThis dataset might be somewhat complex and I'm concerned about it being 65 TB, which would be quite expensive to host. @lhoestq @mariosasko I would love your input if you think it's worth adding this dataset.","Thanks for proposing this interesting dataset, @osanseviero.\r\n\r\nPlease note that the data files are already hosted in a third-party server: e.g. the index of data files for entity \"6795937\" is at https:\/\/fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com\/MugsyDataRelease\/v0.0\/identities\/6795937\/index.html \r\n- audio.tar: https:\/\/fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com\/MugsyDataRelease\/v0.0\/identities\/6795937\/audio.tar\r\n- ...\r\n\r\nTherefore, in principle, we don't need to host them on our Hub: it would be enough to just implement a loading script in the corresponding Hub dataset repo, e.g. \"facebook\/multiface\"..."],"created_at":1659474022000,"updated_at":1659969756000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** Multiface dataset\r\n- **Description:** f high quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. 
An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject with a capture rate of 30 fps\r\n- **Data:** https:\/\/github.com\/facebookresearch\/multiface\r\n\r\nThe whole dataset is 65 TB though, so I'm not sure about hosting it.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4784\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4783","id":1326375011,"node_id":"PR_kwDODunzps48iHey","number":4783,"title":"Docs for creating a loading script for image datasets","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","IMO it would make more sense to add a \"Create image dataset\" page with two main sections - a no-code approach with `imagefolder` + metadata (preferred way), and with a loading script (advanced). It should be clear when to choose which. 
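\r\n\r\nFor reference, the no-code path is essentially a single call (a sketch; it assumes a directory of images with an optional `metadata.jsonl` providing extra columns):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Build an image dataset straight from a directory tree; labels can be\r\n# inferred from subfolder names, and metadata.jsonl can add extra columns.\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"path\/to\/folder\")\r\n```\r\n\r\n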
If we leave this as-is, the user who jumps straight to the Vision section could be under the impression that writing a loading script is the preferred way to share a vision dataset due to how this subsection starts:\r\n```\r\nWrite a dataset loading script to share a dataset.\r\n```\r\n \r\nAlso, I think a note explaining how to make a dataset gated\/disable the viewer to hide the data would be beneficial (it's pretty common to require submitting a form to access a CV dataset).","Great suggestion @mariosasko! I added your suggestions, let me know what you think. For gated dataset access, I just added a tip referring users to the relevant docs since it's more of a Hub feature than a `datasets` feature.","Thanks, looks much better now :). I would also move the sections explaining how to create an `imagefolder` for the specific task from the [loading page](https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/docs\/source\/image_load.mdx) to this one. IMO it makes more sense to have the basic info (imagefolder structure + `load_dataset` call) there + a link to this page for info on how to create an image folder dataset.","Good idea! Moved everything about `imagefolder` + metadata to the create an image dataset section since the `load_dataset` call is the same for different computer vision tasks. ","Thanks for all the feedback! \ud83e\udd70\r\n\r\nWhat do you think about covering how to share an `ImageFolder` dataset in a separate PR? I think we should create a new section under `Vision` for how to share an image dataset.","I love it, thanks! I think moving forward we can use CSV instead of JSON Lines in the docs ;)"],"created_at":1659472563000,"updated_at":1662743294000,"closed_at":1662577654000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 
\ud83d\ude42 \r\n\r\nTo do:\r\n- [x] Document how to create different configurations.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4783\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4783","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4783","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4783.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4783.patch","merged_at":1662577654000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4782","id":1326247158,"node_id":"I_kwDODunzps5PDOz2","number":4782,"title":"pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648","user":{"login":"conceptofmind","id":25208228,"node_id":"MDQ6VXNlcjI1MjA4MjI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25208228?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/conceptofmind","html_url":"https:\/\/github.com\/conceptofmind","followers_url":"https:\/\/api.github.com\/users\/conceptofmind\/followers","following_url":"https:\/\/api.github.com\/users\/conceptofmind\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/conceptofmind\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/conceptofmind\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/conceptofmind\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/conceptofmind\/orgs","repos_url":"https:\/\/api.github.com\/users\/conceptofmind\/repos","events_url":"https:\/\/api.github.com\/users\/conceptofmind\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/conceptofmind\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @conceptofmind.\r\n\r\nCould you please give details about your environment? 
\r\n```\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```","Hi @albertvillanova ,\r\n\r\nHere is the environment information:\r\n```\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2\r\n```\r\nThanks,\r\n\r\nEnrico","I think this issue is solved here https:\/\/discuss.huggingface.co\/t\/minhash-deduplication\/19992\/12?u=loubnabnl, this only happens for very large datasets we will update it in CodeParrot code","Hi @loubnabnl,\r\n\r\nYes, the issue is solved in the discussion thread.\r\n\r\nI will close this issue.\r\n\r\nThank you again for all of your help.\r\n\r\nEnrico","Thanks @loubnabnl for pointing out the solution to this issue."],"created_at":1659465365000,"updated_at":1661161588000,"closed_at":1660961513000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nFollowing the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndataset_name = \"the_pile\"\r\nds = load_dataset(dataset_name, split=\"train\")\r\nds = ds.map(preprocess, num_proc=num_workers)\r\nuniques = set(ds.unique(\"hash\"))\r\n```\r\nGists for minimum reproducible example:\r\nhttps:\/\/gist.github.com\/conceptofmind\/c5804428ea1bd89767815f9cd5f02d9a\r\nhttps:\/\/gist.github.com\/conceptofmind\/feafb07e236f28d79c2d4b28ffbdb6e2\r\n\r\n## Expected results\r\nChunking and writing out a deduplicated dataset. \r\n\r\n## Actual results\r\n```\r\nreturn dataset._data.column(column).unique().to_pylist()\r\nFile \"pyarrow\/table.pxi\", line 394, in pyarrow.lib.ChunkedArray.unique\r\nFile \"pyarrow\/_compute.pyx\", line 531, in pyarrow._compute.call_function\r\nFile \"pyarrow\/_compute.pyx\", line 330, in pyarrow._compute.Function.call\r\nFile \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\nFile \"pyarrow\/error.pxi\", line 124, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4782\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4781","id":1326114161,"node_id":"PR_kwDODunzps48hOie","number":4781,"title":"Fix label renaming and add a battery of 
tests","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Why don't we deprecate label renaming already instead ?","I think it'll break a lot of workflows if we deprecate it now! There isn't really a non-deprecated workflow yet - once we've added the `auto_rename_labels` option, then we can have `prepare_tf_dataset` on the `transformers` side use that, and then we can consider setting the default option to `False`, or beginning to deprecate it somehow.","I'm worried it's a bit of a waste of time to continue working on this behavior that shouldn't be here in the first place. Do you have a plan in mind ?","@lhoestq Broadly! The plan is:\r\n\r\n1) Create the `auto_rename_labels` flag with this PR and skip label renaming if it isn't set. Leave it as `True` for backward compatibility.\r\n2) Add the label renaming logic to `model.prepare_tf_dataset` in `transformers`. That method calls `to_tf_dataset()` right now. Once the label renaming logic is moved there, `model.prepare_tf_dataset` will set `auto_rename_labels=False` when calling `to_tf_dataset()`, and do label renaming itself.\r\n\r\nAfter step 2, `auto_rename_labels` is now only necessary for backward compatibility when users use `to_tf_dataset` directly. I want to leave it alone for a while because the `model.prepare_tf_dataset` workflow is very new. However, once it is established, we can deprecate `auto_rename_labels` and then finally remove it from the `datasets` code and keep it in `transformers` where it belongs.","I see ! Could it be possible to not add `auto_rename_labels` at all, since you want to remove it at the end ? Something roughly like this:\r\n1. show a warning in `to_tf_dataset` whevener a label is renamed automatically, saying that in the next major release this will be removed\r\n1. add the label renaming logic in `transformers` (to not have the warning)\r\n1. after some time, do a major release 3.0.0 and remove label renaming completely in `to_tf_dataset`\r\n\r\nWhat do you think ? cc @LysandreJik in case you have an opinion on this process.","@lhoestq I think that plan is mostly good, but if we make the change to `datasets` first then all users will keep getting deprecation warnings until we update the method in `transformers` and release a new version. 
\r\n\r\nI think we can follow your plan, but make the change to `transformers` first and wait for a new release before changing `datasets` - that way there are no visible warnings or API changes for users using `prepare_tf_dataset`. It also gives us more time to update the docs and try to move people to `prepare_tf_dataset` so they aren't confused by this!","Sounds good to me! To summarize:\r\n1. add the label renaming logic in `transformers` + release\r\n1. show a warning in `to_tf_dataset` whenever a label is renamed automatically, saying that in the next major release this will be removed + minor release\r\n1. after some time, do a major release 3.0.0 and remove label renaming completely in `to_tf_dataset`","Yep, that's the plan! ","@lhoestq Are you okay with me merging this for now? ","Can you remove `auto_rename_labels`? I don't think it's a good idea to add it if the plan is to remove it later","Right now, the `auto_rename_labels` behaviour happens in all cases! Making it an option is the first step in the process of disabling it (and moving the functionality to `transformers`) and then finally deprecating it."],"created_at":1659458527000,"updated_at":1662982026000,"closed_at":1662981885000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR makes some changes to label renaming in `to_tf_dataset()`, both to fix some issues when users input something we weren't expecting, and also to make it easier to deprecate label renaming in future, if\/when we want to move this special-casing logic to a function in `transformers`.\r\n\r\nThe main changes are:\r\n- Label renaming now only happens when the `auto_rename_labels` argument is set. For backward compatibility, this defaults to `True` for now.\r\n- If the user requests \"label\" but the data collator renames that column to \"labels\", the label renaming logic will now handle that case correctly.\r\n- Added a battery of tests to make this more reliable in future.\r\n- Adds an optimization to loading in `to_tf_dataset()` for unshuffled datasets (uses slicing instead of a list of indices)\r\n\r\nFixes #4772","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4781\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4781","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4781","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4781.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4781.patch","merged_at":1662981885000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4780","id":1326034767,"node_id":"PR_kwDODunzps48g9oA","number":4780,"title":"Remove apache_beam import from module level in natural_questions 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659454494000,"updated_at":1659456993000,"closed_at":1659456197000,"author_association":"MEMBER","active_lock_reason":null,"body":"Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`.\r\n\r\nFix #4779.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4780\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4780","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4780","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4780.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4780.patch","merged_at":1659456197000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4779","id":1325997225,"node_id":"I_kwDODunzps5PCRyp","number":4779,"title":"Loading natural_questions requires apache_beam even with existing preprocessed 
data","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1659452817000,"updated_at":1659456198000,"closed_at":1659456198000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nWhen loading \"natural_questions\", the package \"apache_beam\" is required:\r\n```\r\nImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.\r\nPlease install it using 'pip install apache_beam' for instance'\r\n```\r\n\r\nThis requirement is unnecessary, once there exists preprocessed data and the script just needs to download it.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nload_dataset(\"natural_questions\", \"dev\", split=\"validation\", revision=\"main\")\r\n```\r\n\r\n## Expected results\r\nNo ImportError raised.\r\n\r\n## Actual results\r\n```\r\nImportError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n----> 1 from datasets import load_dataset; ds = load_dataset(\"natural_questions\", \"dev\", split=\"validation\", revision=\"main\")\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1732 revision=revision,\r\n 1733 use_auth_token=use_auth_token,\r\n-> 1734 **config_kwargs,\r\n 1735 )\r\n 1736 \r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1504 download_mode=download_mode,\r\n 1505 data_dir=data_dir,\r\n-> 1506 data_files=data_files,\r\n 1507 )\r\n 1508 \r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1245 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1246 
) from None\r\n-> 1247 raise e1 from None\r\n 1248 else:\r\n 1249 raise FileNotFoundError(\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1180 download_config=download_config,\r\n 1181 download_mode=download_mode,\r\n-> 1182 dynamic_modules_path=dynamic_modules_path,\r\n 1183 ).get_module()\r\n 1184 elif path.count(\"\/\") == 1: # community dataset on the Hub\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in get_module(self)\r\n 490 base_path=hf_github_url(path=self.name, name=\"\", revision=revision),\r\n 491 imports=imports,\r\n--> 492 download_config=self.download_config,\r\n 493 )\r\n 494 additional_files = [(config.DATASETDICT_INFOS_FILENAME, dataset_infos_path)] if dataset_infos_path else []\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py](https:\/\/localhost:8080\/#) in _download_additional_modules(name, base_path, imports, download_config)\r\n 214 _them_str = \"them\" if len(needs_to_be_installed) > 1 else \"it\"\r\n 215 raise ImportError(\r\n--> 216 f\"To be able to use {name}, you need to install the following {_depencencies_str}: \"\r\n 217 f\"{', '.join(needs_to_be_installed)}.\\nPlease install {_them_str} using 'pip install \"\r\n 218 f\"{' '.join(needs_to_be_installed.values())}' for instance'\"\r\n\r\nImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.\r\nPlease install it using 'pip install apache_beam' for instance'\r\n```\r\n\r\n## Environment info\r\nColab notebook.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4779\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4778","id":1324928750,"node_id":"PR_kwDODunzps48dRPh","number":4778,"title":"Update local loading script 
docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4778). All of your documentation changes will be reflected on that endpoint.","I would rather have a section in the docs that explains how to modify the script of an existing dataset (`inspect_dataset` + modification + `load_dataset`) instead of focusing on the GH datasets bundled with the source (only applicable for devs).","Good idea! I went with @mariosasko's suggestion to use `inspect_dataset` instead of cloning a dataset repository since it's a good opportunity to show off more of the library's lesser-known functions if that's ok with everyone :)","One advantage of cloning the repo is that it fetches potential data files referenced inside a script using relative paths, so if we decide to use `inspect_dataset`, we should at least add a tip to explain this limitation and how to circumvent it.","Oh you're right. Calling `load_dataset` on the modified script without having the files that come with it is not ideal. 
I agree it should be `git clone` instead - and inspect is for inspection only ^^'"],"created_at":1659385267000,"updated_at":1661272346000,"closed_at":1661272342000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR clarifies the local loading script section to include how to load a dataset after you've modified the local loading script (closes #4732).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4778\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4778","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4778","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4778.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4778.patch","merged_at":1661272342000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4777","id":1324548784,"node_id":"PR_kwDODunzps48cByL","number":4777,"title":"Require torchaudio<0.12.0 to avoid RuntimeError","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659365450000,"updated_at":1659461714000,"closed_at":1659460899000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to:\r\n- https:\/\/github.com\/huggingface\/transformers\/issues\/18379\r\n\r\nFix partially #4776. 
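The two workflows under discussion, side by side (a sketch; the repo name and script filename are illustrative, and per the caveat above `inspect_dataset` only copies the script, so data files referenced by relative paths must be fetched separately):

```python
from datasets import inspect_dataset, load_dataset

# Workflow 1: copy the loading script locally, edit it, then load the copy.
inspect_dataset("lhoestq/custom_squad", local_path="path/to/local/folder")
# ... edit path/to/local/folder/custom_squad.py ...
ds = load_dataset("path/to/local/folder/custom_squad.py")

# Workflow 2: clone the whole repo, so relative-path data files come along:
#   git clone https://huggingface.co/datasets/lhoestq/custom_squad
ds = load_dataset("path/to/cloned/custom_squad")
```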
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4777\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4777","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4777","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4777.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4777.patch","merged_at":1659460899000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4776","id":1324493860,"node_id":"I_kwDODunzps5O8iwk","number":4776,"title":"RuntimeError when using torchaudio 0.12.0 to load MP3 audio file","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Requiring torchaudio<0.12.0 isn't really a viable solution because that implies torch<0.12.0 which means no sm_86 CUDA support which means no RTX 3090 support in PyTorch.\r\n\r\nBut in my case, the error only occurs if `_fallback_load` resolves to `_fail_load` inside torchaudio 0.12.0 which is only the case if FFMPEG initialization failed: https:\/\/github.com\/pytorch\/audio\/blob\/b1f510fa5681e92ee82bdc6b2d1ed896799fc32c\/torchaudio\/backend\/sox_io_backend.py#L36-L47\r\n\r\nThat means the proper solution for torchaudio>=0.12.0 is to check `torchaudio._extension._FFMPEG_INITIALIZED` and if it is False, then we need to remind the user to install a dynamically linked ffmpeg 4.1.8 and then maybe call `torchaudio._extension._init_ffmpeg()` to force a user-visible exception showing the missing ffmpeg dynamic library name.\r\n\r\nOn my system, installing \r\n\r\n- libavcodec.so.58 \r\n- libavdevice.so.58 \r\n- libavfilter.so.7 \r\n- libavformat.so.58 \r\n- libavutil.so.56 \r\n- libswresample.so.3 \r\n- 
libswscale.so.5\r\n\r\nfrom ffmpeg 4.1.8 made HF datasets 2.3.2 work just fine with torchaudio 0.12.1+cu116:\r\n\r\n```python3\r\nimport sox, torchaudio, datasets\r\nprint('torchaudio', torchaudio.__version__)\r\nprint('datasets', datasets.__version__)\r\ntorchaudio._extension._init_ffmpeg()\r\nprint(torchaudio._extension._FFMPEG_INITIALIZED)\r\nwaveform, sample_rate = torchaudio.load('\/workspace\/.cache\/huggingface\/datasets\/downloads\/extracted\/8e5aa88585efa2a4c74c6664b576550d32b7ff9c3d1d17cc04f44f11338c3dc6\/cv-corpus-8.0-2022-01-19\/en\/clips\/common_voice_en_100038.mp3', format='mp3')\r\nprint(waveform.shape)\r\n```\r\n\r\n```\r\ntorchaudio 0.12.1+cu116\r\ndatasets 2.3.2\r\nTrue\r\ntorch.Size([1, 369792])\r\n```","Related: https:\/\/github.com\/huggingface\/datasets\/issues\/4889"],"created_at":1659363083000,"updated_at":1661360143000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Current version of `torchaudio` (0.12.0) raises a RuntimeError when trying to use `sox_io` backend but non-Python dependency `sox` is not installed:\r\nhttps:\/\/github.com\/pytorch\/audio\/blob\/2e1388401c434011e9f044b40bc8374f2ddfc414\/torchaudio\/backend\/sox_io_backend.py#L21-L29\r\n```python\r\ndef _fail_load(\r\n filepath: str,\r\n frame_offset: int = 0,\r\n num_frames: int = -1,\r\n normalize: bool = True,\r\n channels_first: bool = True,\r\n format: Optional[str] = None,\r\n) -> Tuple[torch.Tensor, int]:\r\n raise RuntimeError(\"Failed to load audio from {}\".format(filepath))\r\n```\r\n\r\nMaybe we should raise a more actionable error message so that the user knows how to fix it.\r\n\r\nUPDATE:\r\n- this is an incompatibility of latest torchaudio (0.12.0) and the sox backend\r\n\r\nTODO:\r\n- [x] as a temporary solution, we should recommend installing torchaudio<0.12.0\r\n - #4777 \r\n - #4785\r\n- [ ] however, a stable solution must be found for torchaudio>=0.12.0\r\n\r\nRelated to: \r\n- https:\/\/github.com\/huggingface\/transformers\/issues\/18379","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4776\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4775","id":1324136486,"node_id":"I_kwDODunzps5O7Lgm","number":4775,"title":"Streaming not supported in 
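A sketch of the check proposed in the comment above, using torchaudio's private extension hooks (`_FFMPEG_INITIALIZED` and `_init_ffmpeg()` are private APIs, assumed stable only within the 0.12.x releases):

```python
import torchaudio


def ensure_mp3_support() -> None:
    # Fail early with an actionable message when the FFMPEG libs are missing,
    # instead of letting decoding fall through to _fail_load later.
    if getattr(torchaudio._extension, "_FFMPEG_INITIALIZED", False):
        return
    try:
        # Force initialization so the missing dynamic library name surfaces.
        torchaudio._extension._init_ffmpeg()
    except Exception as err:
        raise RuntimeError(
            "torchaudio>=0.12.0 decodes MP3 through FFMPEG; install the "
            "ffmpeg 4 shared libraries (libavcodec, libavformat, ...) and "
            f"retry. Initialization error: {err}"
        ) from err
```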
Theivaprakasham\/wildreceipt","user":{"login":"NitishkKarra","id":100361173,"node_id":"U_kgDOBftj1Q","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/100361173?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NitishkKarra","html_url":"https:\/\/github.com\/NitishkKarra","followers_url":"https:\/\/api.github.com\/users\/NitishkKarra\/followers","following_url":"https:\/\/api.github.com\/users\/NitishkKarra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NitishkKarra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NitishkKarra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NitishkKarra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NitishkKarra\/orgs","repos_url":"https:\/\/api.github.com\/users\/NitishkKarra\/repos","events_url":"https:\/\/api.github.com\/users\/NitishkKarra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NitishkKarra\/received_events","type":"User","site_admin":false},"labels":[{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @NitishkKarra.\r\n\r\nThe root source of the issue is that streaming mode is 
not supported out-of-the-box for that dataset, because it contains a TAR file.\r\n\r\nWe have opened a discussion in the corresponding Hub dataset page, pointing out this issue: https:\/\/huggingface.co\/datasets\/Theivaprakasham\/wildreceipt\/discussions\/1\r\n\r\nI'm closing this issue here, so this discussion is transferred there instead."],"created_at":1659347177000,"updated_at":1659349829000,"closed_at":1659349829000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4775\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4774","id":1323375844,"node_id":"I_kwDODunzps5O4Rzk","number":4774,"title":"Training hangs at the end of epoch, with set_transform\/with_transform+multiple workers","user":{"login":"memray","id":4197249,"node_id":"MDQ6VXNlcjQxOTcyNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4197249?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/memray","html_url":"https:\/\/github.com\/memray","followers_url":"https:\/\/api.github.com\/users\/memray\/followers","following_url":"https:\/\/api.github.com\/users\/memray\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/memray\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/memray\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/memray\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/memray\/orgs","repos_url":"https:\/\/api.github.com\/users\/memray\/repos","events_url":"https:\/\/api.github.com\/users\/memray\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/memray\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1659249148000,"updated_at":1659249403000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI use load_dataset() (I tried with [wiki](https:\/\/huggingface.co\/datasets\/wikipedia) and my own json data) and use set_transform\/with_transform for preprocessing. But it hangs at the end of the 1st epoch if dataloader_num_workers>=1. No problem with single worker. 
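For reference, the usual way to make a TAR-backed script streamable is to iterate over the archive instead of extracting it. A minimal `_generate_examples` sketch using `dl_manager.iter_archive` (field names are illustrative and not taken from the wildreceipt script):

```python
def _generate_examples(self, archive):
    # `archive` is dl_manager.iter_archive(tar_path), passed in from
    # _split_generators; it yields (path_inside_tar, file_object) pairs
    # sequentially, which works in both regular and streaming mode.
    for key, (path, f) in enumerate(archive):
        if path.endswith(".json"):
            yield key, {"file": path, "data": f.read().decode("utf-8")}
```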
\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ntrain_dataset = datasets.load_dataset(\"wikipedia\", \"20220301.en\",\r\n split='train', \r\n cache_dir=model_args.cache_dir,\r\n streaming=False)\r\ntrain_dataset.set_transform(psg_parse_fn)\r\ntrain_dataloader = DataLoader(\r\n train_dataset,\r\n batch_size=args.train_batch_size,\r\n sampler=DistributedSampler(train_dataset),\r\n collate_fn=data_collator,\r\n drop_last=args.dataloader_drop_last,\r\n num_workers=args.dataloader_num_workers,\r\n )\r\n```\r\n\r\n## Expected results\r\n\r\n\r\n## Actual results\r\nIt simply hangs. The ending step is num_example\/batch_size (one epoch).\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.1.dev0\r\n- Platform: Linux-5.4.170+-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4774\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4773","id":1322796721,"node_id":"PR_kwDODunzps48WNV3","number":4773,"title":"Document loading from relative path","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for the feedback!\r\n\r\nI agree that adding it to `load_hub.mdx` is probably a bit too specific, especially for beginners reading the tutorials. 
Since this clarification is closely related to loading from the Hub (the only difference being the presence\/absence of a loading script), I think it makes the most sense to keep it somewhere in `loading.mdx`. What do you think about adding a Warning in Loading >>> Hugging Face Hub that explains the difference between relative\/absolute paths when there is a script?","What about updating the section about \"manual download\" ? I think it goes there no ?\r\n\r\nhttps:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/loading#manual-download","Updated the manual download section :)","Thanks ! Pinging @albertvillanova to review this change, and then I think we're good to merge"],"created_at":1659137541000,"updated_at":1661452605000,"closed_at":1661452463000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4773\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4773","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4773","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4773.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4773.patch","merged_at":1661452463000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4772","id":1322693123,"node_id":"I_kwDODunzps5O1rID","number":4772,"title":"AssertionError when using label_cols in to_tf_dataset ","user":{"login":"lehrig","id":9555494,"node_id":"MDQ6VXNlcjk1NTU0OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9555494?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lehrig","html_url":"https:\/\/github.com\/lehrig","followers_url":"https:\/\/api.github.com\/users\/lehrig\/followers","following_url":"https:\/\/api.github.com\/users\/lehrig\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lehrig\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lehrig\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lehrig\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lehrig\/orgs","repos_url":"https:\/\/api.github.com\/users\/lehrig\/repos","events_url":"https:\/\/api.github.com\/users\/lehrig\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lehrig\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @Rocketknight1 ","Hi @lehrig, this is caused by the data collator renaming \"label\" to \"labels\". If you set `label_cols=[\"labels\"]` in the call it will work correctly. However, I agree that the cause of the bug is not obvious, so I'll see if I can make a PR to clarify things when the collator renames columns.","Thanks - and wow, that appears like a strange side-effect of the data collator. Is that really needed?\r\n\r\nWhy not make it more explicit? For example, extend `DefaultDataCollator` with an optional property `label_col_name` to be used as label column; only when it is not provided default to `labels` (and document that this happens) for backwards-compatibility? ","Haha, I honestly have no idea why our data collators rename `\"label\"` (the standard label column name in our datasets) to `\"labels\"` (the standard label column name input to our models). It's been a pain point when I design TF data pipelines, though, because I don't want to hardcode things like that - especially in `datasets`, because the renaming is something that happens purely at the `transformers` end. I don't think I could make the change in the data collators themselves at this point, because it would break backward compatibility for everything in PyTorch as well as TF.\r\n\r\nIn the most recent version of `transformers` we added a [prepare_tf_dataset](https:\/\/huggingface.co\/docs\/transformers\/main_classes\/model#transformers.TFPreTrainedModel.prepare_tf_dataset) method to our models which takes care of these details for you, and even chooses appropriate columns and labels for the model you're using. In future we might make that the officially recommended way to convert HF datasets to `tf.data.Dataset`.","Interesting, that'd be great especially for clarity. https:\/\/huggingface.co\/docs\/datasets\/use_with_tensorflow#data-loading already improved clarity, yet, all those options will still confuse people. Looking forward to those advances in the hope there'll be only 1 way in the future ;)\r\n\r\nAnyways, I am happy for the time being with the work-around you provided. 
Thank you!"],"created_at":1659130332000,"updated_at":1662981886000,"closed_at":1662981886000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nAn incorrect `AssertionError` is raised when using `label_cols` in `to_tf_dataset` and the label's key name is `label`.\r\n\r\nThe assertion is in this line:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2.4.0\/src\/datasets\/arrow_dataset.py#L475\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import DefaultDataCollator\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\ntf_dataset = dataset.to_tf_dataset(\r\n columns=[\"sentence1\", \"sentence2\", \"idx\"],\r\n label_cols=[\"label\"],\r\n batch_size=16,\r\n collate_fn=DefaultDataCollator(return_tensors=\"tf\"),\r\n)\r\n```\r\n\r\n## Expected results\r\nNo assertion error.\r\n\r\n## Actual results\r\n```\r\nAssertionError: in user code:\r\n\r\n File \"\/opt\/conda\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 475, in split_features_and_labels *\r\n assert set(features.keys()).union(labels.keys()) == set(input_batch.keys())\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4772\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4771","id":1322600725,"node_id":"PR_kwDODunzps48VjWx","number":4771,"title":"Remove dummy data generation 
docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659122446000,"updated_at":1659485041000,"closed_at":1659484229000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo.\r\n\r\nClose #4744","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4771\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4771","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4771","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4771.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4771.patch","merged_at":1659484229000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4770","id":1322147855,"node_id":"PR_kwDODunzps48UEBT","number":4770,"title":"fix 
typo","user":{"login":"xwwwwww","id":48146603,"node_id":"MDQ6VXNlcjQ4MTQ2NjAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48146603?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xwwwwww","html_url":"https:\/\/github.com\/xwwwwww","followers_url":"https:\/\/api.github.com\/users\/xwwwwww\/followers","following_url":"https:\/\/api.github.com\/users\/xwwwwww\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xwwwwww\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xwwwwww\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xwwwwww\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xwwwwww\/orgs","repos_url":"https:\/\/api.github.com\/users\/xwwwwww\/repos","events_url":"https:\/\/api.github.com\/users\/xwwwwww\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xwwwwww\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["good catch thanks ! Can you check if the same typo is also present in `add_elasticsearch_index` ? It has a very similar signature","> good catch thanks ! Can you check if the same typo is also present in `add_elasticsearch_index` ? It has a very similar signature\r\n\r\nfixed"],"created_at":1659095172000,"updated_at":1659110527000,"closed_at":1659110527000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"By defaul -> By default","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4770\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4770","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4770","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4770.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4770.patch","merged_at":1659110527000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4769","id":1322121554,"node_id":"I_kwDODunzps5OzflS","number":4769,"title":"Fail to process SQuADv1.1 datasets with max_seq_length=128, 
doc_stride=96.","user":{"login":"zhuango","id":5491519,"node_id":"MDQ6VXNlcjU0OTE1MTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5491519?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhuango","html_url":"https:\/\/github.com\/zhuango","followers_url":"https:\/\/api.github.com\/users\/zhuango\/followers","following_url":"https:\/\/api.github.com\/users\/zhuango\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhuango\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhuango\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhuango\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhuango\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhuango\/repos","events_url":"https:\/\/api.github.com\/users\/zhuango\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhuango\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1659093504000,"updated_at":1659093504000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\ndatasets fail to process SQuADv1.1 with max_seq_length=128, doc_stride=96 when calling datasets[\"train\"].train_dataset.map().\r\n\r\n## Steps to reproduce the bug\r\n\r\nI used huggingface[ TF2 question-answering examples](https:\/\/github.com\/huggingface\/transformers\/tree\/main\/examples\/tensorflow\/question-answering). And my scripts are as follows:\r\n\r\n```\r\npython run_qa.py \\\r\n --model_name_or_path $BERT_DIR \\\r\n --dataset_name $SQUAD_DIR \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 128 \\\r\n --doc_stride 96 \\\r\n --output_dir $OUTPUT \\\r\n --save_steps 10000 \\\r\n --overwrite_cache \\\r\n --overwrite_output_dir \\\r\n\r\n```\r\n\r\n## Expected results\r\nNormally process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96.\r\n\r\n## Actual results\r\n```\r\nINFO:__main__:Padding all batches to max length because argument was set or we're on TPU.\r\nWARNING:datasets.fingerprint:Parameter 'function'=.prepare_train_features at 0x7f15bc2d07a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. 
Subsequent hashing failures won't be showed.\r\n 0%| | 0\/88 [00:00' panicked at 'assertion failed: stride < max_len', \/__w\/tokenizers\/tokenizers\/tokenizers\/src\/tokenizer\/encoding.rs:311:9\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\r\n 0%| | 0\/88 [00:00\r\n main()\r\n File \"run_qa.py\", line 485, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"\/anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2394, in map\r\n desc=desc,\r\n File \"\/anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 551, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 518, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 458, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2768, in _map_single\r\n offset=offset,\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2644, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 2336, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"run_qa.py\", line 410, in prepare_train_features\r\n padding=padding,\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py\", line 2512, in __call__\r\n **kwargs,\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py\", line 2703, in batch_encode_plus\r\n **kwargs,\r\n File \"anaconda3\/envs\/py37\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_fast.py\", line 429, in _batch_encode_plus\r\n is_pretokenized=is_split_into_words,\r\npyo3_runtime.PanicException: assertion failed: stride < max_len\r\nTraceback (most recent call last):\r\n File \".\/data\/SQuADv1.1\/evaluate-v1.1.py\", line 92, in \r\n with open(args.prediction_file) as prediction_file:\r\nFileNotFoundError: [Errno 2] No such file or directory: '.\/output\/bert_base_squadv1.1_tf2\/eval_predictions.json'\r\n\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Ubuntu, pytorch=1.11.0, tensorflow-gpu=2.9.1\r\n- Python version: 2.7\r\n- PyArrow version: 8.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4769\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
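A hedged note on the failure mode above: the `stride < max_len` assertion lives in the `tokenizers` overflow logic, where `max_len` is the window left for the overflowing (context) sequence, i.e. roughly `max_seq_length` minus the tokenized question and the special tokens. With `max_seq_length=128`, a long question can shrink that window to 96 tokens or fewer, so `doc_stride=96` panics, while a smaller stride does not. A minimal sketch of the same tokenizer call with a safer setting (model name and stride value are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

question = "What is the capital of France?"
context = "France is a country in Western Europe. " * 40  # force overflow

features = tokenizer(
    question,
    context,
    max_length=128,
    truncation="only_second",      # truncate the context, keep the question
    stride=32,                     # must stay below the remaining context window
    return_overflowing_tokens=True,
    padding="max_length",
)
print(len(features["input_ids"]))  # number of overlapping context windows
```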
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4768","id":1321913645,"node_id":"PR_kwDODunzps48TRUH","number":4768,"title":"Unpin rouge_score test dependency","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659082660000,"updated_at":1659112948000,"closed_at":1659112157000,"author_association":"MEMBER","active_lock_reason":null,"body":"Once `rouge-score` has made the 0.1.2 release to fix their issue https:\/\/github.com\/google-research\/google-research\/issues\/1212, we can unpin it.\r\n\r\nRelated to:\r\n- #4735 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4768\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4768","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4768","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4768.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4768.patch","merged_at":1659112157000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4767","id":1321843538,"node_id":"PR_kwDODunzps48TCpI","number":4767,"title":"Add 2.4.0 version added to 
docstrings","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659078116000,"updated_at":1659093409000,"closed_at":1659092638000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4767\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4767","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4767","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4767.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4767.patch","merged_at":1659092638000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4766","id":1321809380,"node_id":"I_kwDODunzps5OyTXk","number":4766,"title":"Dataset Viewer issue for 
openclimatefix\/goes-mrms","user":{"login":"cheaterHy","id":101324688,"node_id":"U_kgDOBgoXkA","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/101324688?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cheaterHy","html_url":"https:\/\/github.com\/cheaterHy","followers_url":"https:\/\/api.github.com\/users\/cheaterHy\/followers","following_url":"https:\/\/api.github.com\/users\/cheaterHy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cheaterHy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cheaterHy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cheaterHy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cheaterHy\/orgs","repos_url":"https:\/\/api.github.com\/users\/cheaterHy\/repos","events_url":"https:\/\/api.github.com\/users\/cheaterHy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cheaterHy\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @cheaterHy.\r\n\r\nThe cause of this issue is a misalignment between the names of the repo (`goes-mrms`, with hyphen) and its Python loading scrip file (`goes_mrms.py`, with underscore).\r\n\r\nI've opened an Issue discussion in their repo: 
https:\/\/huggingface.co\/datasets\/openclimatefix\/goes-mrms\/discussions\/1"],"created_at":1659075434000,"updated_at":1659084238000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4766\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4765","id":1321787428,"node_id":"PR_kwDODunzps48S2rM","number":4765,"title":"Fix version in map_nested docstring","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659073472000,"updated_at":1659095485000,"closed_at":1659094716000,"author_association":"MEMBER","active_lock_reason":null,"body":"After the latest release, the `map_nested` docstring needs to be updated with the right version for versionchanged and 
versionadded.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4765\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4765","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4765","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4765.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4765.patch","merged_at":1659094716000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4764","id":1321295961,"node_id":"PR_kwDODunzps48RMLu","number":4764,"title":"Update CI badge","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659031460000,"updated_at":1659094597000,"closed_at":1659093831000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Replace the old CircleCI badge with a new one for GH Actions.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4764\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4764","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4764","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4764.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4764.patch","merged_at":1659093831000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4763","id":1321295876,"node_id":"PR_kwDODunzps48RMKi","number":4763,"title":"More rigorous shape inference in to_tf_dataset","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1659031455000,"updated_at":1662664674000,"closed_at":1662664541000,"author_association":"MEMBER","active_lock_reason":null,"body":"`tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause training to fail for dimensions that are needed to determine the shape of weight tensors!\r\n\r\nThe compromise I used here was to sample several batches from the underlying dataset and apply the `collate_fn` to them, and then to see which dimensions were \"empirically variable\". There's an obvious problem here, though - if you sample 10 batches and they all have the same shape on a certain dimension, there's still a small chance that the 11th batch will be different, and Keras will throw an error if a dataset tries to emit a tensor whose shape doesn't match the spec.\r\n\r\nI encountered this bug in practice once or twice for datasets that were mostly-but-not-totally constant on a given dimension, and I still don't have a perfect solution, but this PR should greatly reduce the risk. 
It samples many more batches, and also samples very small batches (size 2) - this increases the variability, making it more likely that a few outlier samples will be detected.\r\n\r\nIdeally, of course, we'd determine the full output shape analytically, but that's surprisingly tricky when the `collate_fn` can be any arbitrary Python code!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4763\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4763","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4763","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4763.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4763.patch","merged_at":1662664541000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4762","id":1321261733,"node_id":"PR_kwDODunzps48RE56","number":4762,"title":"Improve features resolution in streaming","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Just took your comment into account @mariosasko , let me know if it's good for you now :)"],"created_at":1659029291000,"updated_at":1662743859000,"closed_at":1662743730000,"author_association":"MEMBER","active_lock_reason":null,"body":"`IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. 
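To illustrate the inconsistency being fixed here (the repository id is hypothetical, with data files storing columns in the order id, text, label):

```python
from datasets import load_dataset

ds = load_dataset("user/repo", split="train")                   # non-streaming
ids = load_dataset("user/repo", split="train", streaming=True)  # streaming

print(list(ds.features))             # ['id', 'text', 'label'] -> data order
print(list(next(iter(ids)).keys()))  # before this fix: ['id', 'label', 'text'] (alphabetical)
```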
It was causing some inconsistencies in the dataset viewer as well.\r\n\r\nI also fixed `interleave_datasets` that was not filling missing columns with None, because it was not using the columns from `IterableDataset._resolve_features`\r\n\r\ncc @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":1,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4762\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4762","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4762","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4762.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4762.patch","merged_at":1662743730000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4761","id":1321068411,"node_id":"I_kwDODunzps5Oved7","number":4761,"title":"parallel searching in multi-gpu setting using faiss","user":{"login":"xwwwwww","id":48146603,"node_id":"MDQ6VXNlcjQ4MTQ2NjAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48146603?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xwwwwww","html_url":"https:\/\/github.com\/xwwwwww","followers_url":"https:\/\/api.github.com\/users\/xwwwwww\/followers","following_url":"https:\/\/api.github.com\/users\/xwwwwww\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xwwwwww\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xwwwwww\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xwwwwww\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xwwwwww\/orgs","repos_url":"https:\/\/api.github.com\/users\/xwwwwww\/repos","events_url":"https:\/\/api.github.com\/users\/xwwwwww\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xwwwwww\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["And I don't see any speed up when increasing the number of GPUs while calling `get_nearest_examples_batch`.","Hi ! 
Yes search_batch uses FAISS search which happens in parallel across the GPUs\r\n\r\n> And I don't see any speed up when increasing the number of GPUs while calling get_nearest_examples_batch.\r\n\r\nThat's unexpected, can you share the code you're running?","here is the code snippet\r\n\r\n```python\r\n\r\n# add faiss index\r\nsource_dataset = load_dataset(source_path)\r\nqueries = load_dataset(query_path)\r\ngpu = [0,1,2,3]\r\nsource_dataset.add_faiss_index(\r\n \"embedding\",\r\n device=gpu,\r\n )\r\n\r\n\r\n# batch query\r\nbatch_size = 32\r\nfor i in tqdm(range(0, len(queries), batch_size)):\r\n if i + batch_size >= len(queries):\r\n batched_queries = queries[i:]\r\n else:\r\n batched_queries = queries[i:i+batch_size]\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n scores, candidates = source_dataset.get_nearest_examples_batch(\r\n \"embedding\",\r\n batched_query_embeddings,\r\n k=5\r\n )\r\n```","My version of datasets is `2.4.1.dev0`.","The code looks all good to me, do you see all the GPUs being utilized? What version of faiss are you using?","I can see the memory usage of all the GPUs.\r\nMy version of `faiss-gpu` is `1.7.2`","It looks all good to me then ^^ though you said you didn't experience speed improvements by adding more GPUs? What size is your source dataset and what time differences did you experience?","query set: 1e6\r\nsource dataset: 1e6\r\nembedding size: 768\r\nindex: Flat\r\ntopk: 20\r\nGPU: V100\r\n\r\nThe time taken to traverse the query set once is about 1.5h, which is almost not influenced by the value of query batch size or the number of GPUs according to my experiments.","Hmmm the number of GPUs should divide the time, something is going wrong. Can you check that adding more GPUs divides the memory used per GPU? Maybe it would be worth looking at similar issues in the FAISS repository or creating a new issue over there to understand what's going on","> Can you check that adding more GPUs divides the memory used per GPU\r\n\r\nThe memory used per GPU is unchanged while adding more GPUs. Is this unexpected?\r\n\r\nI used to think that every GPU loads all the source vectors and the data parallelism is at the query level. \ud83d\ude06 ","> I used to think that every GPU loads all the source vectors and the data parallelism is at the query level. \ud83d\ude06\r\n\r\nOh indeed that's possible, I wasn't sure. Anyway you can check that calling get_nearest_examples_batch simply calls search under the hood: \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/f90f71fbbb33889fe75a3ffc101cdf16a88a3453\/src\/datasets\/search.py#L375","Here is a runnable script. 
\r\nMulti-GPU searching still does not work in my experiments.\r\n\r\n\r\n```python\r\nimport os\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport datasets\r\nfrom datasets import Dataset\r\n\r\nclass DPRSelector:\r\n\r\n def __init__(self, source, target, index_name, gpu=None):\r\n self.source = source\r\n self.target = target\r\n self.index_name = index_name\r\n\r\n cache_path = 'embedding.faiss'\r\n\r\n if not os.path.exists(cache_path):\r\n self.source.add_faiss_index(\r\n column=\"embedding\",\r\n index_name=index_name,\r\n device=gpu,\r\n )\r\n self.source.save_faiss_index(index_name, cache_path)\r\n else:\r\n self.source.load_faiss_index(\r\n index_name,\r\n cache_path,\r\n device=gpu\r\n )\r\n print('index builded!')\r\n\r\n def build_dataset(self, top_k, batch_size):\r\n print('start search')\r\n\r\n for i in tqdm(range(0, len(self.target), batch_size)):\r\n if i + batch_size >= len(self.target):\r\n batched_queries = self.target[i:]\r\n else:\r\n batched_queries = self.target[i:i+batch_size]\r\n\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n search_res = self.source.get_nearest_examples_batch(\r\n self.index_name,\r\n batched_query_embeddings,\r\n k=top_k\r\n )\r\n \r\n print('finish search')\r\n\r\n\r\ndef get_pseudo_dataset():\r\n pseudo_dict = {\"embedding\": np.zeros((1000000, 768), dtype=np.float32)}\r\n print('generate pseudo data')\r\n\r\n dataset = Dataset.from_dict(pseudo_dict)\r\n def list_to_array(data):\r\n return {\"embedding\": [np.array(vector, dtype=np.float32) for vector in data[\"embedding\"]]} \r\n dataset.set_transform(list_to_array, columns='embedding', output_all_columns=True)\r\n\r\n print('build dataset')\r\n return dataset\r\n\r\n\r\n\r\nif __name__==\"__main__\":\r\n\r\n np.random.seed(42)\r\n\r\n\r\n source_dataset = get_pseudo_dataset()\r\n target_dataset = get_pseudo_dataset()\r\n\r\n gpu = [0,1,2,3,4,5,6,7]\r\n selector = DPRSelector(source_dataset, target_dataset, \"embedding\", gpu=gpu)\r\n\r\n selector.build_dataset(top_k=20, batch_size=32)\r\n```","@lhoestq Hi, could you please test the code above if you have time? \ud83d\ude04 ","Maybe @albertvillanova you can take a look ? I won't be available in the following days","@albertvillanova Hi, can you help with this issue?","Hi @xwwwwww I'm investigating it, but I'm not an expert in Faiss. In principle, it is weird that your code does not work properly because it seems right...","Have you tried passing `gpu=-1` and check if there is a speedup?","> Have you tried passing `gpu=-1` and check if there is a speedup?\r\n\r\nyes, there is a speed up using GPU compared with CPU. ","When passing `device=-1`, ALL existing GPUs are used (multi GPU): this is the maximum speedup you can get. 
To know the number of total GPUs:\r\n```\r\nimport faiss\r\n\r\nngpus = faiss.get_num_gpus()\r\nprint(ngpus)\r\n```\r\n\r\nWhen passing a list of integers to `device`, then only that number of GPUs are used (multi GPU as well)\r\n- the speedup should be proportional (more or less) to the ratio of the number of elements passed to `device` over `ngpus`\r\n- if this is not the case, then there is an issue in the implementation of this use case (however, I have reviewed the code and in principle I can't find any evident bug)\r\n\r\nWhen passing a positive integer to `device`, then only a single GPU is used.\r\n- this time should be more or less proportional to the time when passing `device=-1` over `ngpus`","Thanks for your help!\r\nHave you run the code and replicated the same experimental results (i.e., no speedup while increasing the number of GPUs)?","@albertvillanova @lhoestq Sorry for the bother, is there any progress on this issue? \ud83d\ude03 ","I can confirm `add_faiss_index` calls `index = faiss.index_cpu_to_gpus_list(index, gpus=list(device))`.\r\n\r\nCould this be an issue with your environment ? Could you try running with 1 and 8 GPUs with a code similar to[ this one from the FAISS examples](https:\/\/github.com\/facebookresearch\/faiss\/blob\/main\/tutorial\/python\/5-Multiple-GPUs.py) but using `gpu_index = faiss.index_cpu_to_gpus_list(cpu_index, gpus=list(device))`, and see if the speed changes ?","Hi, I test the FAISS example and the speed indeed changes. I set `nb=1000000`, `nq=1000000` and `d=64`\r\n\r\n| num GPUS | time cost |\r\n| -------- | --------- |\r\n| 1 | 28.53 |\r\n| 5 | 7.16 |\r\n\r\n\r\n\r\n","Ok the benchmark is great, not sure why it doesn't speed up the index in your case though. You can try running the benchmark with the same settings as your actual dataset\r\n```\r\nquery set: 1e6\r\nsource dataset: 1e6\r\nembedding size: 768\r\nindex: Flat\r\ntopk: 20\r\nGPU: V100\r\n```\r\n\r\nNote that you can still pass a FAISS index you built yourself to a dataset using https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/package_reference\/main_classes#datasets.Dataset.add_faiss_index_from_external_arrays","> Here is a runnable script. 
Multi-GPU searching still does not work in my experiments.\r\n> \r\n> ```python\r\n> import os\r\n> from tqdm import tqdm\r\n> import numpy as np\r\n> import datasets\r\n> from datasets import Dataset\r\n> \r\n> class DPRSelector:\r\n> \r\n> def __init__(self, source, target, index_name, gpu=None):\r\n> self.source = source\r\n> self.target = target\r\n> self.index_name = index_name\r\n> \r\n> cache_path = 'embedding.faiss'\r\n> \r\n> if not os.path.exists(cache_path):\r\n> self.source.add_faiss_index(\r\n> column=\"embedding\",\r\n> index_name=index_name,\r\n> device=gpu,\r\n> )\r\n> self.source.save_faiss_index(index_name, cache_path)\r\n> else:\r\n> self.source.load_faiss_index(\r\n> index_name,\r\n> cache_path,\r\n> device=gpu\r\n> )\r\n> print('index builded!')\r\n> \r\n> def build_dataset(self, top_k, batch_size):\r\n> print('start search')\r\n> \r\n> for i in tqdm(range(0, len(self.target), batch_size)):\r\n> if i + batch_size >= len(self.target):\r\n> batched_queries = self.target[i:]\r\n> else:\r\n> batched_queries = self.target[i:i+batch_size]\r\n> \r\n> \r\n> batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n> search_res = self.source.get_nearest_examples_batch(\r\n> self.index_name,\r\n> batched_query_embeddings,\r\n> k=top_k\r\n> )\r\n> \r\n> print('finish search')\r\n> \r\n> \r\n> def get_pseudo_dataset():\r\n> pseudo_dict = {\"embedding\": np.zeros((1000000, 768), dtype=np.float32)}\r\n> print('generate pseudo data')\r\n> \r\n> dataset = Dataset.from_dict(pseudo_dict)\r\n> def list_to_array(data):\r\n> return {\"embedding\": [np.array(vector, dtype=np.float32) for vector in data[\"embedding\"]]} \r\n> dataset.set_transform(list_to_array, columns='embedding', output_all_columns=True)\r\n> \r\n> print('build dataset')\r\n> return dataset\r\n> \r\n> \r\n> \r\n> if __name__==\"__main__\":\r\n> \r\n> np.random.seed(42)\r\n> \r\n> \r\n> source_dataset = get_pseudo_dataset()\r\n> target_dataset = get_pseudo_dataset()\r\n> \r\n> gpu = [0,1,2,3,4,5,6,7]\r\n> selector = DPRSelector(source_dataset, target_dataset, \"embedding\", gpu=gpu)\r\n> \r\n> selector.build_dataset(top_k=20, batch_size=32)\r\n> ```\r\n\r\nBy the way, have you run this toy example and replicated my experiment results? I think it is a more direct way to figure this out :)"],"created_at":1659020223000,"updated_at":1661566129000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"While I notice that `add_faiss_index` has supported assigning multiple GPUs, I am still confused about how it works. 
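For reference, a minimal multi-GPU timing sketch along the lines of the FAISS tutorial linked in the comments above (sizes and `k` are illustrative; requires `faiss-gpu`):

```python
import time
import numpy as np
import faiss

d, nb, nq = 64, 100_000, 10_000
xb = np.random.random((nb, d)).astype("float32")  # source vectors
xq = np.random.random((nq, d)).astype("float32")  # query vectors

cpu_index = faiss.IndexFlatL2(d)
cpu_index.add(xb)

# Compare 1 GPU against all available GPUs.
for gpus in ([0], list(range(faiss.get_num_gpus()))):
    index = faiss.index_cpu_to_gpus_list(cpu_index, gpus=gpus)
    start = time.time()
    index.search(xq, 20)  # returns (distances, indices)
    print(f"{len(gpus)} GPU(s): {time.time() - start:.2f}s")
```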
\r\n\r\nDoes the `search_batch` function automatically parallelize the input queries to different GPUs? https:\/\/github.com\/huggingface\/datasets\/blob\/d76599bdd4d186b2e7c4f468b05766016055a0a5\/src\/datasets\/search.py#L360","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4761\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4760","id":1320878223,"node_id":"I_kwDODunzps5OuwCP","number":4760,"title":"Issue with offline mode","user":{"login":"SaulLu","id":55560583,"node_id":"MDQ6VXNlcjU1NTYwNTgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55560583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SaulLu","html_url":"https:\/\/github.com\/SaulLu","followers_url":"https:\/\/api.github.com\/users\/SaulLu\/followers","following_url":"https:\/\/api.github.com\/users\/SaulLu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SaulLu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SaulLu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SaulLu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SaulLu\/orgs","repos_url":"https:\/\/api.github.com\/users\/SaulLu\/repos","events_url":"https:\/\/api.github.com\/users\/SaulLu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SaulLu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @SaulLu, thanks for reporting.\r\n\r\nI think offline mode is not supported for datasets containing only data files (without any loading script). I'm having a look into this...","Thanks for your feedback! \r\n\r\nTo give you a little more info, if you don't set the offline mode flag, the script will load the cache. 
I first noticed this behavior with the `evaluate` library, and while trying to understand the downloading flow I realized that I had a similar error with datasets.","This is an issue we have to fix.","This is related to https:\/\/github.com\/huggingface\/datasets\/issues\/3547"],"created_at":1659012314000,"updated_at":1659024336000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI can't retrieve a cached dataset with offline mode enabled\r\n\r\n## Steps to reproduce the bug\r\n\r\nTo reproduce my issue, first, you'll need to run a script that will cache the dataset\r\n```python\r\nimport os\r\nos.environ[\"HF_DATASETS_OFFLINE\"] = \"0\"\r\n\r\nimport datasets\r\n\r\ndatasets.logging.set_verbosity_info()\r\nds_name = \"SaulLu\/toy_struc_dataset\"\r\nds = datasets.load_dataset(ds_name)\r\nprint(ds)\r\n```\r\nthen, you can try to reload it in offline mode:\r\n```python\r\nimport os\r\nos.environ[\"HF_DATASETS_OFFLINE\"] = \"1\"\r\n\r\nimport datasets\r\n\r\ndatasets.logging.set_verbosity_info()\r\nds_name = \"SaulLu\/toy_struc_dataset\"\r\nds = datasets.load_dataset(ds_name)\r\nprint(ds)\r\n```\r\n\r\n## Expected results\r\nI would have expected the 2nd snippet not to return any errors\r\n\r\n## Actual results\r\nThe 2nd snippet returns:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/lucile_huggingface_co\/sandbox\/evaluate\/test_cache_datasets.py\", line 8, in \r\n ds = datasets.load_dataset(ds_name)\r\n File \"\/home\/lucile_huggingface_co\/anaconda3\/envs\/evaluate-dev\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1723, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/lucile_huggingface_co\/anaconda3\/envs\/evaluate-dev\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1500, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/lucile_huggingface_co\/anaconda3\/envs\/evaluate-dev\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1241, in dataset_module_factory\r\n raise ConnectionError(f\"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}\") from None\r\nConnectionError: Couln't reach the Hugging Face Hub for dataset 'SaulLu\/toy_struc_dataset': Offline mode is enabled.\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\nMaybe I'm misunderstanding something in the use of the offline mode (see [doc](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/loading#offline)), is that the case?\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4760\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
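Until offline mode covers data-only Hub datasets, one possible workaround (an assumption on my part, not an official fix) is to keep an explicit on-disk copy, which `load_from_disk` reads without contacting the Hub:

```python
import datasets

# While online: materialize a local copy once.
ds = datasets.load_dataset("SaulLu/toy_struc_dataset")
ds.save_to_disk("toy_struc_dataset_local")

# Later, with HF_DATASETS_OFFLINE=1 (or no network at all):
ds = datasets.load_from_disk("toy_struc_dataset_local")
print(ds)
```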
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4759","id":1320783300,"node_id":"I_kwDODunzps5OuY3E","number":4759,"title":"Dataset Viewer issue for Toygar\/turkish-offensive-language-detection","user":{"login":"toygarr","id":44132720,"node_id":"MDQ6VXNlcjQ0MTMyNzIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44132720?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/toygarr","html_url":"https:\/\/github.com\/toygarr","followers_url":"https:\/\/api.github.com\/users\/toygarr\/followers","following_url":"https:\/\/api.github.com\/users\/toygarr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/toygarr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/toygarr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/toygarr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/toygarr\/orgs","repos_url":"https:\/\/api.github.com\/users\/toygarr\/repos","events_url":"https:\/\/api.github.com\/users\/toygarr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/toygarr\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I refreshed the dataset viewer manually, it's fixed now. Sorry for the inconvenience.\r\n\"Capture\r\n\r\n"],"created_at":1659007303000,"updated_at":1659014276000,"closed_at":1659014268000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/Toygar\/turkish-offensive-language-detection\r\n\r\n### Description\r\n\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The dataset does not exist.\r\n\r\nHi, I provided train.csv, test.csv and valid.csv files. However, viewer says dataset does not exist. 
\r\nDo I need to do anything else?\r\n\r\n### Owner\r\n\r\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4759\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4757","id":1320602532,"node_id":"I_kwDODunzps5Otsuk","number":4757,"title":"Document better when relative paths are transformed to URLs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to 
documentation"}],"state":"closed","locked":false,"assignee":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"assignees":[{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1658997987000,"updated_at":1661452464000,"closed_at":1661452464000,"author_association":"MEMBER","active_lock_reason":null,"body":"As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` of a dataset hosted on the Hub, the relative path is transformed to the corresponding URL of the Hub dataset.\r\n\r\nCurrently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize splits](https:\/\/huggingface.co\/docs\/datasets\/v2.4.0\/en\/dataset_script#download-data-files-and-organize-splits)\r\n> If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs.\r\n\r\nMaybe we should document better how relative paths are handled, not only when creating a dataset loading script, but also when passing to `load_dataset`:\r\n- `data_dir`\r\n- `data_files`\r\n\r\nCC: @stevhliu ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4757\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4755","id":1319687044,"node_id":"I_kwDODunzps5OqNOE","number":4755,"title":"Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size","user":{"login":"srobertjames","id":662612,"node_id":"MDQ6VXNlcjY2MjYxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/662612?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/srobertjames","html_url":"https:\/\/github.com\/srobertjames","followers_url":"https:\/\/api.github.com\/users\/srobertjames\/followers","following_url":"https:\/\/api.github.com\/users\/srobertjames\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/srobertjames\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/srobertjames\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/srobertjames\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/srobertjames\/orgs","repos_url":"https:\/\/api.github.com\/users\/srobertjames\/repos","events_url":"https:\/\/api.github.com\/users\/srobertjames\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/srobertjames\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset\/tinyroberta-squad2'\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(pretrained)\r\n\r\nquestions = ['Can you tell me why?', 'What time is it?']\r\ncontexts = ['This is context zero', 'Another paragraph goes here'] \r\n\r\ndef tok(questions, contexts):\r\n return tokenizer(text=questions,\r\n text_pair=contexts,\r\n truncation='only_second',\r\n return_overflowing_tokens=True,\r\n )\r\nprint(tok(questions, contexts)['overflow_to_sample_mapping'])\r\nassert tok(questions, contexts)['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=1)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # FAILS produces [0,0]\r\n```\r\n\r\nNote that even if the batch size would be larger, there will be instances where we will not have a lot of data, and end up using small batches. This can occur e.g. if `n_proc` causes batches to be underfill. I imagine it can also occur in other ways, e.g. 
the final leftover batch at the end.","A larger batch size does _not_ have this behavior:\r\n\r\n```\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=2)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n```"],"created_at":1658933651000,"updated_at":1658944648000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will be overflown into multiple token sequences.\r\n\r\nHowever, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.\r\n\r\n## Steps to reproduce the bug\r\n\r\n1. Make a dataset of 3 strings.\r\n2. Tokenize via Dataset.map with n_proc = 8\r\n3. Inspect the `overflow_to_sample_mapping` field\r\n\r\n\r\n## Expected results\r\n`[0, 1, 2]`\r\n\r\n## Actual results\r\n`[0, 0, 0]`\r\n\r\nNotes:\r\n\r\n1. I have not yet extracted a minimal example, but the above works reliably\r\n2. If the dataset is large, I've yet to determine if this bug still happens a. not at all b. always c. on the small, leftover batch at the end.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4755\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4754","id":1319681541,"node_id":"PR_kwDODunzps48L9p6","number":4754,"title":"Remove \"unkown\" language 
tags","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658933412000,"updated_at":1658934180000,"closed_at":1658933466000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/datasets\/pull\/4753 there was still a \"unknown\" langauge tag in `wikipedia` so the job at https:\/\/github.com\/huggingface\/datasets\/runs\/7542567336?check_suite_focus=true failed for wikipedia","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4754\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4754","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4754","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4754.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4754.patch","merged_at":1658933466000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4753","id":1319571745,"node_id":"PR_kwDODunzps48Ll8G","number":4753,"title":"Add `language_bcp47` 
tag","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658928676000,"updated_at":1658933403000,"closed_at":1658932676000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following (internal) https:\/\/github.com\/huggingface\/moon-landing\/pull\/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for iso 639 1-2-3 codes. In particular I made sure that all the tags in `languages` are not longer than 3 characters. I moved the rest to `language_bcp47` and fixed some of them.\r\n\r\nAfter this PR is merged I think we can simplify the language validation from the DatasetMetadata class (and keep it bare-bone just for the tagging app)\r\n\r\nPS: the CI is failing because of missing content in dataset cards that are unrelated to this PR","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4753\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4753","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4753","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4753.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4753.patch","merged_at":1658932676000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4752","id":1319464409,"node_id":"I_kwDODunzps5OpW3Z","number":4752,"title":"DatasetInfo issue when testing multiple configs: mixed 
task_templates","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache\/modules\/dataset_modules\/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`.","Ugh. Found the issue: apparently `datasets` was reusing the already existing `dataset_infos.json` that is inside `datasets\/datasets\/hebban-reviews`! Is this desired behavior?\r\n\r\nPerhaps when `--save_infos` and `--all_configs` are given, an existing `dataset_infos.json` file should first be deleted before continuing with the test? Because that would assume that the user wants to create a new infos file for all configs anyway.","Hi! I think this is a reasonable solution. Would you be interested in submitting a PR?"],"created_at":1658923494000,"updated_at":1659982850000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nWhen running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.\r\n\r\n## Steps to reproduce the bug\r\n\r\nIn summary, what I want to do is create three configs:\r\n- unfiltered: no classlabel, no tasks. Gets data from unfiltered.json.gz (I'd want this without splits, just one chunk of data, but that does not seem possible?)\r\n- filtered_sentiment: `review_sentiment` as ClassLabel, TextClassification task with `review_sentiment` as label. Gets train\/test split from respective json.gz files\r\n- filtered_rating: `review_rating0` as ClassLabel, TextClassification task with `review_rating0` as label. 
Gets train\/test split from respective json.gz files\r\n\r\nThis might be a bit tedious to reproduce, so I am sorry, but these are the steps:\r\n\r\n- Clone datasets -> `datasets\/` and install it\r\n- Clone `https:\/\/huggingface.co\/datasets\/BramVanroy\/hebban-reviews` into `datasets\/datasets` so that you have a new folder `datasets\/datasets\/hebban-reviews\/`.\r\n- Replace the HebbanReviews class with this new one:\r\n\r\n```python\r\nclass HebbanReviews(datasets.GeneratorBasedBuilder):\r\n \"\"\"The Hebban book reviews dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n HebbanReviewsConfig(\r\n name=\"unfiltered\",\r\n description=_HEBBAN_REVIEWS_UNFILTERED_DESCRIPTION,\r\n version=datasets.Version(_HEBBAN_VERSION)\r\n ),\r\n HebbanReviewsConfig(\r\n name=\"filtered_sentiment\",\r\n description=f\"This config has the negative, neutral, and positive sentiment scores as ClassLabel in the 'review_sentiment' column.\\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}\",\r\n version=datasets.Version(_HEBBAN_VERSION)\r\n ),\r\n HebbanReviewsConfig(\r\n name=\"filtered_rating\",\r\n description=f\"This config has the 5-class ratings as ClassLabel in the 'review_rating0' column (which is a variant of 'review_rating' that starts counting from 0 instead of 1).\\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}\",\r\n version=datasets.Version(_HEBBAN_VERSION)\r\n )\r\n ]\r\n\r\n DEFAULT_CONFIG_NAME = \"filtered_sentiment\"\r\n\r\n _URLS = {\r\n \"train\": \"train.jsonl.gz\",\r\n \"test\": \"test.jsonl.gz\",\r\n \"unfiltered\": \"unfiltered.jsonl.gz\",\r\n }\r\n\r\n def _info(self):\r\n features = {\r\n \"review_title\": datasets.Value(\"string\"),\r\n \"review_text\": datasets.Value(\"string\"),\r\n \"review_text_without_quotes\": datasets.Value(\"string\"),\r\n \"review_n_quotes\": datasets.Value(\"int32\"),\r\n \"review_n_tokens\": datasets.Value(\"int32\"),\r\n \"review_rating\": datasets.Value(\"int32\"),\r\n \"review_rating0\": datasets.Value(\"int32\"),\r\n \"review_author_url\": datasets.Value(\"string\"),\r\n \"review_author_type\": datasets.Value(\"string\"),\r\n \"review_n_likes\": datasets.Value(\"int32\"),\r\n \"review_n_comments\": datasets.Value(\"int32\"),\r\n \"review_url\": datasets.Value(\"string\"),\r\n \"review_published_date\": datasets.Value(\"string\"),\r\n \"review_crawl_date\": datasets.Value(\"string\"),\r\n \"lid\": datasets.Value(\"string\"),\r\n \"lid_probability\": datasets.Value(\"float32\"),\r\n \"review_sentiment\": datasets.features.ClassLabel(names=[\"negative\", \"neutral\", \"positive\"]),\r\n \"review_sentiment_label\": datasets.Value(\"string\"),\r\n \"book_id\": datasets.Value(\"int32\"),\r\n }\r\n\r\n if self.config.name == \"filtered_sentiment\":\r\n task_templates = [datasets.TextClassification(text_column=\"review_text_without_quotes\", label_column=\"review_sentiment\")]\r\n elif self.config.name == \"filtered_rating\":\r\n # For CrossEntropy, our classes need to start at index 0 -- not 1\r\n features[\"review_rating0\"] = datasets.features.ClassLabel(names=[\"1\", \"2\", \"3\", \"4\", \"5\"])\r\n features[\"review_sentiment\"] = datasets.Value(\"int32\")\r\n task_templates = [datasets.TextClassification(text_column=\"review_text_without_quotes\", label_column=\"review_rating0\")]\r\n elif self.config.name == \"unfiltered\": # no ClassLabels in unfiltered\r\n features[\"review_sentiment\"] = datasets.Value(\"int32\")\r\n task_templates = None\r\n else:\r\n raise ValueError(f\"Unsupported config {self.config.name}. 
Expected one of 'filtered_sentiment' (default),\"\r\n f\" 'filtered_rating', or 'unfiltered'\")\r\n print(\"AT INFO\", self.config.name, task_templates)\r\n return datasets.DatasetInfo(\r\n description=self.config.description,\r\n features=datasets.Features(features),\r\n homepage=\"https:\/\/huggingface.co\/datasets\/BramVanroy\/hebban-reviews\",\r\n citation=_HEBBAN_REVIEWS_CITATION,\r\n task_templates=task_templates,\r\n license=\"cc-by-4.0\"\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n if self.config.name.startswith(\"filtered\"):\r\n files = dl_manager.download_and_extract({\"train\": \"train.jsonl.gz\",\r\n \"test\": \"test.jsonl.gz\"})\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"data_file\": files[\"train\"]\r\n },\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TEST,\r\n gen_kwargs={\r\n \"data_file\": files[\"test\"]\r\n },\r\n ),\r\n ]\r\n elif self.config.name == \"unfiltered\":\r\n files = dl_manager.download_and_extract({\"train\": \"unfiltered.jsonl.gz\"})\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"data_file\": files[\"train\"]\r\n },\r\n ),\r\n ]\r\n else:\r\n raise ValueError(f\"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),\"\r\n f\" 'filtered_rating', or 'unfiltered'\")\r\n\r\n def _generate_examples(self, data_file):\r\n lines = Path(data_file).open(encoding=\"utf-8\").readlines()\r\n for line_idx, line in enumerate(lines):\r\n row = json.loads(line)\r\n yield line_idx, row\r\n```\r\n\r\n- finally, run `datasets-cli test .\/datasets\/hebban-reviews\/ --save_infos --all_configs` from within the topmost `datasets` directory\r\n\r\n## Expected results\r\nSucceeding tests for three different configs.\r\n\r\n## Actual results\r\n\r\nI printed out the values that are given to `DatasetInfo` for config name and task_templates, as you can see. There, as expected, I get `unfiltered None`. I also modified datasets\/info.py and added this line [at L.170](https:\/\/github.com\/huggingface\/datasets\/blob\/f5847a304aa1b38b3a3c54a8318b4df60f1299bc\/src\/datasets\/info.py#L170):\r\n\r\n```python\r\nprint(\"INTERNALLY AT INFO.PY\", self.config_name, self.task_templates)\r\n```\r\n\r\nto my surprise, here I get `unfiltered [TextClassification(task='text-classification', text_column='review_text_without_quotes', label_column='review_sentiment')]`. 
So one way or another, here I suddenly see that `unfiltered` now does have a task_template -- even though that is not what is written in the data loading script, as the first print statement correctly shows.\r\n\r\nI do not quite understand how, but it seems that the config name and task_templates get mixed up.\r\n\r\nThis ultimately leads to the following error, but this trace may not be very useful in itself:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\bramv\\.virtualenvs\\hebban-U6poXNQd\\Scripts\\datasets-cli-script.py\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\commands\\datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\commands\\test.py\", line 144, in run\r\n builder.as_dataset()\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\builder.py\", line 899, in as_dataset\r\n datasets = map_nested(\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\utils\\py_utils.py\", line 393, in map_nested\r\n mapped = [\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\utils\\py_utils.py\", line 394, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\utils\\py_utils.py\", line 330, in _single_map_nested\r\n return function(data_struct)\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\builder.py\", line 930, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\builder.py\", line 1006, in _as_dataset\r\n return Dataset(fingerprint=fingerprint, **dataset_kwargs)\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\arrow_dataset.py\", line 661, in __init__\r\n info = info.copy() if info is not None else DatasetInfo()\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\info.py\", line 286, in copy\r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\r\n File \"<string>\", line 20, in __init__\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\info.py\", line 176, in __post_init__\r\n self.task_templates = [\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\info.py\", line 177, in <listcomp>\r\n template.align_with_features(self.features) for template in (self.task_templates)\r\n File \"c:\\dev\\python\\hebban\\datasets\\src\\datasets\\tasks\\text_classification.py\", line 22, in align_with_features\r\n raise ValueError(f\"Column {self.label_column} is not a ClassLabel.\")\r\nValueError: Column review_sentiment is not a ClassLabel.\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.1.dev0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4752\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false}
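A minimal sketch of the workaround proposed in the comments on the issue above: delete the pre-existing `dataset_infos.json` before re-running the test command, so that `--save_infos --all_configs` regenerates it from scratch. The paths follow the reproduction steps (run from the topmost `datasets` directory) and are otherwise assumptions, not the library's actual fix.

```python
import subprocess
from pathlib import Path

# Hypothetical workaround from the discussion: remove the stale infos file
# that datasets-cli would otherwise reuse (path taken from the repro steps).
stale_infos = Path("datasets/hebban-reviews/dataset_infos.json")
if stale_infos.exists():
    stale_infos.unlink()

# Regenerate the infos for all three configs from scratch.
subprocess.run(
    ["datasets-cli", "test", "./datasets/hebban-reviews/", "--save_infos", "--all_configs"],
    check=True,
)
```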
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4751","id":1319440903,"node_id":"PR_kwDODunzps48LJ7U","number":4751,"title":"Added dataset information in clinic oos dataset card","user":{"login":"Arnav-Ladkat","id":84362194,"node_id":"MDQ6VXNlcjg0MzYyMTk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/84362194?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Arnav-Ladkat","html_url":"https:\/\/github.com\/Arnav-Ladkat","followers_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/followers","following_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/orgs","repos_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/repos","events_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Arnav-Ladkat\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658922268000,"updated_at":1659005601000,"closed_at":1659004837000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR aims to add relevant information like the Description, Language and citation information of the clinic oos dataset card.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4751\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4751","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4751","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4751.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4751.patch","merged_at":1659004837000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4750","id":1319333645,"node_id":"I_kwDODunzps5Oo28N","number":4750,"title":"Easily create loading script for benchmark comprising multiple huggingface 
datasets","user":{"login":"JoelNiklaus","id":3775944,"node_id":"MDQ6VXNlcjM3NzU5NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3775944?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JoelNiklaus","html_url":"https:\/\/github.com\/JoelNiklaus","followers_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/followers","following_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/orgs","repos_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/repos","events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I think the simplest is to copy paste the `_split_generators` code from the other datasets and do a bunch of if-else, as in the glue dataset: https:\/\/huggingface.co\/datasets\/glue\/blob\/main\/glue.py#L467","Ok, I see. Thank you"],"created_at":1658916818000,"updated_at":1658930287000,"closed_at":1658930287000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi,\r\n\r\nI would like to create a loading script for a benchmark comprising multiple huggingface datasets.\r\nThe function _split_generators needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a single interface to all the underlying datasets. \r\nI thought about downloading the files with the load_dataset function and then providing the link to the cached file. But this seems a bit inelegant to me. 
What approach would you propose to do this?\r\n\r\nPlease let me know if you have any questions.\r\n\r\nCheers,\r\nJoel","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4750\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4748","id":1318874913,"node_id":"PR_kwDODunzps48JTEb","number":4748,"title":"Add image classification processing guide","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658880671000,"updated_at":1658942901000,"closed_at":1658942172000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR follows up on #4710 to separate the object detection and image classification guides. 
It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4748\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4748","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4748","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4748.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4748.patch","merged_at":1658942172000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4747","id":1318586932,"node_id":"PR_kwDODunzps48IWKj","number":4747,"title":"Shard parquet in `download_and_prepare`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This is ready for review cc @mariosasko :) please let me know what you think !"],"created_at":1658858701000,"updated_at":1663249435000,"closed_at":1663249286000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/datasets\/pull\/4724 (needs to be merged first)\r\n\r\nIt's good practice to shard parquet files to enable parallelism with spark\/dask\/etc.\r\n\r\nI added the `max_shard_size` parameter to `download_and_prepare` (defaulting to 500MB for parquet, and None for arrow).\r\n\r\n```python\r\nfrom datasets import *\r\n\r\noutput_dir = \".\/output_dir\" # also supports \"s3:\/\/...\"\r\nbuilder = load_dataset_builder(\"squad\")\r\nbuilder.download_and_prepare(output_dir, file_format=\"parquet\", max_shard_size=\"5MB\")\r\n```\r\n\r\n### Implementation details\r\n\r\nThe examples are written to a parquet file until `ParquetWriter._num_bytes > max_shard_size`.
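A rough pyarrow sketch of this size-based rotation, for illustration only (this is not the PR's actual writer code; the function name and the size accounting here are simplified assumptions):

```python
import pyarrow as pa
import pyarrow.parquet as pq

MAX_SHARD_SIZE = 500 << 20  # 500MB, mirroring the default described here

def write_sharded(batches, schema, prefix):
    """Write record batches, rotating to a new shard once the threshold is passed."""
    shard_id, num_bytes, writer = 0, 0, None
    for batch in batches:
        if writer is None:
            writer = pq.ParquetWriter(f"{prefix}-{shard_id:05d}.parquet", schema)
        writer.write_table(pa.Table.from_batches([batch], schema=schema))
        num_bytes += batch.nbytes
        if num_bytes > MAX_SHARD_SIZE:  # threshold crossed: close and rotate
            writer.close()
            writer, num_bytes = None, 0
            shard_id += 1
    if writer is not None:
        writer.close()
        shard_id += 1
    # The shards can then be renamed to carry the -of-{num_shards:05d} suffix.
    return shard_id
```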
When this happens, a new writer is instantiated to start writing the next shard. At the end, all the shards are renamed to include the total number of shards in their names: `{builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet`\r\n\r\nI also added the `MAX_SHARD_SIZE` config variable (defaulting to 500MB)\r\n\r\nTODO:\r\n- [x] docstrings\r\n- [x] docs\r\n- [x] tests\r\n\r\ncc @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4747\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4747","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4747","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4747.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4747.patch","merged_at":1663249286000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4746","id":1318486599,"node_id":"I_kwDODunzps5OloJH","number":4746,"title":"Dataset Viewer issue for yanekyuk\/wikikey","user":{"login":"ai-ashok","id":91247690,"node_id":"MDQ6VXNlcjkxMjQ3Njkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/91247690?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ai-ashok","html_url":"https:\/\/github.com\/ai-ashok","followers_url":"https:\/\/api.github.com\/users\/ai-ashok\/followers","following_url":"https:\/\/api.github.com\/users\/ai-ashok\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ai-ashok\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ai-ashok\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ai-ashok\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ai-ashok\/orgs","repos_url":"https:\/\/api.github.com\/users\/ai-ashok\/repos","events_url":"https:\/\/api.github.com\/users\/ai-ashok\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ai-ashok\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["The dataset is empty, as far as I can tell: there are no files in the repository at https:\/\/huggingface.co\/datasets\/yanekyuk\/wikikey\/tree\/main\r\n\r\nMaybe the viewer can display a better message for empty datasets","OK. Closing as it's not an error. 
We will work on making the error message a lot clearer."],"created_at":1658852716000,"updated_at":1662624922000,"closed_at":1662624922000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4746\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4745","id":1318016655,"node_id":"I_kwDODunzps5Oj1aP","number":4745,"title":"Allow `list_datasets` to include private datasets","user":{"login":"ola13","id":1528523,"node_id":"MDQ6VXNlcjE1Mjg1MjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1528523?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ola13","html_url":"https:\/\/github.com\/ola13","followers_url":"https:\/\/api.github.com\/users\/ola13\/followers","following_url":"https:\/\/api.github.com\/users\/ola13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ola13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ola13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ola13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ola13\/orgs","repos_url":"https:\/\/api.github.com\/users\/ola13\/repos","events_url":"https:\/\/api.github.com\/users\/ola13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ola13\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening this issue :)\r\n\r\nIf it can help, I think you can already use `huggingface_hub` to achieve this:\r\n```python\r\n>>> from huggingface_hub import HfApi\r\n>>> [ds_info.id for ds_info in HfApi().list_datasets(use_auth_token=token) if ds_info.private]\r\n['bigscience\/xxxx', 'bigscience-catalogue-data\/xxxxxxx', ... ]\r\n```\r\n\r\n---------\r\n\r\nThough the latest versions of `huggingface_hub` that contain this feature are not available on python 3.6, so maybe we should first drop support for python 3.6 (see #4460) to update `list_datasets` in `datasets` as well (or we would have to copy\/paste some `huggingface_hub` code)","Great, thanks @lhoestq the workaround works! 
I think it would be intuitive to have the support directly in `datasets` but it makes sense to wait given that the workaround exists :)","I also think that going forward we should replace more and more implementations inside datasets with the corresponding ones from `huggingface_hub` (same as we're doing in `transformers`)"],"created_at":1658830568000,"updated_at":1658836765000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"I am working with a large collection of private datasets, and it would be convenient for me to be able to list them.\r\n\r\nI would envision extending the convention of passing the `use_auth_token` keyword argument to the `list_datasets` function, then calling:\r\n\r\n```\r\nlist_datasets(use_auth_token=\"my_token\")\r\n```\r\n\r\nwould return the list of all datasets I have permissions to view, including private ones. The only current alternative I see is to use the hub website to manually obtain the list of dataset names - this is in the context of BigScience, where the respective private spaces contain hundreds of datasets, so it is not very convenient to list them manually.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4745\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4744","id":1317822345,"node_id":"I_kwDODunzps5OjF-J","number":4744,"title":"Remove instructions to generate dummy data from our docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to
documentation"}],"state":"closed","locked":false,"assignee":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"assignees":[{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Note that for me personally, conceptually all the dummy data (even for \"canonical\" datasets) should be superseded by `datasets-server`, which performs some kind of CI\/CD of datasets (including the canonical ones)","I totally agree: next step should be rethinking if dummy data makes sense for canonical datasets (once we have datasets-server) and eventually remove it.\r\n\r\nBut for now, we could at least start by removing the indication to generate dummy data from our docs."],"created_at":1658820778000,"updated_at":1659484230000,"closed_at":1659484230000,"author_association":"MEMBER","active_lock_reason":null,"body":"In our docs, we indicate to generate the dummy data: https:\/\/huggingface.co\/docs\/datasets\/dataset_script#testing-data-and-checksum-metadata\r\n\r\nHowever:\r\n- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI\r\n- for datasets on the Hub:\r\n - they do not pass any CI test requiring dummy data\r\n - there are no instructions on how they can test their dataset locally using the dummy data\r\n - the generation of the dummy data assumes our GitHub directory structure:\r\n - the dummy data will be generated under `.\/datasets\/\/dummy` even if locally there is no `.\/datasets` directory (which is the usual case). 
See issue:\r\n - #4742 \r\n\r\nCC: @stevhliu ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4744\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4743","id":1317362561,"node_id":"PR_kwDODunzps48EUFs","number":4743,"title":"Update map docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658782775000,"updated_at":1658938924000,"closed_at":1658938204000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates the `map` docs for processing text to include `return_tensors=\"np\"` to make it run faster (see #4676).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4743\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4743","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4743","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4743.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4743.patch","merged_at":1658938204000},"is_pull_request":true} 
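For context, the pattern that doc update describes looks roughly like this (a sketch assuming a `transformers` fast tokenizer; the model and dataset names are only illustrative):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("imdb", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # return_tensors="np" hands NumPy arrays back to map(), which datasets
    # writes out faster than plain Python lists (the speed-up from #4676).
    return tokenizer(batch["text"], padding=True, truncation=True, return_tensors="np")

dataset = dataset.map(tokenize, batched=True)
```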
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4742","id":1317260663,"node_id":"I_kwDODunzps5Og813","number":4742,"title":"Dummy data nowhere to be found","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst of all, please note that you do not need the dummy data: this was the case when we were adding datasets to the `datasets` library (on this GitHub repo), so that we could test the correct loading of all datasets with our CI. However, this is no longer the case for datasets on the Hub.\r\n- We should definitely update our docs.\r\n\r\nSecond, the dummy data is generated locally:\r\n- in your case, the dummy data will be generated inside the directory: `.\/datasets\/hebban-reviews\/dummy`\r\n- please note the preceding `.\/datasets` directory: the reason for this is that the command to generate the dummy data was specifically created for our `datasets` library, and therefore assumes our directory structure: commands are run from the root directory of our GitHub repo, and datasets scripts are under `.\/datasets` \r\n\r\n\r\n ","I have opened an Issue to update the instructions on dummy data generation:\r\n- #4744"],"created_at":1658776722000,"updated_at":1658820827000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nTo finalize my dataset, I wanted to create dummy data as per the guide and I ran \r\n\r\n```shell\r\n datasets-cli dummy_data datasets\/hebban-reviews --auto_generate\r\n```\r\n\r\nwhere hebban-reviews is [this repo](https:\/\/huggingface.co\/datasets\/BramVanroy\/hebban-reviews). And even though the scripts runs and shows a message at the end that it succeeded, I cannot find the dummy data anywhere. 
Where is it?\r\n\r\n## Expected results\r\n\r\nTo see the dummy data in the datasets' folder or in the folder where I ran the command.\r\n\r\n## Actual results\r\n\r\nI see the following message but I cannot find the dummy data anywhere.\r\n\r\n```\r\nDummy data generation done and dummy data test succeeded for config 'filtered''.\r\nAutomatic dummy data generation succeeded for all configs of '.\\datasets\\hebban-reviews\\'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.4.1.dev0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4742\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4741","id":1316621272,"node_id":"PR_kwDODunzps48B2fl","number":4741,"title":"Fix to dict conversion of `DatasetInfo`\/`Features`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658745687000,"updated_at":1658753436000,"closed_at":1658752673000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix 
#4681","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4741\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4741","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4741","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4741.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4741.patch","merged_at":1658752673000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4740","id":1316478007,"node_id":"PR_kwDODunzps48BX5l","number":4740,"title":"Fix multiprocessing in map_nested ","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq as a workaround to preserve previous behavior, the parameter `multiprocessing_min_length=16` is passed from `download` to `map_nested`, so that multiprocessing is only used if at least 16 files to be downloaded.\r\n\r\nNote that there is a small breaking change (I think previously it was unintended behavior, so that I have fixed it):\r\n- Before (with default `num_proc=16`) if there were 16 files to be downloaded, multiprocessing was not used\r\n- Now (with default `num_proc=16`) if there are 16 files to be downloaded, multiprocessing is used","Thanks for the workaround !"],"created_at":1658738659000,"updated_at":1659005603000,"closed_at":1659004831000,"author_association":"MEMBER","active_lock_reason":null,"body":"As previously discussed:\r\n\r\nBefore, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.\r\n- Multiprocessing was not used e.g. 
when passing `num_proc=20` but having 19 files to download\r\n- As `DownloadManager` sets `num_proc=16` by default, multiprocessing was previously only used when `len(iterable)>16`\r\n\r\nNow, if `num_proc` is greater than or equal to `len(iterable)`, `num_proc` is set to `len(iterable)` and multiprocessing is used.\r\n- We pass the variable `parallel_min_length=16`, so that multiprocessing is only used if there are at least 16 files to be downloaded\r\n- ~As by default, `DownloadManager` sets `num_proc=16`, now multiprocessing is used when `len(iterable)>1` by default~\r\n\r\nSee discussion below.\r\n\r\n~After having had to fix some tests (87602ac), I am wondering:~\r\n- ~do we want to have multiprocessing by default?~\r\n - ~please note that `DownloadManager.download` sets `num_proc=16` by default~\r\n- ~or would it be better to ask the user to set it explicitly if they want multiprocessing (and default to `num_proc=1`)?~\r\n\r\nFix #4636.\r\n\r\nCC: @nateraw ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4740\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4740","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4740","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4740.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4740.patch","merged_at":1659004831000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4739","id":1316400915,"node_id":"PR_kwDODunzps48BHdE","number":4739,"title":"Deprecate metrics","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I mark this
as Draft because the deprecated version number needs to be updated after the latest release.","Perhaps now is the time to also update the `inspect_metric` from `evaluate` with the changes introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/4433 (cc @lvwerra) ","What do you think of including what changes users have to do to switch to `evaluate` in the warning message?\r\n(basically replace `datasets.load_metric` by `evaluate.load`)\r\n\r\nI think it can help users migrate to `evaluate` and silence the warnings"],"created_at":1658734555000,"updated_at":1659008667000,"closed_at":1659007936000,"author_association":"MEMBER","active_lock_reason":null,"body":"Deprecate metrics:\r\n- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning\r\n- test that deprecation warnings are issued\r\n- deprecate metrics in all docs\r\n- remove mentions of metrics in docs and README\r\n- deprecate internal functions\/classes\r\n\r\nMaybe we should also stop testing metrics?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4739\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4739","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4739","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4739.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4739.patch","merged_at":1659007936000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4738","id":1315222166,"node_id":"PR_kwDODunzps479hq4","number":4738,"title":"Use CI unit\/integration tests","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available
anymore as the PR was closed or merged._","I think this PR can be merged. Looking forward to seeing it in action.\r\n\r\nCC: @lhoestq "],"created_at":1658508480000,"updated_at":1658866762000,"closed_at":1658866025000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Implements separate unit\/integration tests\r\n- A failure in integration tests does not cancel the rest of the jobs\r\n - We should implement more robust integration tests: work in progress in a subsequent PR\r\n- For the moment, tests involving network requests are marked as integration: to be evolved","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4738\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4738","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4738","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4738.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4738.patch","merged_at":1658866025000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4737","id":1315011004,"node_id":"I_kwDODunzps5OYXm8","number":4737,"title":"Download error on scene_parse_150","user":{"login":"juliensimon","id":3436143,"node_id":"MDQ6VXNlcjM0MzYxNDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3436143?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/juliensimon","html_url":"https:\/\/github.com\/juliensimon","followers_url":"https:\/\/api.github.com\/users\/juliensimon\/followers","following_url":"https:\/\/api.github.com\/users\/juliensimon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/juliensimon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/juliensimon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/juliensimon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/juliensimon\/orgs","repos_url":"https:\/\/api.github.com\/users\/juliensimon\/repos","events_url":"https:\/\/api.github.com\/users\/juliensimon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/juliensimon\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The server with the data seems to be down. I've reported this issue (https:\/\/github.com\/CSAILVision\/sceneparsing\/issues\/34) in the dataset repo. 
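\r\n\r\nA quick way to re-check availability once the host is restored (a minimal sketch; the HEAD request avoids downloading the full archive):\r\n```python\r\nimport requests\r\n\r\nurl = \"http:\/\/data.csail.mit.edu\/places\/ADEchallenge\/ADEChallengeData2016.zip\"\r\nprint(requests.head(url).status_code)  # expect 200 once the server is back up\r\n```\r\n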
","The URL seems to work now, and therefore the script as well."],"created_at":1658496508000,"updated_at":1662046631000,"closed_at":1662046631000,"author_association":"NONE","active_lock_reason":null,"body":"```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"scene_parse_150\", \"scene_parsing\")\r\n\r\nFileNotFoundError: Couldn't find file at http:\/\/data.csail.mit.edu\/places\/ADEchallenge\/ADEChallengeData2016.zip\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4737\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4736","id":1314931996,"node_id":"I_kwDODunzps5OYEUc","number":4736,"title":"Dataset Viewer issue for deepklarity\/huggingface-spaces-dataset","user":{"login":"dk-crazydiv","id":47515542,"node_id":"MDQ6VXNlcjQ3NTE1NTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47515542?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dk-crazydiv","html_url":"https:\/\/github.com\/dk-crazydiv","followers_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/followers","following_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/orgs","repos_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/repos","events_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dk-crazydiv\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. You're right, workers were under-provisioned due to a manual error, and the job queue was full. It's fixed now."],"created_at":1658492058000,"updated_at":1658497598000,"closed_at":1658497598000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/deepklarity\/huggingface-spaces-dataset\/viewer\/deepklarity--huggingface-spaces-dataset\/train\n\n### Description\n\nHi Team, \r\nI'm getting the following error on a uploaded dataset. I'm getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is csv, so I'm not sure if it's supposed to take this much time or not. \r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split is being processed. Retry later.\r\n```\r\n\r\nIs there any explicit step to be taken to get the viewer to work? 
\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4736\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4735","id":1314501641,"node_id":"PR_kwDODunzps477CuP","number":4735,"title":"Pin rouge_score test dependency","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658474301000,"updated_at":1658476694000,"closed_at":1658475918000,"author_association":"MEMBER","active_lock_reason":null,"body":"Temporarily pin `rouge_score` (to avoid the latest version, 0.0.7) until the issue is fixed.\r\n\r\nFix #4734 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4735\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4735","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4735","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4735.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4735.patch","merged_at":1658475918000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4734","id":1314495382,"node_id":"I_kwDODunzps5OWZuW","number":4734,"title":"Package rouge-score cannot be imported","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["We have added a comment on an existing issue opened in their repo: https:\/\/github.com\/google-research\/google-research\/issues\/1212#issuecomment-1192267130\r\n- https:\/\/github.com\/google-research\/google-research\/issues\/1212"],"created_at":1658474105000,"updated_at":1658475919000,"closed_at":1658475918000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nAfter the today release of `rouge_score-0.0.7` it seems no longer importable. 
Our CI fails: https:\/\/github.com\/huggingface\/datasets\/runs\/7463218591?check_suite_focus=true\r\n```\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_builder_configs_bigbench\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_bigbench\r\nFAILED tests\/test_metric_common.py::LocalMetricTest::test_load_metric_rouge\r\n```\r\nwith errors:\r\n```\r\n> from rouge_score import rouge_scorer\r\nE ModuleNotFoundError: No module named 'rouge_score'\r\n```\r\n```\r\nE ImportError: To be able to use rouge, you need to install the following dependency: rouge_score.\r\nE Please install it using 'pip install rouge_score' for instance'\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4734\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4733","id":1314479616,"node_id":"I_kwDODunzps5OWV4A","number":4733,"title":"rouge metric","user":{"login":"asking28","id":29248466,"node_id":"MDQ6VXNlcjI5MjQ4NDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29248466?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/asking28","html_url":"https:\/\/github.com\/asking28","followers_url":"https:\/\/api.github.com\/users\/asking28\/followers","following_url":"https:\/\/api.github.com\/users\/asking28\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/asking28\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/asking28\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/asking28\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/asking28\/orgs","repos_url":"https:\/\/api.github.com\/users\/asking28\/repos","events_url":"https:\/\/api.github.com\/users\/asking28\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/asking28\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed by:\r\n- #4735"],"created_at":1658473611000,"updated_at":1658480882000,"closed_at":1658480735000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\nLoading Rouge metric gives error after latest rouge-score==0.0.7 release.\r\nDowngrading rougemetric==0.0.4 works fine.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\nfrom rouge_score import rouge_scorer, scoring \r\nshould run\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\nFile \"\/root\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/rouge\/0ffdb60f436bdb8884d5e4d608d53dbe108e82dac4f494a66f80ef3f647c104f\/rouge.py\", line 21, in \r\n from rouge_score import rouge_scorer, scoring\r\nImportError: cannot import name 'rouge_scorer' from 'rouge_score' (unknown location)\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Linux\r\n- Python version:3.9\r\n- PyArrow 
version:\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4733\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4732","id":1314371566,"node_id":"I_kwDODunzps5OV7fu","number":4732,"title":"Document better that loading a dataset passing its name does not use the local script","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the feedback!\r\n\r\nI think since this issue is closely related to loading, I can add a clearer explanation under [Load > local loading script](https:\/\/huggingface.co\/docs\/datasets\/main\/en\/loading#local-loading-script).","That makes sense, but I think having a line about it under the \"source\" header at https:\/\/huggingface.co\/docs\/datasets\/installation#source would be useful. My mental model of `pip install -e .` does not include the fact that the source files aren't actually being used. ","Thanks for sharing your perspective. 
I think the `load_dataset` function is the only one that pulls from GitHub, and since this use-case is very specific, I don't think we need to include such a broad clarification in the Installation section.\r\n\r\nFeel free to check out the linked PR and let me know if it needs any additional explanation \ud83d\ude0a"],"created_at":1658470051000,"updated_at":1661272343000,"closed_at":1661272343000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported by @TrentBrick here https:\/\/github.com\/huggingface\/datasets\/issues\/4725#issuecomment-1191858596, it could be clearer that loading a dataset by passing its name does not use its (modified) local script.\r\n\r\nWhat he did:\r\n- he installed `datasets` from source\r\n- he locally modified the `datasets\/the_pile\/the_pile.py` loading script\r\n- he tried to load it, but using `load_dataset(\"the_pile\")` instead of `load_dataset(\"datasets\/the_pile\")`\r\n - as explained here https:\/\/github.com\/huggingface\/datasets\/issues\/4725#issuecomment-1191040245:\r\n - the former does not use the local script, but instead it downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~\/.cache\/huggingface\/modules`) and uses that.\r\n\r\nHe suggests adding a clearer explanation of this, maybe in [Installation > source](https:\/\/huggingface.co\/docs\/datasets\/installation)\r\n\r\nCC: @stevhliu ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4732\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4731","id":1313773348,"node_id":"PR_kwDODunzps474dlZ","number":4731,"title":"docs: \u270f\ufe0f fix TranslationVariableLanguages example","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The 
documentation is not available anymore as the PR was closed or merged._"],"created_at":1658435741000,"updated_at":1658473260000,"closed_at":1658472522000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4731\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4731","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4731","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4731.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4731.patch","merged_at":1658472522000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4730","id":1313421263,"node_id":"I_kwDODunzps5OSTfP","number":4730,"title":"Loading imagenet-1k validation split takes much more RAM than expected","user":{"login":"fxmarty","id":9808326,"node_id":"MDQ6VXNlcjk4MDgzMjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9808326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fxmarty","html_url":"https:\/\/github.com\/fxmarty","followers_url":"https:\/\/api.github.com\/users\/fxmarty\/followers","following_url":"https:\/\/api.github.com\/users\/fxmarty\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fxmarty\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fxmarty\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fxmarty\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fxmarty\/orgs","repos_url":"https:\/\/api.github.com\/users\/fxmarty\/repos","events_url":"https:\/\/api.github.com\/users\/fxmarty\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fxmarty\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["My bad, `482 * 418 * 50000 * 3 \/ 1000000 = 30221 MB` ( https:\/\/stackoverflow.com\/a\/42979315 ).\r\n\r\nMeanwhile `256 * 256 * 50000 * 3 \/ 1000000 = 9830 MB`. We are loading the non-cropped images and that is why we take so much RAM."],"created_at":1658416446000,"updated_at":1658421664000,"closed_at":1658421664000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nLoading into memory the validation split of imagenet-1k takes much more RAM than expected. 
Assuming ImageNet-1k is 150 GB, split is 50000 validation images and 1,281,167 train images, I would expect only about 6 GB loaded in RAM.\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagenet-1k\", split=\"validation\")\r\n\r\nprint(dataset)\r\n\r\n\"\"\"prints\r\nDataset({\r\n features: ['image', 'label'],\r\n num_rows: 50000\r\n})\r\n\"\"\"\r\n\r\npipe_inputs = dataset[\"image\"]\r\n# and wait :-)\r\n```\r\n\r\n## Expected results\r\nUse only < 10 GB RAM when loading the images.\r\n\r\n## Actual results\r\n![image](https:\/\/user-images.githubusercontent.com\/9808326\/180249183-62f75ca4-d127-402a-9330-f12825a22b0a.png)\r\n\r\n```\r\nUsing custom data configuration default\r\nReusing dataset imagenet-1k (\/home\/fxmarty\/.cache\/huggingface\/datasets\/imagenet-1k\/default\/1.0.0\/a1e9bfc56c3a7350165007d1176b15e9128fcaf9ab972147840529aed3ae52bc)\r\nKilled\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.5\r\n- datasets commit: 4e4222f1b6362c2788aec0dd2cd8cede6dd17b80\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4730\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4729","id":1313374015,"node_id":"PR_kwDODunzps473GmR","number":4729,"title":"Refactor Hub tests","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1658414593000,"updated_at":1658502589000,"closed_at":1658501789000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:\r\n- `ci_hub_config`\r\n- `set_ci_hub_access_token`: to replace setUp\/tearDown\r\n- `temporary_repo` context manager: to replace `try... finally`\r\n- `cleanup_repo`: to delete repo accidentally created if one of the tests fails\r\n\r\nThis is a preliminary work done to manage unit\/integration tests separately.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4729\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4729","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4729","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4729.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4729.patch","merged_at":1658501789000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4728","id":1312897454,"node_id":"I_kwDODunzps5OQTmu","number":4728,"title":"load_dataset gives \"403\" error when using Financial Phrasebank","user":{"login":"rohitvincent","id":2209134,"node_id":"MDQ6VXNlcjIyMDkxMzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2209134?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rohitvincent","html_url":"https:\/\/github.com\/rohitvincent","followers_url":"https:\/\/api.github.com\/users\/rohitvincent\/followers","following_url":"https:\/\/api.github.com\/users\/rohitvincent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rohitvincent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rohitvincent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rohitvincent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rohitvincent\/orgs","repos_url":"https:\/\/api.github.com\/users\/rohitvincent\/repos","events_url":"https:\/\/api.github.com\/users\/rohitvincent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rohitvincent\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @rohitvincent, thanks for reporting.\r\n\r\nUnfortunately I'm not able to reproduce your issue:\r\n```python\r\nIn [2]: from datasets import load_dataset, DownloadMode\r\n ...: load_dataset(path='financial_phrasebank',name='sentences_allagree', download_mode=\"force_redownload\")\r\nDownloading builder script: 6.04kB [00:00, 2.87MB\/s] \r\nDownloading metadata: 13.7kB [00:00, 7.24MB\/s] \r\nDownloading and preparing dataset financial_phrasebank\/sentences_allagree (download: 
665.91 KiB, generated: 296.26 KiB, post-processed: Unknown size, total: 962.17 KiB) to ...\/.cache\/huggingface\/datasets\/financial_phrasebank\/sentences_allagree\/1.0.0\/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 682k\/682k [00:00<00:00, 7.66MB\/s]\r\nDataset financial_phrasebank downloaded and prepared to ...\/.cache\/huggingface\/datasets\/financial_phrasebank\/sentences_allagree\/1.0.0\/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 918.80it\/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label'],\r\n num_rows: 2264\r\n })\r\n})\r\n```\r\n\r\nAre you able to access the link? https:\/\/www.researchgate.net\/profile\/Pekka-Malo\/publication\/251231364_FinancialPhraseBank-v10\/data\/0c96051eee4fb1d56e000000\/FinancialPhraseBank-v10.zip","Yes was able to download from the link manually. But still, get the same error when I use load_dataset.","Fixed once data files are hosted on the Hub:\r\n- #4598"],"created_at":1658393012000,"updated_at":1659601955000,"closed_at":1659601955000,"author_association":"NONE","active_lock_reason":null,"body":"I tried both codes below to download the financial phrasebank dataset (https:\/\/huggingface.co\/datasets\/financial_phrasebank) with the sentences_allagree subset. 
However, the code gives a 403 error when executed from multiple machines locally or on the cloud.\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadMode\r\nload_dataset(path='financial_phrasebank',name='sentences_allagree',download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n```\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadMode\r\nload_dataset(path='financial_phrasebank',name='sentences_allagree')\r\n```\r\n\r\n**Error**\r\nConnectionError: Couldn't reach https:\/\/www.researchgate.net\/profile\/Pekka_Malo\/publication\/251231364_FinancialPhraseBank-v10\/data\/0c96051eee4fb1d56e000000\/FinancialPhraseBank-v10.zip (error 403)\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4728\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4727","id":1312645391,"node_id":"I_kwDODunzps5OPWEP","number":4727,"title":"Dataset Viewer issue for TheNoob3131\/mosquito-data","user":{"login":"thenerd31","id":53668030,"node_id":"MDQ6VXNlcjUzNjY4MDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53668030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thenerd31","html_url":"https:\/\/github.com\/thenerd31","followers_url":"https:\/\/api.github.com\/users\/thenerd31\/followers","following_url":"https:\/\/api.github.com\/users\/thenerd31\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thenerd31\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thenerd31\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thenerd31\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thenerd31\/orgs","repos_url":"https:\/\/api.github.com\/users\/thenerd31\/repos","events_url":"https:\/\/api.github.com\/users\/thenerd31\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thenerd31\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The preview is working OK:\r\n\r\n![Screenshot from 2022-07-21 09-46-09](https:\/\/user-images.githubusercontent.com\/8515462\/180158929-bd8faad4-6392-4fc1-8d9c-df38aa9f8438.png)\r\n\r\n"],"created_at":1658381088000,"updated_at":1658389916000,"closed_at":1658389501000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/TheNoob3131\/mosquito-data\/viewer\/TheNoob3131--mosquito-data\/test\n\n### Description\n\nDataset preview not showing with large 
files. Says 'split cache is empty' even though there are train and test splits.\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4727\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4726","id":1312082175,"node_id":"PR_kwDODunzps47ykPI","number":4726,"title":"Fix broken link to the Hub","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658357847000,"updated_at":1658413998000,"closed_at":1658390454000,"author_association":"MEMBER","active_lock_reason":null,"body":"The Markdown link fails to render if it is in the same line as the ``. 
This PR implements @mishig25's fix by using `` instead.\r\n\r\n![Screen Shot 2022-07-20 at 3 53 05 PM](https:\/\/user-images.githubusercontent.com\/59462357\/180096412-7fbb33be-abb0-4e54-a52d-201b3b58e0f9.png)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4726\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4726","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4726","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4726.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4726.patch","merged_at":1658390454000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4725","id":1311907096,"node_id":"I_kwDODunzps5OMh0Y","number":4725,"title":"the_pile datasets URL broken. ","user":{"login":"TrentBrick","id":12433427,"node_id":"MDQ6VXNlcjEyNDMzNDI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12433427?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TrentBrick","html_url":"https:\/\/github.com\/TrentBrick","followers_url":"https:\/\/api.github.com\/users\/TrentBrick\/followers","following_url":"https:\/\/api.github.com\/users\/TrentBrick\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TrentBrick\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TrentBrick\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TrentBrick\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TrentBrick\/orgs","repos_url":"https:\/\/api.github.com\/users\/TrentBrick\/repos","events_url":"https:\/\/api.github.com\/users\/TrentBrick\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TrentBrick\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @TrentBrick. We are addressing the change with their data host server.\r\n\r\nOn the meantime, if you would like to work with your fixed local copy of the_pile script, you should use:\r\n```python\r\nload_dataset(\"path\/to\/your\/local\/the_pile\/the_pile.py\",...\r\n```\r\ninstead of just `load_dataset(\"the_pile\",...`.\r\n\r\nThe latter downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~\/.cache\/huggingface\/modules`) and uses that.","@TrentBrick, I have checked the URLs and both hosts work, the original (https:\/\/the-eye.eu\/) and the mirror (https:\/\/mystic.the-eye.eu\/). See e.g.:\r\n- https:\/\/mystic.the-eye.eu\/public\/AI\/pile\/\r\n- https:\/\/mystic.the-eye.eu\/public\/AI\/pile_preliminary_components\/\r\n\r\nPlease, let me know if you still find any issue loading this dataset by using current server URLs.","Great this is working now. Re the download from GitHub... I'm sure thought went into doing this but could it be made more clear maybe here? https:\/\/huggingface.co\/docs\/datasets\/installation for example under installing from source? 
I spent over an hour questioning my sanity as I kept trying to edit this file, uninstall and reinstall the repo, git reset to previous versions of the file, etc.","Thanks for the quick reply and help too\r\n","Thanks @TrentBrick for the suggestion about improving our docs: we should definitely do this if you find they are not clear enough.\r\n\r\nCurrently, our docs explain how to load a dataset from a local loading script here: [Load > Local loading script](https:\/\/huggingface.co\/docs\/datasets\/loading#local-loading-script)\r\n\r\nI've opened an issue here:\r\n- #4732\r\n\r\nFeel free to comment there with any additional explanation\/suggestion\/requirement related to this problem."],"created_at":1658350650000,"updated_at":1658470186000,"closed_at":1658389099000,"author_association":"NONE","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/pull\/3627 changed the Eleuther AI Pile dataset URL from https:\/\/the-eye.eu\/ to https:\/\/mystic.the-eye.eu\/, but the latter is now broken and the former works again. \r\n\r\nNote that when I git clone the repo and use `pip install -e .` and then edit the URL back, the codebase doesn't seem to use this edit, so the mystic URL is also cached somewhere else that I can't find? ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4725\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4724","id":1311127404,"node_id":"PR_kwDODunzps47vLrP","number":4724,"title":"Download and prepare as Parquet for cloud storage","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Added some docs for dask and took your comments into account\r\n\r\ncc @philschmid if you also want to take 
a look :)","Just noticed that it would be more convenient to pass the output dir to download_and_prepare directly, to bypass the caching logic which prepares the dataset at `\/\/\/\/`. And this way the cache is only used for the downloaded files. What do you think ?\r\n\r\n```python \r\n\r\nbuilder = load_datadet_builder(\"squad\")\r\n# or with a custom cache\r\nbuilder = load_datadet_builder(\"squad\", cache_dir=\"path\/to\/local\/cache\/for\/downloaded\/files\")\r\n\r\n# download and prepare to s3\r\nbuilder.download_and_prepare(\"s3:\/\/my_bucket\/squad\")\r\n```","Might be of interest: \r\nPyTorch and AWS introduced better support for S3 streaming in `torchtext`. \r\n![image](https:\/\/user-images.githubusercontent.com\/32632186\/183354186-a7f005e3-4167-4d80-ad1a-c62dd51ad7b6.png)\r\n","Having thought about it a bit more, I also agree with @philschmid in that it's important to follow the existing APIs (pandas\/dask), which means we should support the following at some point:\r\n\r\n* remote data files resolution for the packaged modules to support `load_dataset(\"\", data_files=\"\")`\r\n* `to_(\"\")`\r\n* `load_from_disk` and `save_to_disk` already expose the `fs` param, but it would be cool to support specifying `fsspec` URLs directly as the source\/destination path (perhaps we can then deprecate `fs` to be fully aligned with pandas\/dask)\r\n\r\nIMO these are the two main issues with the current approach:\r\n* relying on the builder API to generate the formatted files results in a non-friendly format due to how our caching works (a lot of nested subdirectories)\r\n* this approach still downloads the files needed to generate a dataset locally. Considering one of our goals is to align the streaming API with the non-streaming one, this could be avoided by running `to_` on streamed\/iterable datasets","Alright I did the last change I wanted to do, here is the final API:\r\n\r\n```python\r\nbuilder = load_dataset_builder(...)\r\nbuilder.download_and_prepare(\"s3:\/\/...\", storage_options={\"token\": ...})\r\n```\r\n\r\nand it creates the arrow files directly in the specified directory, not in a nested subdirectory structure as we do in the cache !\r\n\r\n> this approach still downloads the files needed to generate a dataset locally. Considering one of our goals is to align the streaming API with the non-streaming one, this could be avoided by running to_ on streamed\/iterable datasets\r\n\r\nYup this can be explored in some future work I think. Though to keep things simple and clear I would keep the streaming behaviors only when you load a dataset in streaming mode, and not include it in `download_and_prepare` (because it wouldn't be aligned with the name of the function, which imply to 1. download and 2. prepare ^^). Maybe an API like that can make sense for those who need full streaming\r\n\r\n```python\r\nds = load_dataset(..., streaming=True)\r\nds.to_parquet(\"s3:\/\/...\")\r\n```","totally agree with your comment on the meaning of \"loading\", I'll update the docs","I took your comments into account and reverted all the changes related to `cache_dir` to keep the support for remote `cache_dir` for beam datasets. 
I also updated the wording in the docs to not use \"load\" when it's not appropriate :)"],"created_at":1658324342000,"updated_at":1662398845000,"closed_at":1662398727000,"author_association":"MEMBER","active_lock_reason":null,"body":"Downloading a dataset as Parquet to cloud storage can be useful for streaming mode and for use with spark\/dask\/ray.\r\n\r\nThis PR adds support for `fsspec` URIs like `s3:\/\/...`, `gcs:\/\/...` etc., and adds the `file_format` argument to save as parquet instead of arrow:\r\n```python\r\nfrom datasets import *\r\n\r\ncache_dir = \"s3:\/\/...\"\r\nbuilder = load_dataset_builder(\"crime_and_punish\", cache_dir=cache_dir)\r\nbuilder.download_and_prepare(file_format=\"parquet\")\r\n```\r\n\r\nEDIT: actually changed the API to\r\n\r\n```python\r\nfrom datasets import *\r\n\r\nbuilder = load_dataset_builder(\"crime_and_punish\")\r\nbuilder.download_and_prepare(\"s3:\/\/...\", file_format=\"parquet\")\r\n```\r\n\r\nCredentials to cloud storage can be passed using the `storage_options` argument of `download_and_prepare`.\r\n\r\nFor consistency with the BeamBasedBuilder, I name the parquet files `{builder.name}-{split}-xxxxx-of-xxxxx.parquet`. I think this is fine since we'll need to implement parquet sharding after this PR, so that a dataset can be used efficiently with dask for example.\r\n\r\nNote that images\/audio files are not embedded in the parquet files yet; this will be added in a subsequent PR.\r\n\r\nTODO:\r\n- [x] docs\r\n- [x] tests","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4724\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4724","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4724","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4724.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4724.patch","merged_at":1662398727000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4723","id":1310970604,"node_id":"PR_kwDODunzps47uoSj","number":4723,"title":"Refactor conftest
fixtures","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658319322000,"updated_at":1658414231000,"closed_at":1658413458000,"author_association":"MEMBER","active_lock_reason":null,"body":"Previously, fixture modules `hub_fixtures` and `s3_fixtures`:\r\n- were both at the root test directory\r\n- were imported using `import *`\r\n - as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`\r\n\r\nThis PR:\r\n- puts both fixture modules in a dedicated directory `fixtures`\r\n- renames both to: `fixtures.hub` and `fixtures.s3`\r\n- imports them into `conftest` as plugins, using the `pytest_plugins`: this avoids the `import *`\r\n- additionally creates a new fixture module `fixtures.files` with all file-related fixtures","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4723\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4723","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4723","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4723.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4723.patch","merged_at":1658413458000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4722","id":1310785916,"node_id":"PR_kwDODunzps47t_HJ","number":4722,"title":"Docs: Fix same-page 
haslinks","user":{"login":"mishig25","id":11827707,"node_id":"MDQ6VXNlcjExODI3NzA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11827707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mishig25","html_url":"https:\/\/github.com\/mishig25","followers_url":"https:\/\/api.github.com\/users\/mishig25\/followers","following_url":"https:\/\/api.github.com\/users\/mishig25\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mishig25\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mishig25\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mishig25\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mishig25\/orgs","repos_url":"https:\/\/api.github.com\/users\/mishig25\/repos","events_url":"https:\/\/api.github.com\/users\/mishig25\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mishig25\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658311477000,"updated_at":1658336553000,"closed_at":1658335776000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"`href=\"\/docs\/datasets\/quickstart#audio\"` implicitly goes to `href=\"\/docs\/datasets\/{$LATEST_STABLE_VERSION}\/quickstart#audio\"`. Therefore, https:\/\/huggingface.co\/docs\/datasets\/quickstart#audio #audio hashlink does not work since the new docs were not added to v2.3.2 (LATEST_STABLE_VERSION)\r\n\r\nto preserve the version, it should be just `href=\"#audio\"`, which will implicilty go to curren_page + #audio element","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4722\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4722","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4722","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4722.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4722.patch","merged_at":1658335776000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4721","id":1310253552,"node_id":"I_kwDODunzps5OGOHw","number":4721,"title":"PyArrow Dataset error when calling 
`load_dataset`","user":{"login":"piraka9011","id":16828657,"node_id":"MDQ6VXNlcjE2ODI4NjU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16828657?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/piraka9011","html_url":"https:\/\/github.com\/piraka9011","followers_url":"https:\/\/api.github.com\/users\/piraka9011\/followers","following_url":"https:\/\/api.github.com\/users\/piraka9011\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/piraka9011\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/piraka9011\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/piraka9011\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/piraka9011\/orgs","repos_url":"https:\/\/api.github.com\/users\/piraka9011\/repos","events_url":"https:\/\/api.github.com\/users\/piraka9011\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/piraka9011\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It looks like a bug in `pyarrow`. If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nTo achieve that you can try to lower the value of `max_shard_size` and also don't use `map` before `push_to_hub`.\r\n\r\nDo you have a minimum reproducible example that we can share with the Arrow team for further debugging ?","> If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nYup, I did not encounter this bug when I was testing my script with a slice of <1000 samples for my dataset.\r\n\r\n> Do you have a minimum reproducible example...\r\n\r\nNot sure if I can get more minimal than the script I shared above. 
Are you asking for a sample JSON file?\r\nJust generate a random manifest list; I can add that to the above script if that's what you mean.\r\n","Actually this is probably linked to this open issue: https:\/\/issues.apache.org\/jira\/browse\/ARROW-5030.\r\n\r\nSetting `max_shard_size=\"2GB\"` should do the job (or `max_shard_size=\"1GB\"` if you want to be on the safe side, especially given that there can be some variance in the shard sizes if the dataset is not evenly distributed)"],"created_at":1658279763000,"updated_at":1658499107000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nI am fine-tuning a wav2vec2 model using my own dataset, following this script: https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/speech-recognition\/run_speech_recognition_ctc.py\r\n\r\nLoading my Audio dataset from the hub, which was originally generated from disk, results in the following PyArrow error:\r\n\r\n```sh\r\nFile \"\/home\/ubuntu\/w2v2\/run_speech_recognition_ctc.py\", line 227, in main\r\n raw_datasets = load_dataset(\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1679, in load_dataset\r\n builder_instance.download_and_prepare(\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 793, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1268, in _prepare_split\r\n for key, table in logging.tqdm(\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\nFile \"\/home\/ubuntu\/.virtualenvs\/meval\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/parquet\/parquet.py\", line 68, in _generate_tables\r\n for batch_idx, record_batch in enumerate(\r\nFile \"pyarrow\/_parquet.pyx\", line 1309, in iter_batches\r\nFile \"pyarrow\/error.pxi\", line 121, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs\r\n```\r\n\r\n## Steps to reproduce the bug\r\n\r\nI created a dataset from a JSON lines manifest of `audio_filepath`, `text`, and `duration`.\r\n\r\nWhen creating the dataset, I do something like this:\r\n\r\n```python\r\nimport json\r\nfrom datasets import Dataset, Audio\r\n\r\n# manifest_lines is a list of dicts w\/ \"audio_filepath\", \"duration\", and \"text\"\r\nmanifest_dict = {\"audio\": [], \"duration\": [], \"transcription\": []}\r\nfor line in manifest_lines:\r\n    line = line.strip()\r\n    if line:\r\n        line_dict = json.loads(line)\r\n        manifest_dict[\"audio\"].append(f\"{root_path}\/{line_dict['audio_filepath']}\")\r\n        manifest_dict[\"duration\"].append(line_dict[\"duration\"])\r\n        manifest_dict[\"transcription\"].append(line_dict[\"text\"])\r\n\r\n# Create a HF dataset\r\ndataset = Dataset.from_dict(manifest_dict).cast_column(\r\n    \"audio\", Audio(sampling_rate=16_000),\r\n)\r\n\r\n# From the docs for saving to disk\r\n# https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#datasets.Dataset.save_to_disk\r\ndef read_audio_file(example):\r\n    with open(example[\"audio\"][\"path\"], \"rb\") as f:\r\n        return {\"audio\": {\"bytes\": f.read()}}\r\n\r\ndataset = 
dataset.map(read_audio_file, num_proc=70)\r\ndataset.save_to_disk(f\"\/audio-data\/hf\/{artifact_name}\")\r\ndataset.push_to_hub(f\"{org-name}\/{artifact_name}\", max_shard_size=\"5GB\", private=True)\r\n```\r\n\r\nThen when I call `load_dataset()` in my training script, with the same dataset I generated above, and download from the huggingface hub I get the above stack trace.\r\nI am able to load the dataset fine if I use `load_from_disk()`.\r\n\r\n## Expected results\r\n\r\n`load_dataset()` should behave just like `load_from_disk()` and not cause any errors.\r\n\r\n## Actual results\r\n\r\nSee above\r\n\r\n## Environment info\r\n\r\nI am using the `huggingface\/transformers-pytorch-gpu:latest` image\r\n- `datasets` version: 2.3.0\r\n- Platform: Docker\/Ubuntu 20.04\r\n- Python version: 3.8\r\n- PyArrow version: 8.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4721\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4720","id":1309980195,"node_id":"I_kwDODunzps5OFLYj","number":4720,"title":"Dataset Viewer issue for shamikbose89\/lancaster_newsbooks","user":{"login":"shamikbose","id":50837285,"node_id":"MDQ6VXNlcjUwODM3Mjg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50837285?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamikbose","html_url":"https:\/\/github.com\/shamikbose","followers_url":"https:\/\/api.github.com\/users\/shamikbose\/followers","following_url":"https:\/\/api.github.com\/users\/shamikbose\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamikbose\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamikbose\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamikbose\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamikbose\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamikbose\/repos","events_url":"https:\/\/api.github.com\/users\/shamikbose\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamikbose\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"shamikbose89\/lancaster_newsbooks\", \"default\")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File 
\"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/shamikbose89--lancaster_newsbooks\/2d1c63d269bf7b9342accce0a95960b1710ab4bc774248878bd80eb96c1afaf7\/lancaster_newsbooks.py\", line 73, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URL)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 916, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 879, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 348, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 884, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 388, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/download\/streaming_download_manager.py\", line 354, in _get_extraction_protocol_with_magic_number\r\n f.seek(0)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/fsspec\/implementations\/http.py\", line 684, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nping @huggingface\/datasets ","Oh, I removed the 'split' key from `kwargs`. I put it back in, but there's still the same error","It looks like the data host doesn't support http range requests, which is necessary to glob inside a ZIP archive in streaming mode. Can you try hosting the dataset elsewhere ? Or download each file separately from https:\/\/ota.bodleian.ox.ac.uk\/repository\/xmlui\/handle\/20.500.12024\/2531 ?","@lhoestq Thanks! That seems to have solved it. I can get the splits with the `get_dataset_split_names()` function. The dataset viewer is still not loading properly, though. The new error is\r\n```\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\r\n\r\nPS. 
The dataset loads properly and can be accessed"],"created_at":1658260807000,"updated_at":1662655641000,"closed_at":1662655641000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/shamikbose89\/lancaster_newsbooks\r\n\r\n### Description\r\n\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n\r\nI am able to use the dataset loading script locally and it also runs when I'm using the one from the hub, but the viewer still doesn't load\r\n\r\n### Owner\r\n\r\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4720\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4719","id":1309854492,"node_id":"I_kwDODunzps5OEssc","number":4719,"title":"Issue loading TheNoob3131\/mosquito-data dataset","user":{"login":"thenerd31","id":53668030,"node_id":"MDQ6VXNlcjUzNjY4MDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53668030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thenerd31","html_url":"https:\/\/github.com\/thenerd31","followers_url":"https:\/\/api.github.com\/users\/thenerd31\/followers","following_url":"https:\/\/api.github.com\/users\/thenerd31\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thenerd31\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thenerd31\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thenerd31\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thenerd31\/orgs","repos_url":"https:\/\/api.github.com\/users\/thenerd31\/repos","events_url":"https:\/\/api.github.com\/users\/thenerd31\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thenerd31\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am also getting a ValueError: 'Couldn't cast' at the bottom. Is this because of some delimiter issue? My dataset is on the Huggingface Hub. 
If you could look at it, that would be greatly appreciated.","Hi @thenerd31, thanks for reporting.\r\n\r\nPlease note that your issue is not caused by the Hugging Face Datasets library, but it has to do with the specific implementation of your dataset on the Hub.\r\n\r\nTherefore, I'm transferring this discussion to your own dataset Community tab: https:\/\/huggingface.co\/datasets\/TheNoob3131\/mosquito-data\/discussions\/1"],"created_at":1658252857000,"updated_at":1658299617000,"closed_at":1658299562000,"author_association":"NONE","active_lock_reason":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/53668030\/179815591-d75fa7d3-3122-485f-a852-b06a68909066.png)\r\n\r\nSo my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to see if the files were downloaded, the folder was blank.\r\n\r\nHere is the error below:\r\nValueError Traceback (most recent call last)\r\nInput In [8], in ()\r\n 1 from datasets import load_dataset\r\n----> 3 dataset = load_dataset(\"TheNoob3131\/mosquito-data\", split=\"train\")\r\n\r\nFile ~\\Anaconda3\\lib\\site-packages\\datasets\\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1678 # Download and prepare data\r\n-> 1679 builder_instance.download_and_prepare(\r\n 1680 download_config=download_config,\r\n 1681 download_mode=download_mode,\r\n 1682 ignore_verifications=ignore_verifications,\r\n 1683 try_from_hf_gcs=try_from_hf_gcs,\r\n 1684 use_auth_token=use_auth_token,\r\n 1685 )\r\n 1687 # Build dataset for splits\r\n 1688 keep_in_memory = (\r\n 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1690 )\r\n\r\nIs the dataset in the wrong format or is there some security permission that I should enable?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4719\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4718","id":1309520453,"node_id":"PR_kwDODunzps47prWR","number":4718,"title":"Make Extractor accept Path as 
input","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658237106000,"updated_at":1658497347000,"closed_at":1658496583000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Makes `Extractor` accept instance of `Path` as input\r\n- Removes unnecessary castings of `Path` to `str`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4718\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4718","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4718","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4718.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4718.patch","merged_at":1658496583000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4717","id":1309512483,"node_id":"I_kwDODunzps5ODZMj","number":4717,"title":"Dataset Viewer issue for 
LawalAfeez\/englishreview-ds-mini","user":{"login":"lawalAfeez820","id":69974956,"node_id":"MDQ6VXNlcjY5OTc0OTU2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69974956?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lawalAfeez820","html_url":"https:\/\/github.com\/lawalAfeez820","followers_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/followers","following_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/orgs","repos_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/repos","events_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lawalAfeez820\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It's currently working, as far as I understand\r\n\r\nhttps:\/\/huggingface.co\/datasets\/LawalAfeez\/englishreview-ds-mini\/viewer\/LawalAfeez--englishreview-ds-mini\/train\r\n\r\n\"Capture\r\n\r\n---\r\n\r\nWhat was your 
issue?"],"created_at":1658236779000,"updated_at":1658305977000,"closed_at":1658305977000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\nUnable to view the split data\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4717\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4716","id":1309455838,"node_id":"PR_kwDODunzps47pdbh","number":4716,"title":"Support \"tags\" yaml tag","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","IMO `DatasetMetadata` shouldn't crash with attributes that it doesn't know, btw","Yea this PR is mostly to have a validation that this field contains a list of strings.\r\n\r\nRegarding unknown fields, the tagging app currently returns an error if a field is unknown using the `DatasetMetadata`. 
We can change that though"],"created_at":1658234071000,"updated_at":1658324690000,"closed_at":1658323916000,"author_association":"MEMBER","active_lock_reason":null,"body":"Added the \"tags\" YAML tag, so that users can specify data domain\/topics keywords for dataset search","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4716\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4716","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4716","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4716.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4716.patch","merged_at":1658323916000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4715","id":1309405980,"node_id":"PR_kwDODunzps47pSui","number":4715,"title":"Fix POS tags","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","CI failures are about missing content in the dataset cards or bad tags, and this is unrelated to this PR. 
Merging :)"],"created_at":1658231574000,"updated_at":1658235274000,"closed_at":1658234476000,"author_association":"MEMBER","active_lock_reason":null,"body":"We're now using `part-of-speech` and not `part-of-speech-tagging`, see discussion here: https:\/\/github.com\/huggingface\/datasets\/commit\/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4715\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4715","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4715","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4715.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4715.patch","merged_at":1658234475000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4714","id":1309265682,"node_id":"PR_kwDODunzps47o0YG","number":4714,"title":"Fix named split sorting and remove unnecessary casting","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","hahaha what a timing, I added my comment right after you merged x)\r\n\r\nyou can ignore my (nit), it's fine","Sorry, just too sync... 
:sweat_smile: "],"created_at":1658224108000,"updated_at":1658482785000,"closed_at":1658481057000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- makes `NamedSplit` sortable: so that `sorted()` can be called on them\r\n- removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set`\r\n- removes unnecessary casting of `NamedSplit` to `str`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4714\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4714","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4714","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4714.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4714.patch","merged_at":1658481057000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4713","id":1309184756,"node_id":"PR_kwDODunzps47ojC1","number":4713,"title":"Document installation of sox OS dependency for audio","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658220155000,"updated_at":1658391419000,"closed_at":1658390655000,"author_association":"MEMBER","active_lock_reason":null,"body":"The `sox` OS package needs to be installed manually using the distribution package manager.\r\n\r\nThis PR adds this explanation to the
docs.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4713\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4713","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4713","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4713.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4713.patch","merged_at":1658390655000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4712","id":1309177302,"node_id":"PR_kwDODunzps47ohdr","number":4712,"title":"Highlight non-commercial license in amazon_reviews_multi dataset card","user":{"login":"sbroadhurst-hf","id":108879611,"node_id":"U_kgDOBn1e-w","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/108879611?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sbroadhurst-hf","html_url":"https:\/\/github.com\/sbroadhurst-hf","followers_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/followers","following_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/orgs","repos_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/repos","events_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sbroadhurst-hf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658219780000,"updated_at":1658938180000,"closed_at":1658937461000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Highlight that the licence granted by Amazon only covers non-commercial research 
use.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4712\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4712","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4712","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4712.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4712.patch","merged_at":1658937461000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4711","id":1309138570,"node_id":"I_kwDODunzps5OB96K","number":4711,"title":"Document how to create a dataset loading script for audio\/vision","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1658217820000,"updated_at":1659366491000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, in our docs for Audio\/Vision\/Text, we explain how to:\r\n- Load data\r\n- Process data\r\n\r\nHowever we only explain how to *Create a dataset loading script* for text data.\r\n\r\nI think it would be useful that we add the same for Audio\/Vision as these have some specificities different from Text.\r\n\r\nSee, for example:\r\n- #4697\r\n - and comment there: https:\/\/github.com\/huggingface\/datasets\/issues\/4697#issuecomment-1191502492\r\n\r\nCC: @stevhliu 
\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711\/reactions","total_count":4,"+1":4,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4711\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4710","id":1308958525,"node_id":"PR_kwDODunzps47ny0L","number":4710,"title":"Add object detection processing tutorial","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Great idea! Now that we have more than one task, it makes sense to separate image classification and object detection so it'll be easier for users to follow.","@lhoestq do we want to do that in this PR, or should we merge it and let @stevhliu reorganize separately? "],"created_at":1658204626000,"updated_at":1658434235000,"closed_at":1658433402000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The following adds a quick guide on how to process object detection datasets with `albumentations`. 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4710\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4710","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4710","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4710.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4710.patch","merged_at":1658433402000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4709","id":1308633093,"node_id":"I_kwDODunzps5OACgF","number":4709,"title":"WMT21 & WMT22","user":{"login":"Muennighoff","id":62820084,"node_id":"MDQ6VXNlcjYyODIwMDg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/62820084?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Muennighoff","html_url":"https:\/\/github.com\/Muennighoff","followers_url":"https:\/\/api.github.com\/users\/Muennighoff\/followers","following_url":"https:\/\/api.github.com\/users\/Muennighoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Muennighoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Muennighoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Muennighoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Muennighoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/Muennighoff\/repos","events_url":"https:\/\/api.github.com\/users\/Muennighoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Muennighoff\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"},{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! That would be awesome to have them indeed, thanks for opening this issue\r\n\r\nI just added you to the WMT org on the HF Hub if you're interested in adding those datasets.\r\n\r\nFeel free to create a dataset repository for each dataset and upload the data files there :) preferably in ZIP archives instead of TAR archives (the current WMT scripts don't support streaming TAR archives, so it would break the dataset preview). 
We've also had issues with the `statmt.org` host (data unavailable, slow download speeds), which is why I think it's better if we re-host the files on the Hub.\r\n\r\n`wmt21` (and wmt22) can be added in this GitHub repository I think, for consistency with the previous ones.\r\nTo add it, you can copy-paste the code of the previous one (e.g. wmt19), and add the new data:\r\n- in wmt_utils.py, add the new data subsets. You need to provide the download URLs, as well as the target and source languages\r\n- in wmt21.py (renamed from wmt19.py), you can specify the subsets that WMT21 uses (i.e. the ones you just added)\r\n- in wmt_utils.py, define the Python function that must be used to parse the subsets you added. To do so, you must go into `_generate_examples` and choose the proper `sub_generator` based on the subset name. For example, the `paracrawl_v3` subset uses the `_parse_tmx` function:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/ede72d3f9796339701ec59899c7c31d2427046fb\/datasets\/wmt19\/wmt_utils.py#L834-L835\r\n\r\nHopefully the data is in a format that is already supported and there's no need to write a new `_parse_*` function for the new subsets. Let me know if you have questions or if I can help :)"],"created_at":1658178333000,"updated_at":1663077880000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** WMT21 & WMT22\r\n- **Description:** We are going to have three tracks: two small tasks and a large task.\r\nThe small tracks evaluate translation between fairly related languages and English (all pairs). The large track uses 101 languages.\r\n- **Paper:** \/\r\n- **Data:** https:\/\/statmt.org\/wmt21\/large-scale-multilingual-translation-task.html https:\/\/statmt.org\/wmt22\/large-scale-multilingual-translation-task.html\r\n- **Motivation:** Many more languages than previous WMT versions - Could be very high impact\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/main\/ADD_NEW_DATASET.md).\r\n\r\n\r\nI could also tackle this. I saw the existing logic for WMT models is a bit complex (datasets are stored on the wmt account & retrieved in separate wmt datasets, as far as I can tell). How long do you think it would take me? 
@lhoestq \r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4709\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4708","id":1308279700,"node_id":"PR_kwDODunzps47lewm","number":4708,"title":"Fix require torchaudio and refactor test requirements","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658165068000,"updated_at":1658471456000,"closed_at":1658470691000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently there is a bug in `require_torchaudio` (indeed it is requiring `sox` instead):\r\n```python\r\ndef require_torchaudio(test_case):\r\n if find_spec(\"sox\") is None:\r\n...\r\n```\r\n\r\nThe bug was introduced by:\r\n- #3685\r\n - Commit: https:\/\/github.com\/huggingface\/datasets\/pull\/3685\/commits\/b5a3e7122d49c4dcc9333b1d8d18a833fc04b940\r\n\r\nwhich moved\r\n```python\r\nrequire_sndfile = pytest.mark.skipif(\r\n # In Windows and OS X, soundfile installs sndfile\r\n (sys.platform != \"linux\" and find_spec(\"soundfile\") is None)\r\n # In Linux, soundfile throws RuntimeError if sndfile not installed with distribution package manager\r\n or (sys.platform == \"linux\" and find_library(\"sndfile\") is None),\r\n reason=\"Test requires 'sndfile': `pip install soundfile`; \"\r\n \"Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`\",\r\n)\r\nrequire_sox = pytest.mark.skipif(\r\n find_library(\"sox\") is None,\r\n reason=\"Test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install 
sox`\",\r\n)\r\nrequire_torchaudio = pytest.mark.skipif(find_spec(\"torchaudio\") is None, reason=\"Test requires 'torchaudio'\")\r\n```\r\nto\r\n```python\r\ndef require_sndfile(test_case):\r\n \"\"\"\r\n Decorator marking a test that requires soundfile.\r\n These tests are skipped when soundfile isn't installed.\r\n \"\"\"\r\n if (sys.platform != \"linux\" and find_spec(\"soundfile\") is None) or (\r\n sys.platform == \"linux\" and find_library(\"sndfile\") is None\r\n ):\r\n test_case = unittest.skip(\r\n \"test requires 'sndfile': `pip install soundfile`; \"\r\n \"Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`\",\r\n )(test_case)\r\n return test_case\r\n\r\n\r\ndef require_sox(test_case):\r\n \"\"\"\r\n Decorator marking a test that requires sox.\r\n These tests are skipped when sox isn't installed.\r\n \"\"\"\r\n if find_library(\"sox\") is None:\r\n return unittest.skip(\"test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install sox`\")(\r\n test_case\r\n )\r\n return test_case\r\n\r\n\r\ndef require_torchaudio(test_case):\r\n \"\"\"\r\n Decorator marking a test that requires torchaudio.\r\n These tests are skipped when torchaudio isn't installed.\r\n \"\"\"\r\n if find_spec(\"sox\") is None:\r\n return unittest.skip(\"test requires 'torchaudio'\")(test_case)\r\n return test_case\r\n```\r\n\r\nThis PR;\r\n- fixes the bug in `require_torchaudio`\r\n- refactors the test requirements back to using `pytest` instead of `unittest`\r\n - the text in `pytest.skipif` `reason` can be used if needed in a test body: `require_torchaudio.kwargs[\"reason\"]`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4708\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4708","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4708","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4708.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4708.patch","merged_at":1658470691000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4707","id":1308251405,"node_id":"I_kwDODunzps5N-lUN","number":4707,"title":"Dataset Viewer issue for 
TheNoob3131\/mosquito-data","user":{"login":"thenerd31","id":53668030,"node_id":"MDQ6VXNlcjUzNjY4MDMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53668030?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thenerd31","html_url":"https:\/\/github.com\/thenerd31","followers_url":"https:\/\/api.github.com\/users\/thenerd31\/followers","following_url":"https:\/\/api.github.com\/users\/thenerd31\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thenerd31\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thenerd31\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thenerd31\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thenerd31\/orgs","repos_url":"https:\/\/api.github.com\/users\/thenerd31\/repos","events_url":"https:\/\/api.github.com\/users\/thenerd31\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thenerd31\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. 
I refreshed the dataset viewer and it now works as expected.\r\n\r\nhttps:\/\/huggingface.co\/datasets\/TheNoob3131\/mosquito-data\r\n\r\n[screenshot of the restored dataset viewer]\r\n\r\nWe will investigate why it occurred in the first place.\r\n","By chance, could you provide some details about the operations done on the dataset: was it private? gated?","Yes, it was a private dataset, and when I made it public, the Dataset Preview did not work. \r\n\r\nHowever, now when I make the dataset private, it says that the Dataset Preview has been disabled. Why is this?","Thanks for the details. For now, the dataset viewer is always disabled on private datasets (see https:\/\/huggingface.co\/docs\/hub\/datasets-viewer for more details)","Hi, it was working fine for a few hours, but then I couldn't see the dataset viewer again (public dataset). Why is this still happening?\r\nIt's the same error too:\r\n![image](https:\/\/user-images.githubusercontent.com\/53668030\/179602465-f220f971-d3aa-49ba-a31b-60510f4c2a89.png)\r\n","OK, this is a bug, thanks for helping spot and reproduce it (it occurs when a dataset is switched to private, then to public). We will be working on it; meanwhile, I've restored the dataset viewer manually again."],"created_at":1658164039000,"updated_at":1658173486000,"closed_at":1658164550000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\nGetting this error when trying to view the dataset preview: \r\nMessage: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/TheNoob3131\/mosquito-data\/resolve\/8aceebd6c4a359d216d10ef020868bd9e8c986dd\/0_Africa_train.csv')\r\n\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4707\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4706","id":1308198454,"node_id":"PR_kwDODunzps47lNBg","number":4706,"title":"Fix empty examples in xtreme dataset for bucc18 
config","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I guess the report link is this instead: https:\/\/huggingface.co\/datasets\/xtreme\/discussions\/1"],"created_at":1658161366000,"updated_at":1658212874000,"closed_at":1658212157000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported in https:\/\/huggingface.co\/muibk, there are empty examples in xtreme\/bucc18.de\r\n\r\nI applied your fix @mustaszewski\r\n\r\nI also used a dict to make the dataset generation much faster","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4706\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4706","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4706","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4706.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4706.patch","merged_at":1658212157000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4705","id":1308161794,"node_id":"PR_kwDODunzps47lFDo","number":4705,"title":"Fix 
crd3","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658159624000,"updated_at":1658423924000,"closed_at":1658423190000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported in https:\/\/huggingface.co\/datasets\/crd3\/discussions\/1#62cc377073b2512b81662794, each split of the dataset was containing the same data. This issues comes from a bug in the dataset script\r\n\r\nI fixed it and also uploaded the data to hf.co to make the dataset work in streaming mode","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4705\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4705","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4705","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4705.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4705.patch","merged_at":1658423190000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4704","id":1308147876,"node_id":"PR_kwDODunzps47lCFt","number":4704,"title":"Skip tests only for lz4\/zstd params if not 
installed","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658158900000,"updated_at":1658235751000,"closed_at":1658234958000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, if `zstandard` or `lz4` are not installed, `test_compression_filesystems` and `test_streaming_dl_manager_extract_all_supported_single_file_compression_types` are skipped for all compression format parameters.\r\n\r\nThis PR fixes these tests, so that if `zstandard` or `lz4` are not installed, the tests are skipped only for the corresponding compression parameters (`zstd` or `lz4`), whereas the tests are not skipped for all the other compression parameters (`gzip`, `xz` and `bz2`).\r\n\r\nRelated to:\r\n- #4688","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4704\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4704","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4704","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4704.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4704.patch","merged_at":1658234958000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4703","id":1307844097,"node_id":"PR_kwDODunzps47kABf","number":4703,"title":"Make cast in `from_pandas` more 
robust","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658145349000,"updated_at":1658488662000,"closed_at":1658487924000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Make the cast in `from_pandas` more robust (as it was done for the packaged modules in https:\/\/github.com\/huggingface\/datasets\/pull\/4364)\r\n\r\nThis should be useful in situations like [this one](https:\/\/discuss.huggingface.co\/t\/loading-custom-audio-dataset-and-fine-tuning-model\/8836\/4).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4703\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4703","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4703","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4703.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4703.patch","merged_at":1658487924000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4702","id":1307793811,"node_id":"I_kwDODunzps5N81mT","number":4702,"title":"Domain specific dataset discovery on the Hugging Face hub 
","user":{"login":"davanstrien","id":8995957,"node_id":"MDQ6VXNlcjg5OTU5NTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8995957?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/davanstrien","html_url":"https:\/\/github.com\/davanstrien","followers_url":"https:\/\/api.github.com\/users\/davanstrien\/followers","following_url":"https:\/\/api.github.com\/users\/davanstrien\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/davanstrien\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/davanstrien\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/davanstrien\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/davanstrien\/orgs","repos_url":"https:\/\/api.github.com\/users\/davanstrien\/repos","events_url":"https:\/\/api.github.com\/users\/davanstrien\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/davanstrien\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I added a link to this issue in our internal request for adding keywords\/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.","> Hi! I added a link to this issue in our internal request for adding keywords\/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.\r\n\r\nThanks, for letting me know. Will you allow the topic tags to be user-generated or only chosen from a list?","Thanks for opening this issue @davanstrien.\r\n\r\nAs we discussed last week, the tag approach would be in principle the simpler to be implemented, either the domain tag (with closed vocabulary: more reliable but also more rigid), or the topic tag (with open vocabulary: more flexible for user needs)","Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n\r\n(where i suggested using `tags: - foo - bar` IIRC.\r\n\r\nThanks a ton!","> Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n> \r\n> (where i suggested using `tags: - foo - bar` IIRC.\r\n> \r\n> Thanks a ton!\r\n\r\nThis doesn't ring a bell - I did a quick search of https:\/\/discuss.huggingface.co but didn't find anything. \r\n\r\nThe `tags: ` approach sounds like a good option for this. It would be especially nice if these could suggest existing tags, but this probably won't be easily possible through the current interface. 
\r\n","I opened a PR to add \"tags\" to the YAML validator:\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/4716\r\n\r\nI also added \"tags\" to the [tagging app](https:\/\/huggingface.co\/spaces\/huggingface\/datasets-tagging), with suggestions like \"bio\" or \"newspapers\"","Thanks @lhoestq for the initiative.\r\n \r\nJust one question: are \"tags\" already supported on the Hub? \r\n\r\nI think they aren't. Thus, the Hub should support them so that they are properly displayed.","I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)","> I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)\r\n\r\nI think this would already be a helpful start. I'm happy to try this out with the datasets added to https:\/\/huggingface.co\/organizations\/biglam and use the `huggingface_hub` to filter those datasets using the tags. "],"created_at":1658142843000,"updated_at":1658243891000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\n## The problem \r\n\r\nThe datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data). \r\n\r\nThere are various ways of identifying datasets that may be relevant for a particular use case:\r\n\r\n- searching \r\n- various filters \r\n\r\nCurrently, however, there isn't an easy way to identify datasets belonging to a specific domain. For example, I want to browse machine learning datasets related to 'social science' or 'climate change research'. \r\n\r\nThe ability to identify datasets relating to a specific domain has come up in discussions around the [BigLA](https:\/\/github.com\/bigscience-workshop\/lam\/) datasets hackathon https:\/\/github.com\/bigscience-workshop\/lam\/discussions\/31#discussioncomment-3123610. As part of the hackathon, we're currently collecting datasets related to Libraries, Archives and Museums and making them available via the hub. We currently do this under a Hugging Face organization (https:\/\/huggingface.co\/biglam). However, going forward, I can see some of these datasets being migrated to sit under an organization that is the custodian of the dataset (for example, a national library the data was originally from). At this point, it becomes more difficult to quickly identify datasets from this domain without relying on search. 
\r\n\r\nThis is also related to some existing issues on GitHub related to metadata on the hub:\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3625 \r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/3877\r\n\r\n**Describe the solution you'd like**\r\n\r\n### Some possible solutions that may help with this:\r\n\r\n#### Enable domain tags (from a controlled vocabulary)\r\n- This would add a metadata field to the YAML for the domain a dataset relates to\r\n- Advantages:\r\n\t- the list is controlled, allowing it to be more easily integrated into the datasets tag app (https:\/\/huggingface.co\/space\/huggingface\/datasets-tagging) \r\n\t- the controlled vocabulary could align with an existing controlled vocabulary \r\n\t- this additional metadata can be used to perform filtering by domain \r\n- Disadvantages: \r\n\t- choosing the best controlled vocab may be difficult\r\n\t- there are many datasets that are likely to fit into the 'machine learning' domain (i.e. there is a long tail of datasets that aren't in the more 'generic' machine learning domain) \r\n\r\n#### Enable topic tags (user-generated)\r\n\r\nEnable 'free form' topic tags for datasets and models. This would be closer to GitHub's repository topics, which can be chosen from a controlled list (https:\/\/github.com\/topics\/) but can also be more user\/org specific. This could potentially be useful for organizations to also manage their own models and datasets as the number they hold in their org grows. For example, they may create 'topic tags' for a specific project, so it's clearer which datasets\/models are related to that project. \r\n\r\n\r\n#### Collections\r\n\r\nThis solution would likely be the biggest shift and may require significant changes in the hub frontend. Collections could work in several different ways but would include:\r\n\r\nUsers can curate particular datasets, models, spaces, etc., into a collection. For example, they may create a collection of 'historic newspapers suitable for training language models'. These collections would not be mutually exclusive, i.e. a dataset can belong to zero, one or many collections. Collections can also potentially be nested under other collections. \r\n\r\nThis is fairly common on other data repositories, for example the following collections: \r\n[screenshot of several thematic collections]\r\n\r\nall belong under a higher-level collection (https:\/\/bl.iro.bl.uk\/collections\/353c908d-b495-4413-b047-87236d2573e3?locale=en). \r\n\r\nThere are different models one could use for how these collections could be created:\r\n\r\n- only within an org\r\n- for any dataset\/model\r\n- the owner of a dataset\/model has to agree to be added to a collection\r\n- a collection owner can have people suggest additions to their collection \r\n- other models...\r\n\r\nThese collections could be thematic, related to particular training approaches, curate models with particular inference properties, etc. Whilst some of these features may duplicate current or future tag filters on the hub, they offer the advantage of being flexible and not having to predict what users will want to do upfront. \r\n\r\nThere is also potential for automating the creation of these collections based on existing metadata. 
For example, one could collect models trained on a collection of datasets so for example, if we had a collection of 'historic newspapers suitable for training language models' that contained 30 datasets, we could create another collection 'historic newspaper language models' that takes any model on the hub whose metadata says it used one or more of those 30 datasets. \r\n\r\nThere is also the option of exploring ML approaches to suggest models\/datasets may be relevant to a particular collection. \r\n\r\nThis approach is likely to be quite difficult to implement well and would require significant thought. There is also likely to be a benefit in doing quite a bit of upfront work in curating useful collections to demonstrate the benefits of collections. \r\n\r\n \r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\nIt is possible to collate this information externally, i.e. one could link back to the relevant models\/datasets from an external platform. \r\n\r\n**Additional context**\r\nAdd any other context about the feature request here.\r\n\r\nI'm cc'ing others involved in the BigLAM hackathon who may also have thoughts @cakiki @clancyoftheoverflow @albertvillanova ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702\/reactions","total_count":2,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4702\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4701","id":1307689625,"node_id":"PR_kwDODunzps47jeE9","number":4701,"title":"Added more information in the README about contributors of the Arabic Speech 
Corpus","user":{"login":"nawarhalabi","id":2845798,"node_id":"MDQ6VXNlcjI4NDU3OTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2845798?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nawarhalabi","html_url":"https:\/\/github.com\/nawarhalabi","followers_url":"https:\/\/api.github.com\/users\/nawarhalabi\/followers","following_url":"https:\/\/api.github.com\/users\/nawarhalabi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nawarhalabi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nawarhalabi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nawarhalabi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nawarhalabi\/orgs","repos_url":"https:\/\/api.github.com\/users\/nawarhalabi\/repos","events_url":"https:\/\/api.github.com\/users\/nawarhalabi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nawarhalabi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1658137683000,"updated_at":1659004385000,"closed_at":1659004385000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Added more information in the README about contributors and encouraged reading the thesis for more infos","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4701\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4701","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4701","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4701.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4701.patch","merged_at":1659004384000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4700","id":1307599161,"node_id":"PR_kwDODunzps47jKNx","number":4700,"title":"Support extract lz4 compressed data 
files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1658133691000,"updated_at":1658155439000,"closed_at":1658154707000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4700\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4700","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4700","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4700.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4700.patch","merged_at":1658154707000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4699","id":1307555592,"node_id":"PR_kwDODunzps47jA6Z","number":4699,"title":"Fix Authentification Error while 
streaming","user":{"login":"hkjeon13","id":37480967,"node_id":"MDQ6VXNlcjM3NDgwOTY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37480967?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hkjeon13","html_url":"https:\/\/github.com\/hkjeon13","followers_url":"https:\/\/api.github.com\/users\/hkjeon13\/followers","following_url":"https:\/\/api.github.com\/users\/hkjeon13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hkjeon13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hkjeon13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hkjeon13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hkjeon13\/orgs","repos_url":"https:\/\/api.github.com\/users\/hkjeon13\/repos","events_url":"https:\/\/api.github.com\/users\/hkjeon13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hkjeon13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, thanks for working on this, but the fix for this has already been merged in https:\/\/github.com\/huggingface\/datasets\/pull\/4608."],"created_at":1658131421000,"updated_at":1658322644000,"closed_at":1658322643000,"author_association":"NONE","active_lock_reason":null,"body":"I fixed a few errors when it occurs while streaming the private dataset on the Huggingface Hub.\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(, use_auth_token=, streaming=True)\r\nfor d in dataset['train']:\r\n print(d)\r\n break # this is for checking\r\n```\r\nThis code is an example for streaming private datasets. \r\nwhen the version of the datasets is 2.2.2, it works well but datasets>2.2.2 occurs error like this,\r\n```\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/aiohttp\/client_reqrep.py in raise_for_status(self)\r\n1007 status=self.status,\r\n1008 message=self.reason,\r\n\u2192 1009 headers=self.headers,\r\n1010 )\r\n1011\r\n\r\nClientResponseError: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/...\/train-00000-of-00001-168b451062c67c34.parquet')\r\n```\r\n(this is an example on the dataset has `parquet` extenstion)\r\nIt seems that the `xisfile `module in `download\/streaming_download_manager.py` couldn't recognize the file on \"https:\/\/huggingface.co\/~\".\r\n\r\nso I add three lines.\r\nWith this change, there is no error anymore(but this code is ad-hoc).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4699\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4699","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4699","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4699.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4699.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4698","id":1307539585,"node_id":"PR_kwDODunzps47i9gN","number":4698,"title":"Enable streaming dataset to use the \"all\" split","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4698). All of your documentation changes will be reflected on that endpoint.","@albertvillanova \r\nAdding the validation split causes these two `assert_called_once` assertions to fail with `AssertionError: Expected 'ArrowWriter' to have been called once. Called 2 times`:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/main\/tests\/test_builder.py#L548-L562\r\n\r\nIt might be better to create a new dummy generator for the streaming tests, WDYT? 
Alternatively we could test for `self.call_count` equalling 2.","@cakiki have you read my comment in the issue page?\r\nhttps:\/\/github.com\/huggingface\/datasets\/issues\/4637#issuecomment-1175984812","Streaming with `split=all` seems to be working, will fix the failing test next","Not sure if marking the PR as \"ready for review\" actually notified you, so tagging @albertvillanova just in case :smiley_cat: ","cc @lhoestq "],"created_at":1658130459000,"updated_at":1663070757000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixes #4637","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4698\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4698","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4698","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4698.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4698.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4697","id":1307332253,"node_id":"I_kwDODunzps5N7E6d","number":4697,"title":"Trouble with streaming frgfm\/imagenette vision dataset with TAR 
archive","user":{"login":"frgfm","id":26927750,"node_id":"MDQ6VXNlcjI2OTI3NzUw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26927750?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/frgfm","html_url":"https:\/\/github.com\/frgfm","followers_url":"https:\/\/api.github.com\/users\/frgfm\/followers","following_url":"https:\/\/api.github.com\/users\/frgfm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/frgfm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/frgfm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/frgfm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/frgfm\/orgs","repos_url":"https:\/\/api.github.com\/users\/frgfm\/repos","events_url":"https:\/\/api.github.com\/users\/frgfm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/frgfm\/received_events","type":"User","site_admin":false},"labels":[{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @frgfm, thanks for reporting.\r\n\r\nAs the error message says, streaming mode is not supported out of the box when the dataset contains TAR archive files.\r\n\r\nTo make the dataset streamable, 
you have to use `dl_manager.iter_archive`.\r\n\r\nThere are several examples in other datasets, e.g. food101: https:\/\/huggingface.co\/datasets\/food101\/blob\/main\/food101.py\r\n\r\nAnd yes, as the link you pointed out explains, for the streaming to be possible, the metadata file must be loaded before all of the images:\r\n- either this is the case when iterating the archive (and you get the metadata file before the images)\r\n- or you have to extract the metadata file by hand and upload it separately to the Hub","Hi @albertvillanova :wave:\r\n\r\nThanks! Yeah I saw that but since I didn't have any metadata, I wasn't sure whether I should create them myself.\r\n\r\nSo one last question:\r\nWhat is the metadata supposed to be for archives? The relative path of all files in it?\r\n_(Sorry I'm a bit confused since it's quite hard to debug using the single error message from the data preview :sweat_smile: )_","Hi @frgfm, streaming a dataset that contains a TAR file requires some tweaks because (contrary to ZIP files) the TAR archive does not allow random access to any of the contained member files. Instead, they have to be accessed sequentially (in the order in which they were put into the TAR file when created) and yielded.\r\n\r\nSo when iterating over the TAR file content, when an image file is found, we need to yield it (and not keep it in memory, which would require a huge amount of RAM for large datasets). But when yielding an image file, we also need to yield with it what we call \"metadata\": the class label, and other textual information (for example, for audio files, sometimes we also add info such as the speaker ID, their sex, their age,...).\r\n\r\nAll this information is usually stored in what we call the metadata file: either a JSON or a CSV\/TSV file.\r\n\r\nBut if this is also inside the TAR archive, we need to find this file in the first place when iterating the TAR archive, so that we already have this information when we find an image file and we can yield the image file and its metadata info.\r\n\r\nTherefore:\r\n- either the TAR archive contains the metadata file as the first member when iterating it (something we cannot change as it is done at the creation of the TAR file)\r\n- or if not, then we need to have the metadata file elsewhere\r\n - in these cases, what we do (if the dataset license allows it) is:\r\n - we download the TAR file locally, we extract the metadata file and we host the metadata on the Hub\r\n - we modify the dataset loading script so that it first downloads the metadata file (and reads it) and only then starts iterating the content of the TAR archive file\r\n\r\nSee an example of this process we recently did for \"google\/fleurs\" (their metadata files for \"train\" were at the end of the TAR archives, after all audio files): https:\/\/huggingface.co\/datasets\/google\/fleurs\/discussions\/4\r\n- we uploaded the metadata file to the Hub\r\n- we adapted the loading script to use it","Hi @albertvillanova :wave: \r\n\r\nThanks, since my last message, I went through the repo of https:\/\/huggingface.co\/datasets\/food101\/blob\/main\/food101.py and managed to get it to work in the end :pray: \r\n\r\nHere it is: https:\/\/huggingface.co\/datasets\/frgfm\/imagenette\r\n\r\nI appreciate you opening an issue to document the process, it might help a few!","Great to see that you managed to make your dataset streamable. 
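To make the iteration pattern described above concrete, here is a minimal, hypothetical sketch of the relevant pieces of a `datasets.GeneratorBasedBuilder` loading script that streams images from a TAR archive with `dl_manager.iter_archive`, assuming the metadata has been extracted and hosted separately as a CSV. The names `_METADATA_URL`, `_TAR_URL`, `file_name` and `label` are illustrative placeholders, not the actual imagenette script:

```python
# Sketch of a streamable loading-script fragment (methods of a
# datasets.GeneratorBasedBuilder subclass); URLs and column names are placeholders.
import csv

import datasets


def _split_generators(self, dl_manager):
    metadata_path = dl_manager.download(_METADATA_URL)  # small CSV hosted separately
    archive_path = dl_manager.download(_TAR_URL)        # TAR archive, never extracted
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={
                "metadata_path": metadata_path,
                # iter_archive iterates the TAR members sequentially
                "images": dl_manager.iter_archive(archive_path),
            },
        ),
    ]


def _generate_examples(self, metadata_path, images):
    # Read the metadata first, so every label is known before any image is yielded.
    with open(metadata_path, encoding="utf-8") as f:
        labels = {row["file_name"]: row["label"] for row in csv.DictReader(f)}
    # iter_archive yields (path_inside_archive, file_object) pairs in archive order;
    # each file is read and yielded immediately instead of being kept in memory.
    for idx, (path, file_obj) in enumerate(images):
        if path in labels:
            yield idx, {
                "image": {"path": path, "bytes": file_obj.read()},
                "label": labels[path],
            }
```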
:rocket: \r\n\r\nI'm closing this issue, as for the docs update there is another issue opened:\r\n- #4711"],"created_at":1658112669000,"updated_at":1659366657000,"closed_at":1659366657000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/frgfm\/imagenette\n\n### Description\n\nHello there :wave: \r\n\r\nThanks for the amazing work you've done with HF Datasets! I've just started playing with it, and managed to upload my first dataset. But for the second one, I'm having trouble with the preview since there is some archive extraction involved :sweat_smile: \r\n\r\nBasically, I get a:\r\n```\r\nStatus code: 400\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https:\/\/s3.amazonaws.com\/fast-ai-imageclas\/imagenette2.tgz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```\r\n\r\nI've tried several things and checked this issue https:\/\/github.com\/huggingface\/datasets\/issues\/4181 as well, but no luck so far!\r\n\r\nCould you point me in the right direction please? :pray: \n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4697\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4696","id":1307183099,"node_id":"I_kwDODunzps5N6gf7","number":4696,"title":"Cannot load LinCE dataset","user":{"login":"finiteautomata","id":167943,"node_id":"MDQ6VXNlcjE2Nzk0Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/167943?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/finiteautomata","html_url":"https:\/\/github.com\/finiteautomata","followers_url":"https:\/\/api.github.com\/users\/finiteautomata\/followers","following_url":"https:\/\/api.github.com\/users\/finiteautomata\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/finiteautomata\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/finiteautomata\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/finiteautomata\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/finiteautomata\/orgs","repos_url":"https:\/\/api.github.com\/users\/finiteautomata\/repos","events_url":"https:\/\/api.github.com\/users\/finiteautomata\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/finiteautomata\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @finiteautomata, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"lince\", \"ner_spaeng\")\r\nDownloading builder script: 20.8kB [00:00, 9.09MB\/s] \r\nDownloading metadata: 31.2kB [00:00, 13.5MB\/s] \r\nDownloading and preparing dataset lince\/ner_spaeng (download: 2.93 MiB, generated: 18.45 MiB, post-processed: Unknown size, total: 21.38 MiB) to ...\/.cache\/huggingface\/datasets\/lince\/ner_spaeng\/1.0.0\/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589...\r\nDownloading data: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.08M\/3.08M [00:01<00:00, 2.73MB\/s]\r\nDataset lince downloaded and prepared to ...\/.cache\/huggingface\/datasets\/lince\/ner_spaeng\/1.0.0\/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 630.66it\/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 33611\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 10085\r\n })\r\n test: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 23527\r\n })\r\n})\r\n``` \r\n\r\nPlease note that for this dataset, the original data files are not hosted on the Hugging Face Hub, but on https:\/\/ritual.uh.edu\r\nAnd sometimes, the server might be temporarily unavailable, as your error message said (trying to connect to the server timed out):\r\n```\r\nConnectionError: Couldn't reach https:\/\/ritual.uh.edu\/lince\/libaccess\/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9\/ner_spaeng.zip (ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: \/lince\/libaccess\/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9\/ner_spaeng.zip (Caused by ConnectTimeoutError(, 'Connection to ritual.uh.edu timed out. 
(connect timeout=100)'))\")))\r\n```\r\nIn these cases you could:\r\n- either contact the owners of the data server where the data is hosted to inform them about the issue in their server\r\n- or re-try after waiting some time: usually these issues are just temporary","Great, thanks for checking out!"],"created_at":1658084514000,"updated_at":1658136040000,"closed_at":1658129062000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nCannot load LinCE dataset due to a connection error\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"lince\", \"ner_spaeng\")\r\n```\r\n\r\nA notebook with this code and corresponding error can be found at https:\/\/colab.research.google.com\/drive\/1pgX3bNB9amuUwAVfPFm-XuMV5fEg-cD2\r\n\r\n## Expected results\r\n\r\nIt should load the dataset\r\n\r\n## Actual results\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n in ()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"lince\", \"ner_spaeng\")\r\n\r\n10 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1682 ignore_verifications=ignore_verifications,\r\n 1683 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1684 use_auth_token=use_auth_token,\r\n 1685 )\r\n 1686 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 703 if not downloaded_from_gcs:\r\n 704 self._download_and_prepare(\r\n--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n 707 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1219 \r\n 1220 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1222 \r\n 1223 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 769 split_dict = SplitDict(dataset_name=self.name)\r\n 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 772 \r\n 773 # Checksums verification\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/lince\/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589\/lince.py in _split_generators(self, dl_manager)\r\n 481 def _split_generators(self, dl_manager):\r\n 482 \"\"\"Returns SplitGenerators.\"\"\"\r\n--> 483 lince_dir = dl_manager.download_and_extract(f\"{_LINCE_URL}\/{self.config.name}.zip\")\r\n 484 data_dir = os.path.join(lince_dir, self.config.data_dir)\r\n 485 return [\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/download\/download_manager.py in download_and_extract(self, url_or_urls)\r\n 429 
extracted_path(s): `str`, extracted paths of given URL(s).\r\n 430 \"\"\"\r\n--> 431 return self.extract(self.download(url_or_urls))\r\n 432 \r\n 433 def get_recorded_sizes_checksums(self):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/download\/download_manager.py in download(self, url_or_urls)\r\n 313 num_proc=download_config.num_proc,\r\n 314 disable_tqdm=not is_progress_bar_enabled(),\r\n--> 315 desc=\"Downloading data files\",\r\n 316 )\r\n 317 duration = datetime.now() - start_time\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)\r\n 346 # Singleton\r\n 347 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 348 return function(data_struct)\r\n 349 \r\n 350 disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/download\/download_manager.py in _download(self, url_or_filename, download_config)\r\n 333 # append the relative path to the base_path\r\n 334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 335 return cached_path(url_or_filename, download_config=download_config)\r\n 336 \r\n 337 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 195 use_auth_token=download_config.use_auth_token,\r\n 196 ignore_url_params=download_config.ignore_url_params,\r\n--> 197 download_desc=download_config.download_desc,\r\n 198 )\r\n 199 elif os.path.exists(url_or_filename):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)\r\n 531 _raise_if_offline_mode_is_enabled(f\"Tried to reach {url}\")\r\n 532 if head_error is not None:\r\n--> 533 raise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\n 534 elif response is not None:\r\n 535 raise ConnectionError(f\"Couldn't reach {url} (error {response.status_code})\")\r\n\r\nConnectionError: Couldn't reach https:\/\/ritual.uh.edu\/lince\/libaccess\/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9\/ner_spaeng.zip (ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: \/lince\/libaccess\/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9\/ner_spaeng.zip (Caused by ConnectTimeoutError(, 'Connection to ritual.uh.edu timed out. 
(connect timeout=100)'))\")))\r\n```\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4696\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4695","id":1307134701,"node_id":"PR_kwDODunzps47hobQ","number":4695,"title":"Add MANtIS dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4695). 
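Following up on the transient-failure advice in the LinCE thread above: since `load_dataset` raises a plain `ConnectionError` when the external host times out, a simple retry wrapper is often enough. A minimal sketch, with retry counts and wait times chosen arbitrarily for illustration:

```python
# Sketch: retrying load_dataset against a flaky external host (e.g. the
# LinCE files on ritual.uh.edu). Retry counts and waits are arbitrary.
import time

from datasets import DownloadConfig, load_dataset


def load_with_retries(path, name=None, attempts=3, wait_seconds=60):
    download_config = DownloadConfig(max_retries=2)  # also retry at the HTTP level
    for attempt in range(attempts):
        try:
            return load_dataset(path, name, download_config=download_config)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # the server is probably down for longer; give up
            time.sleep(wait_seconds)  # outages like this are usually temporary


dataset = load_with_retries("lince", "ner_spaeng")
```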
All of your documentation changes will be reflected on that endpoint."],"created_at":1658073185000,"updated_at":1658073615000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds MANtIS dataset.\r\nArxiv: [https:\/\/arxiv.org\/abs\/1912.04639](https:\/\/arxiv.org\/abs\/1912.04639)\r\nGithub: [https:\/\/github.com\/Guzpenha\/MANtIS](https:\/\/github.com\/Guzpenha\/MANtIS)\r\n\r\nREADME and dataset tags are WIP.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4695\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4695","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4695","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4695.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4695.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4694","id":1306958380,"node_id":"I_kwDODunzps5N5pos","number":4694,"title":"Distributed data parallel training for streaming datasets","user":{"login":"cyk1337","id":13767887,"node_id":"MDQ6VXNlcjEzNzY3ODg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13767887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cyk1337","html_url":"https:\/\/github.com\/cyk1337","followers_url":"https:\/\/api.github.com\/users\/cyk1337\/followers","following_url":"https:\/\/api.github.com\/users\/cyk1337\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cyk1337\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cyk1337\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cyk1337\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cyk1337\/orgs","repos_url":"https:\/\/api.github.com\/users\/cyk1337\/repos","events_url":"https:\/\/api.github.com\/users\/cyk1337\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cyk1337\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
According to https:\/\/huggingface.co\/docs\/datasets\/use_with_pytorch#stream-data you can use the PyTorch `DataLoader` with `num_workers>0` to distribute the shards across your workers (it uses `torch.utils.data.get_worker_info()` to get the worker ID and select the right subsets of shards to use)"],"created_at":1658021383000,"updated_at":1658767890000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Feature request\r\n\r\nIs there any documentation for using `load_dataset(streaming=True)` with (multi-node, multi-GPU) DDP training? \r\n\r\n### Motivation\r\n\r\nGiven a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation?\r\n\r\n### Your contribution\r\n\r\nDoes it require manually splitting the data files for each worker in `DatasetBuilder._split_generators()`? What is `IterableDatasetShard` expected to do?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4694\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4693","id":1306788322,"node_id":"PR_kwDODunzps47go-F","number":4693,"title":"update `samsum` script","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4693). 
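As a concrete illustration of the sharding behaviour described in the comment above, a minimal sketch of feeding a streaming dataset to a PyTorch `DataLoader` with several workers; the dataset name is only an example:

```python
# Sketch: a streaming dataset whose shards are split across DataLoader workers.
from datasets import load_dataset
from torch.utils.data import DataLoader

# An IterableDataset backed by remote data files (shards); nothing is downloaded upfront.
stream = load_dataset("c4", "en", split="train", streaming=True)

# With num_workers > 0, each worker process uses torch.utils.data.get_worker_info()
# under the hood to pick its own subset of shards, so examples are not duplicated.
loader = DataLoader(stream, num_workers=4, batch_size=32)

for batch in loader:
    ...  # one training step per batch
    break
```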
All of your documentation changes will be reflected on that endpoint."],"created_at":1657972385000,"updated_at":1658231143000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"update `samsum` script after #4672 was merged (citation is also updated)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4693\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4693","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4693","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4693.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4693.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4692","id":1306609680,"node_id":"I_kwDODunzps5N4UgQ","number":4692,"title":"Unable to cast a column with `Image()` by using the `cast_column()` feature","user":{"login":"skrishnan99","id":28833916,"node_id":"MDQ6VXNlcjI4ODMzOTE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28833916?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skrishnan99","html_url":"https:\/\/github.com\/skrishnan99","followers_url":"https:\/\/api.github.com\/users\/skrishnan99\/followers","following_url":"https:\/\/api.github.com\/users\/skrishnan99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skrishnan99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skrishnan99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skrishnan99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skrishnan99\/orgs","repos_url":"https:\/\/api.github.com\/users\/skrishnan99\/repos","events_url":"https:\/\/api.github.com\/users\/skrishnan99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skrishnan99\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, thanks for reporting! 
A PR (https:\/\/github.com\/huggingface\/datasets\/pull\/4614) has already been opened to address this issue."],"created_at":1657925763000,"updated_at":1658237784000,"closed_at":1658237784000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen I create a dataset, then add a column to the created dataset through the `dataset.add_column` feature and then try to cast a column of the dataset (this column contains image paths) with `Image()` by using the `cast_column()` feature, I get the following error - ``` TypeError: Couldn't cast array of type\r\nstring\r\nto\r\n{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```\r\n\r\nWhen I try and cast the same column, but without doing the `add_column` in the previous step, it works as expected.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import Dataset, Image\r\n\r\n\r\ndata_dict = {\r\n \"img_path\": [\"https:\/\/picsum.photos\/200\/300\"]\r\n}\r\n\r\ndataset = Dataset.from_dict(data_dict)\r\n\r\n#NOTE Comment out this line and use cast_column and it works properly\r\ndataset = dataset.add_column(\"yeet\", [1])\r\n\r\n#NOTE This line fails to execute properly if `add_column` is called before\r\ndataset = dataset.cast_column(\"img_path\", Image())\r\n\r\n# #NOTE This is my current workaround. This seems to work fine with\/without `add_column`. While\r\n# # running this, make sure to comment out the `cast_column` line\r\n# new_features = dataset.features.copy()\r\n# new_features[\"img_path\"] = Image()\r\n# dataset = dataset.cast(new_features)\r\n\r\n\r\nprint(dataset)\r\nprint(dataset.features)\r\nprint(dataset[0])\r\n```\r\n\r\n## Expected results\r\n\r\nAble to successfully use `cast_column` to cast a column containing img_paths to now be Image() features after modifying the dataset using `add_column` in a previous step\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/surya\/Desktop\/hf_bug_test.py\", line 14, in \r\n dataset = dataset.cast_column(\"img_path\", Image())\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/fingerprint.py\", line 458, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 1580, in cast_column\r\n dataset._data = dataset._data.cast(dataset.features.arrow_schema)\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1487, in cast\r\n new_tables.append(subtable.cast(subschema, *args, **kwargs))\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 834, in cast\r\n return InMemoryTable(table_cast(self.table, *args, **kwargs))\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1897, in table_cast\r\n return cast_table_to_schema(table, schema)\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1880, in cast_table_to_schema\r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1880, in 
\r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1673, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1673, in \r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"\/home\/surya\/anaconda3\/envs\/snap_test\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1846, in cast_array_to_feature\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nstring\r\nto\r\n{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Ubuntu 20.04.3 LTS\r\n- Python version: 3.9.7\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4692\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4691","id":1306389656,"node_id":"I_kwDODunzps5N3eyY","number":4691,"title":"Dataset Viewer issue for rajistics\/indian_food_images","user":{"login":"rajshah4","id":6808012,"node_id":"MDQ6VXNlcjY4MDgwMTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6808012?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rajshah4","html_url":"https:\/\/github.com\/rajshah4","followers_url":"https:\/\/api.github.com\/users\/rajshah4\/followers","following_url":"https:\/\/api.github.com\/users\/rajshah4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rajshah4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rajshah4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rajshah4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rajshah4\/orgs","repos_url":"https:\/\/api.github.com\/users\/rajshah4\/repos","events_url":"https:\/\/api.github.com\/users\/rajshah4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rajshah4\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi, thanks for reporting. I triggered a refresh of the preview for this dataset, and it works now. I'm not sure what occurred.\r\n\"Capture\r\n\r\n"],"created_at":1657911795000,"updated_at":1658156523000,"closed_at":1658156523000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/rajistics\/indian_food_images\/viewer\/rajistics--indian_food_images\/train\n\n### Description\n\nI have a train\/test split in my dataset \r\n\"Screen\r\nt\r\nThe dataset viewer works for the test split (images of indian food), but does not show my train split. My guess is maybe there is some corrupt image file that is guessing this. 
But I have no idea.\r\n\r\nThe original dataset was pulled from here: https:\/\/www.kaggle.com\/datasets\/l33tc0d3r\/indian-food-classification?resource=download-directory\r\n\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4691\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4690","id":1306321975,"node_id":"PR_kwDODunzps47fG6w","number":4690,"title":"Refactor base extractors","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657907268000,"updated_at":1658134016000,"closed_at":1658133289000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Refactors base extractors as subclasses of `BaseExtractor`:\r\n - this is an abstract class defining the interface with:\r\n - `is_extractable`: abstract class method\r\n - `extract`: abstract static method\r\n- Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`): \r\n - this has a default implementation of `is_extractable`\r\n - this improves performance (reducing the number of file reads) by allowing an already-read `magic_number` to be passed\r\n- Refactors `Extractor`:\r\n - reads magic number from file only once\r\n\r\nThis PR deprecates:\r\n```python\r\nis_extractable, extractor = self.extractor.is_extractable(input_path, return_extractor=True)\r\nself.extractor.extract(input_path, output_path, extractor=extractor)\r\n```\r\nand uses this more Pythonic alternative instead:\r\n```python\r\nextractor_format = self.extractor.infer_extractor_format(input_path)\r\nself.extractor.extract(input_path, output_path, 
extractor_format)\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4690\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4690","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4690","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4690.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4690.patch","merged_at":1658133289000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4689","id":1306230203,"node_id":"PR_kwDODunzps47eyw5","number":4689,"title":"Test extractors for all compression formats","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657902595000,"updated_at":1657907222000,"closed_at":1657906524000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Adds all compression formats to `test_extractor`\r\n- Tests each base extractor for all compression formats\r\n\r\nNote that all compression formats are tested except 
\"rar\".","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4689\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4689","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4689","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4689.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4689.patch","merged_at":1657906524000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4688","id":1306100488,"node_id":"PR_kwDODunzps47eW6C","number":4688,"title":"Skip test_extractor only for zstd param if zstandard not installed","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657895027000,"updated_at":1657898873000,"closed_at":1657898124000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, if `zstandard` is not installed, `test_extractor` is skipped for all compression format parameters.\r\n\r\nThis PR fixes `test_extractor` so that if `zstandard` is not installed, `test_extractor` is skipped only for the `zstd` compression parameter, that is, it is not skipped for all the other compression parameters (`gzip`, 
`xz`,...).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4688\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4688","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4688","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4688.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4688.patch","merged_at":1657898124000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4687","id":1306021415,"node_id":"PR_kwDODunzps47eF_E","number":4687,"title":"Trigger CI also on push to main","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657890689000,"updated_at":1657892841000,"closed_at":1657892123000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, new CI (on GitHub Actions) is only triggered on pull requests branches when the base branch is main.\r\n\r\nThis PR also triggers the CI when a PR is merged to main 
branch.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4687\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4687","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4687","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4687.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4687.patch","merged_at":1657892123000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4686","id":1305974924,"node_id":"PR_kwDODunzps47d8Jf","number":4686,"title":"Align logging with Transformers (again)","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4686). All of your documentation changes will be reflected on that endpoint.","I wasn't aware of https:\/\/github.com\/huggingface\/datasets\/pull\/1845 before opening this PR. 
This issue seems much more complex now ..."],"created_at":1657887869000,"updated_at":1657898863000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #2832 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4686\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4686","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4686","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4686.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4686.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4685","id":1305861708,"node_id":"PR_kwDODunzps47dju8","number":4685,"title":"Fix mock fsspec","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657880592000,"updated_at":1657890303000,"closed_at":1657889560000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Removes an unused method from `DummyTestFS`\r\n- Refactors `mock_fsspec` to make it 
simpler","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4685\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4685","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4685","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4685.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4685.patch","merged_at":1657889560000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4684","id":1305554654,"node_id":"I_kwDODunzps5N0S7e","number":4684,"title":"How to assign new values to Dataset?","user":{"login":"beyondguo","id":37113676,"node_id":"MDQ6VXNlcjM3MTEzNjc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37113676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/beyondguo","html_url":"https:\/\/github.com\/beyondguo","followers_url":"https:\/\/api.github.com\/users\/beyondguo\/followers","following_url":"https:\/\/api.github.com\/users\/beyondguo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/beyondguo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/beyondguo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/beyondguo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/beyondguo\/orgs","repos_url":"https:\/\/api.github.com\/users\/beyondguo\/repos","events_url":"https:\/\/api.github.com\/users\/beyondguo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/beyondguo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! One option is use `map` with a function that overwrites the labels (`dset = dset.map(lamba _: {\"label\": 0}, features=dset.features`)). 
Or you can use the `remove_columns` + `add_column` combination (`dset = dset.remove_columns(\"label\").add_column(\"label\", [0]*len(data)).cast(dset.features)`, but note that this approach creates an in-memory table for the added column instead of writing to disk, which could be problematic for large datasets)."],"created_at":1657858677000,"updated_at":1657901901000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/37113676\/179149159-bbbda0c8-a661-403c-87ed-dc2b4219cd68.png)\r\n\r\nHi, if I want to change some values of the dataset, or add new columns to it, how can I do it?\r\n\r\nFor example, I want to change all the labels of the SST2 dataset to `0`:\r\n```python\r\nfrom datasets import load_dataset\r\ndata = load_dataset('glue','sst2')\r\n\r\ndata['train']['label'] = [0]*len(data)\r\n```\r\n\r\nI will get the error:\r\n```\r\nTypeError: 'Dataset' object does not support item assignment\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4684\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4683","id":1305443253,"node_id":"PR_kwDODunzps47cLkm","number":4683,"title":"Update create dataset card docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1657845689000,"updated_at":1658165160000,"closed_at":1658150650000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR proposes removing the [online dataset card creator](https:\/\/huggingface.co\/datasets\/card-creator\/) in favor of simply copy\/pasting a template and using the [Datasets Tagger app](https:\/\/huggingface.co\/spaces\/huggingface\/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all possible values a user can select in the dropdown menus, whereas the online dataset card creator doesn't, which can make it difficult to know what tag values to input.\r\n\r\nLet me know what you think! :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4683\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4683","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4683","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4683.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4683.patch","merged_at":1658150650000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4682","id":1304788215,"node_id":"I_kwDODunzps5NxXz3","number":4682,"title":"weird issue\/bug with columns (dataset iterable\/stream mode)","user":{"login":"eunseojo","id":12104720,"node_id":"MDQ6VXNlcjEyMTA0NzIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12104720?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eunseojo","html_url":"https:\/\/github.com\/eunseojo","followers_url":"https:\/\/api.github.com\/users\/eunseojo\/followers","following_url":"https:\/\/api.github.com\/users\/eunseojo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eunseojo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eunseojo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eunseojo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eunseojo\/orgs","repos_url":"https:\/\/api.github.com\/users\/eunseojo\/repos","events_url":"https:\/\/api.github.com\/users\/eunseojo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eunseojo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1657805207000,"updated_at":1657805207000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"I have a dataset online (CloverSearch\/cc-news-mutlilingual) that has a bunch of columns, two of which are \"score_title_maintext\" and \"score_title_description\". the original files are jsonl formatted. 
I was trying to iterate through via streaming mode and grab all \"score_title_description\" values, but I kept getting key not found after a certain point of iteration. I found that some json objects in the file don't have \"score_title_description\". And in SOME cases, this returns a NONE and in others it just gets a key error. Why is there an inconsistency here and how can I fix it?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4682\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4681","id":1304617484,"node_id":"I_kwDODunzps5NwuIM","number":4681,"title":"IndexError when loading ImageFolder","user":{"login":"johko","id":2843485,"node_id":"MDQ6VXNlcjI4NDM0ODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2843485?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johko","html_url":"https:\/\/github.com\/johko","followers_url":"https:\/\/api.github.com\/users\/johko\/followers","following_url":"https:\/\/api.github.com\/users\/johko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johko\/orgs","repos_url":"https:\/\/api.github.com\/users\/johko\/repos","events_url":"https:\/\/api.github.com\/users\/johko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi, thanks for reporting! 
If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https:\/\/github.com\/huggingface\/datasets\/blob\/c15b391942764152f6060b59921b09cacc5f22a6\/src\/datasets\/arrow_writer.py#L387) as `asdict(info)` calls `Features({..., \"label\": {'num_classes': 0, 'names': [], 'id': None, '_type': 'ClassLabel'}})`, which then calls `require_decoding` [here](https:\/\/github.com\/huggingface\/datasets\/blob\/c15b391942764152f6060b59921b09cacc5f22a6\/src\/datasets\/features\/features.py#L1516) on the dict value it does not expect.\r\n\r\nI see two ways to fix this:\r\n* custom `asdict` where `dict_factory` is also applied on the `dict` object itself besides dataclasses (the built-in implementation calls `type(dict_obj)` - this means we also need to fix `Features.to_dict` btw) \r\n* implement `DatasetInfo.to_dict` (though adding `to_dict` to a data class is a bit weird IMO)\r\n\r\n@lhoestq Which one of these approaches do you like more?\r\n","Small pref for the first option, it feels weird to know that `Features()` can be called with a dictionary of types defined as dictionaries instead of type instances."],"created_at":1657796275000,"updated_at":1658752674000,"closed_at":1658752674000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nLoading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).\r\n\r\n## Steps to reproduce the bug\r\nPut a csv file in a folder with images and load it:\r\n```python\r\nimport datasets\r\n datasets.load_dataset(\"imagefolder\", data_dir=path\/to\/folder)\r\n```\r\n\r\n## Expected results\r\nI would expect a better error message, like `Unsupported file` or even the dataset loader just ignoring every file that is not an image in that case.\r\n\r\n## Actual results\r\nHere is the whole traceback:\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.11.0-051100-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.9\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4681\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4680","id":1304534770,"node_id":"I_kwDODunzps5NwZ7y","number":4680,"title":"Dataset Viewer issue for 
codeparrot\/xlcost-text-to-code","user":{"login":"loubnabnl","id":44069155,"node_id":"MDQ6VXNlcjQ0MDY5MTU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44069155?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/loubnabnl","html_url":"https:\/\/github.com\/loubnabnl","followers_url":"https:\/\/api.github.com\/users\/loubnabnl\/followers","following_url":"https:\/\/api.github.com\/users\/loubnabnl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/loubnabnl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/loubnabnl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/loubnabnl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/loubnabnl\/orgs","repos_url":"https:\/\/api.github.com\/users\/loubnabnl\/repos","events_url":"https:\/\/api.github.com\/users\/loubnabnl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/loubnabnl\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There seems to be an issue with the `C++-snippet-level` config:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"codeparrot\/xlcost-text-to-code\", \"C++-snippet-level\")\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 352, in get_dataset_config_info\r\n info.splits = {\r\nTypeError: 'NoneType' object is not iterable\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nI remove the dataset-viewer tag since it's not directly related.\r\n\r\nPinging @huggingface\/datasets ","Thanks, I found that this subset wasn't properly defined in the config, so I fixed it. Now I can see the subsets, but I get this error for the viewer:\r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split cache is empty.\r\n```","Yes, the cache is being refreshed; hopefully, it will work in some minutes for all the splits. 
Some are already here:\r\n\r\nhttps:\/\/huggingface.co\/datasets\/codeparrot\/xlcost-text-to-code\/viewer\/Python-snippet-level\/train\r\n","I think all the splits are working as expected now","Perfect, thank you!"],"created_at":1657791950000,"updated_at":1658162220000,"closed_at":1658160276000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/codeparrot\/xlcost-text-to-code\n\n### Description\n\nError\r\n```\r\nServer Error\r\nStatus code: 400\r\nException: TypeError\r\nMessage: 'NoneType' object is not iterable\r\n```\r\nBefore I made a minor change to the dataset script (removing some comments), the viewer was working, but not properly: it wasn't showing the dataset subsets. But the data can be loaded successfully.\r\n\r\nThanks!\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4680\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4679","id":1303980648,"node_id":"PR_kwDODunzps47XX67","number":4679,"title":"Added method to remove excess nesting in a DatasetDict","user":{"login":"CakeCrusher","id":37946988,"node_id":"MDQ6VXNlcjM3OTQ2OTg4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37946988?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/CakeCrusher","html_url":"https:\/\/github.com\/CakeCrusher","followers_url":"https:\/\/api.github.com\/users\/CakeCrusher\/followers","following_url":"https:\/\/api.github.com\/users\/CakeCrusher\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/CakeCrusher\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/CakeCrusher\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/CakeCrusher\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/CakeCrusher\/orgs","repos_url":"https:\/\/api.github.com\/users\/CakeCrusher\/repos","events_url":"https:\/\/api.github.com\/users\/CakeCrusher\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/CakeCrusher\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I think the issue you linked is closed and suggests to use `remove_columns`.\r\n\r\nMoreover, if you end up with a dataset with unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if your function takes batches as input","Hi @lhoestq, you are right; this pull has steered beyond that issue. I created this [colab notebook](https:\/\/colab.research.google.com\/drive\/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) to present the error. 
I tried using batched mode and that doesn't resolve it either. I'm looking into that error right now.","I think you just need to pass one example at a time to your tokenizer; this way you don't end up with nested data:\r\n```python\r\n\r\ndef preprocessFunction(row):\r\n    collatedContext = tokenizer.eos_token.join([row[\"context\"+str(i+1)] for i in range(int(AMT_OF_CONTEXT))])\r\n    response = row[\"response\"]\r\n    tokenizedContext = tokenizer(\r\n        collatedContext, max_length=max_context_length, truncation=True  # don't pass as a list here\r\n    )\r\n    with tokenizer.as_target_tokenizer():\r\n        tokenized_response = tokenizer(\r\n            response, max_length=max_response_length, truncation=True  # don't pass as a list here\r\n        )\r\n    tokenizedContext[\"labels\"] = tokenized_response[\"input_ids\"]\r\n    return tokenizedContext\r\n```","Yes, that is correct; the purpose of this pull is to advise a more general solution like `def remove_excess_nesting(self)`, or maybe to automate the solution (stas00 advised not to automate it as it could \"not be backwards compatible\").","I'm not sure I understand how having `remove_excess_nesting` would make more sense than just fixing the `preprocessFunction` to simply not return nested samples, can you elaborate?","The issue can be a bit difficult to figure out. Only after I added batching did the error make a little more sense:\r\n\r\n> sequence item 0: expected str instance, list found\r\n\r\nbut batching was never intended.\r\n\r\nWhen you run the colab you will notice that only when collating do you learn of this error. So I figured it would be better to address it at the `DatasetDict` level.\r\nI think it would be ideal if the user could be notified at the preprocess function.","I'm not arguing that `remove_excess_nesting` is the right solution, but what I aim to address is dealing with unnecessary nesting as early as possible.","> When you run the colab you will notice that only when collating do you learn of this error.\r\n\r\nI think users can just check the `dataset.features` and they would notice that the data are nested\r\n```python\r\n{\r\n 'input_ids': Sequence(Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), length=-1, id=None)\r\n ...\r\n}\r\n```\r\n\r\nSometimes nested data is intentional, so you can't know in advance if it's a user's mistake or something planned.","Yes, I understand, it could be intentional and only the collator has problems with it. So it is not worth handling it any differently from other non-erroneous data. \r\n\r\nThat being said, do you think there is any use for the `remove_excess_nesting` method? Or maybe it should be applied in a different way? If not, feel free to close this PR. ","I think users can write it and use `map` themselves if needed; it is pretty straightforward to implement.\r\n\r\nI'm closing this PR if you don't mind, and thank you for the discussion :)","No problem @lhoestq, thanks for walking me through it."],"created_at":1657748977000,"updated_at":1658418926000,"closed_at":1658400902000,"author_association":"NONE","active_lock_reason":null,"body":"Added the ability for a `DatasetDict` to remove additional nested layers within its features to avoid conflicts when collating. 
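To make the `dataset.features` check from the thread above concrete, here is a small sketch (the helper name is hypothetical) that flags doubly nested columns right after preprocessing, which is roughly the early notification the PR author asks for:

```python
from datasets import Sequence

def find_nested_columns(dataset):
    # A Sequence whose inner feature is itself a Sequence is the shape
    # that trips up collators expecting one level of lists per example.
    return [
        name
        for name, feature in dataset.features.items()
        if isinstance(feature, Sequence) and isinstance(feature.feature, Sequence)
    ]

# e.g. run right after map(): find_nested_columns(tokenized_ds) -> ["input_ids", ...]
```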
It is meant to accompany [this PR](https:\/\/github.com\/huggingface\/transformers\/pull\/18119) to resolve the same issue [#15505](https:\/\/github.com\/huggingface\/transformers\/issues\/15505).\r\n\r\n@stas00 @lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4679\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4679","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4679","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4679.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4679.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4678","id":1303741432,"node_id":"I_kwDODunzps5NtYP4","number":4678,"title":"Cant pass streaming dataset to dataloader after take()","user":{"login":"zankner","id":39166683,"node_id":"MDQ6VXNlcjM5MTY2Njgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39166683?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zankner","html_url":"https:\/\/github.com\/zankner","followers_url":"https:\/\/api.github.com\/users\/zankner\/followers","following_url":"https:\/\/api.github.com\/users\/zankner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zankner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zankner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zankner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zankner\/orgs","repos_url":"https:\/\/api.github.com\/users\/zankner\/repos","events_url":"https:\/\/api.github.com\/users\/zankner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zankner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Calling `take` on an iterable\/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the first `50_000\/batch_size` batches in the loop."],"created_at":1657733658000,"updated_at":1657804041000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. 
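A sketch of the single-process workaround suggested in the comment above, applied to the reproduction that follows (`num_workers=0` is PyTorch's in-process mode, standing in for the `num_workers=None` wording):

```python
import datasets
import torch

# The iterable produced by take() does not implement shard_data_sources
# (see the traceback below), so worker processes cannot split it into
# shards; loading in the main process avoids the sharding path entirely.
dset = datasets.load_dataset("c4", "en", split="train", streaming=True)
dset = dset.take(50_000).with_format("torch")

loader = torch.utils.data.DataLoader(dataset=dset, batch_size=512, num_workers=0)
for batch in loader:
    ...
```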
Some functions such as `shuffle()` can be applied without breaking the dataloader but not take.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nimport torch\r\n\r\ndset = datasets.load_dataset(path='c4', name='en', split=\"train\", streaming=True)\r\ndset = dset.take(50_000)\r\ndset = dset.with_format(\"torch\")\r\n\r\nnum_workers = 8\r\nbatch_size = 512\r\n\r\nloader = torch.utils.data.DataLoader(dataset=dset,\r\n batch_size=batch_size,\r\n num_workers=num_workers)\r\nfor batch in loader:\r\n ...\r\n```\r\n\r\n## Expected results\r\nNo error thrown when iterating over the dataloader\r\n\r\n## Actual results\r\nOriginal Traceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.9\/dist-packages\/torch\/utils\/data\/_utils\/worker.py\", line 287, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/usr\/local\/lib\/python3.9\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 32, in fetch\r\n data.append(next(self.dataset_iter))\r\n File \"\/root\/.local\/lib\/python3.9\/site-packages\/datasets\/formatting\/dataset_wrappers\/torch_iterable_dataset.py\", line 48, in __iter__\r\n for key, example in self._iter_shard(shard_idx):\r\n File \"\/root\/.local\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 586, in _iter_shard\r\n yield from ex_iterable.shard_data_sources(shard_idx)\r\n File \"\/root\/.local\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 60, in shard_data_sources\r\n raise NotImplementedError(f\"{type(self)} doesn't implement shard_data_sources yet\")\r\nNotImplementedError: doesn't implement shard_data_sources yet\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4678\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4677","id":1302258440,"node_id":"I_kwDODunzps5NnuMI","number":4677,"title":"Random 400 Client Error when pushing 
dataset","user":{"login":"msis","id":577139,"node_id":"MDQ6VXNlcjU3NzEzOQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/577139?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/msis","html_url":"https:\/\/github.com\/msis","followers_url":"https:\/\/api.github.com\/users\/msis\/followers","following_url":"https:\/\/api.github.com\/users\/msis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/msis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/msis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/msis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/msis\/orgs","repos_url":"https:\/\/api.github.com\/users\/msis\/repos","events_url":"https:\/\/api.github.com\/users\/msis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/msis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1657641404000,"updated_at":1657641404000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen pushing a dataset, the client errors randomly with `Bad Request for url:...`.\r\nAt the next call, a new parquet file is created for each shard.\r\nThe client may fail at any random shard.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndataset.push_to_hub(\"ORG\/DATASET\", private=True, branch=\"main\")\r\n```\r\n\r\n## Expected results\r\nPush all the dataset to the Hub with no duplicates. 
\r\nIf it fails, it should retry or fail, but continue from the last failed shard.\r\n\r\n## Actual results\r\n```\r\n---------------------------------------------------------------------------\r\nHTTPError Traceback (most recent call last)\r\ntesting.ipynb Cell 29 in ()\r\n----> [1](testing.ipynb?line=0) dataset.push_to_hub(\"ORG\/DATASET\", private=True, branch=\"main\")\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py:4297, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, max_shard_size, shard_size, embed_external_files)\r\n 4291 warnings.warn(\r\n 4292 \"'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.\",\r\n 4293 FutureWarning,\r\n 4294 )\r\n 4295 max_shard_size = shard_size\r\n-> 4297 repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(\r\n 4298 repo_id=repo_id,\r\n 4299 split=split,\r\n 4300 private=private,\r\n 4301 token=token,\r\n 4302 branch=branch,\r\n 4303 max_shard_size=max_shard_size,\r\n 4304 embed_external_files=embed_external_files,\r\n 4305 )\r\n 4306 organization, dataset_name = repo_id.split(\"\/\")\r\n 4307 info_to_dump = self.info.copy()\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py:4195, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)\r\n 4193 shard.to_parquet(buffer)\r\n 4194 uploaded_size += buffer.tell()\r\n-> 4195 _retry(\r\n 4196 api.upload_file,\r\n 4197 func_kwargs=dict(\r\n 4198 path_or_fileobj=buffer.getvalue(),\r\n 4199 path_in_repo=shard_path_in_repo,\r\n 4200 repo_id=repo_id,\r\n 4201 token=token,\r\n 4202 repo_type=\"dataset\",\r\n 4203 revision=branch,\r\n 4204 identical_ok=False,\r\n 4205 ),\r\n 4206 exceptions=HTTPError,\r\n 4207 status_codes=[504],\r\n 4208 base_wait_time=2.0,\r\n 4209 max_retries=5,\r\n 4210 max_wait_time=20.0,\r\n 4211 )\r\n 4212 shards_path_in_repo.append(shard_path_in_repo)\r\n 4214 # Cleanup to remove unused files\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py:284, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 282 except exceptions as err:\r\n 283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n--> 284 raise err\r\n 285 else:\r\n 286 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py:281, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 279 while True:\r\n 280 try:\r\n--> 281 return func(*func_args, **func_kwargs)\r\n 282 except exceptions as err:\r\n 283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/huggingface_hub\/hf_api.py:1967, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, identical_ok, commit_message, commit_description, create_pr)\r\n 1957 commit_message = (\r\n 1958 commit_message\r\n 1959 if commit_message is not None\r\n 1960 else f\"Upload {path_in_repo} with huggingface_hub\"\r\n 1961 )\r\n 1962 operation = CommitOperationAdd(\r\n 1963 path_or_fileobj=path_or_fileobj,\r\n 1964 path_in_repo=path_in_repo,\r\n 1965 )\r\n-> 1967 pr_url = self.create_commit(\r\n 1968 repo_id=repo_id,\r\n 1969 
repo_type=repo_type,\r\n 1970 operations=[operation],\r\n 1971 commit_message=commit_message,\r\n 1972 commit_description=commit_description,\r\n 1973 token=token,\r\n 1974 revision=revision,\r\n 1975 create_pr=create_pr,\r\n 1976 )\r\n 1977 if pr_url is not None:\r\n 1978 re_match = re.match(REGEX_DISCUSSION_URL, pr_url)\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/huggingface_hub\/hf_api.py:1844, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads)\r\n 1836 commit_url = f\"{self.endpoint}\/api\/{repo_type}s\/{repo_id}\/commit\/{revision}\"\r\n 1838 commit_resp = requests.post(\r\n 1839 url=commit_url,\r\n 1840 headers={\"Authorization\": f\"Bearer {token}\"},\r\n 1841 json=commit_payload,\r\n 1842 params={\"create_pr\": 1} if create_pr else None,\r\n 1843 )\r\n-> 1844 _raise_for_status(commit_resp)\r\n 1845 return commit_resp.json().get(\"pullRequestUrl\", None)\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/huggingface_hub\/utils\/_errors.py:84, in _raise_for_status(request)\r\n 76 if request.status_code == 401:\r\n 77 # The repo was not found and the user is not Authenticated\r\n 78 raise RepositoryNotFoundError(\r\n 79 f\"401 Client Error: Repository Not Found for url: {request.url}. If the\"\r\n 80 \" repo is private, make sure you are authenticated. (Request ID:\"\r\n 81 f\" {request_id})\"\r\n 82 )\r\n---> 84 _raise_with_request_id(request)\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/huggingface_hub\/utils\/_errors.py:95, in _raise_with_request_id(request)\r\n 92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):\r\n 93 e.args = (e.args[0] + f\" (Request ID: {request_id})\",) + e.args[1:]\r\n---> 95 raise e\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/huggingface_hub\/utils\/_errors.py:90, in _raise_with_request_id(request)\r\n 88 request_id = request.headers.get(\"X-Request-Id\")\r\n 89 try:\r\n---> 90 request.raise_for_status()\r\n 91 except Exception as e:\r\n 92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):\r\n\r\nFile ~\/.local\/lib\/python3.9\/site-packages\/requests\/models.py:1021, in Response.raise_for_status(self)\r\n 1016 http_error_msg = (\r\n 1017 f\"{self.status_code} Server Error: {reason} for url: {self.url}\"\r\n 1018 )\r\n 1020 if http_error_msg:\r\n-> 1021 raise HTTPError(http_error_msg, response=self)\r\n\r\nHTTPError: 400 Client Error: Bad Request for url: https:\/\/huggingface.co\/api\/datasets\/ORG\/DATASET\/commit\/main (Request ID: a_F0IQAHJdxGKVRYyu1cF)\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.31\r\n- Python version: 3.9.4\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4677\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
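For reference, the `_retry` helper in the traceback above only retries on HTTP 504 (`status_codes=[504]`), which is why a 400 surfaces immediately. Below is a sketch of the same backoff pattern widened to 400s, hedged because whether re-sending a failed commit is safe is exactly what this issue raises (re-calls created duplicate shard files):

```python
import time
from requests.exceptions import HTTPError

def retry_with_backoff(func, max_retries=5, base_wait_time=2.0,
                       max_wait_time=20.0, status_codes=(400, 504)):
    # Mirrors the exponential backoff in datasets.utils.file_utils._retry:
    # sleep = min(max_wait_time, base_wait_time * 2**retry).
    for retry in range(max_retries + 1):
        try:
            return func()
        except HTTPError as err:
            if retry >= max_retries or err.response.status_code not in status_codes:
                raise
            time.sleep(min(max_wait_time, base_wait_time * 2**retry))
```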
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4676","id":1302202028,"node_id":"I_kwDODunzps5Nngas","number":4676,"title":"Dataset.map gets stuck on _cast_to_python_objects","user":{"login":"srobertjames","id":662612,"node_id":"MDQ6VXNlcjY2MjYxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/662612?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/srobertjames","html_url":"https:\/\/github.com\/srobertjames","followers_url":"https:\/\/api.github.com\/users\/srobertjames\/followers","following_url":"https:\/\/api.github.com\/users\/srobertjames\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/srobertjames\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/srobertjames\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/srobertjames\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/srobertjames\/orgs","repos_url":"https:\/\/api.github.com\/users\/srobertjames\/repos","events_url":"https:\/\/api.github.com\/users\/srobertjames\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/srobertjames\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Are you able to reproduce this? My example is small enough that it should be easy to try.","Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the underlying storage format used by `datasets`). This cast is not needed on NumPy arrays as PyArrow supports them natively, so one way to make this transform faster is to add `return_tensors=\"np\"` to the tokenizer call. \r\n\r\nI think we should mention this in the docs (cc @stevhliu)","I tested this tokenize function and indeed noticed a casting. However it seems to only concerns the `offset_mapping` field, which contains a list of tuples, that is converted to a list of lists. Since `pyarrow` also supports tuples, we actually don't need to convert the tuples to lists. 
\r\n\r\nI think this can be changed here: \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/ede72d3f9796339701ec59899c7c31d2427046fb\/src\/datasets\/features\/features.py#L382-L383\r\n\r\n```diff\r\n- if isinstance(obj, list): \r\n+ if isinstance(obj, (list, tuple)): \r\n```\r\n\r\nand here: \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/ede72d3f9796339701ec59899c7c31d2427046fb\/src\/datasets\/features\/features.py#L386-L387\r\n\r\n```diff\r\n- return obj if isinstance(obj, list) else [], isinstance(obj, tuple)\r\n+ return obj, False\r\n```\r\n\r\n@srobertjames can you try applying these changes and let us know if it helps ? If so, feel free to open a Pull Request to contribute this improvement if you want :)","Wow, adding `return_tensors=\"np\"` sped up my example by a **factor 17x** of and completely eliminated the casting! I'd recommend not only to document it, but to make that the default.\r\n\r\nThe code at https:\/\/github.com\/huggingface\/notebooks\/blob\/main\/examples\/question_answering.ipynb does not specify `return_tensors=\"np\"` but yet avoids the casting penalty. How does it do that? (The ntbk seems to do `return_overflowing_tokens=True, return_offsets_mapping=True,`).\r\n\r\nAlso, surprisingly enough, using `return_tensors=\"pt\"` (which is my eventual application) yields this error:\r\n```\r\nTypeError: Provided `function` which is applied to all elements of table returns a `dict` of types \r\n[, , , ]. \r\nWhen using `batched=True`, make sure provided `function` returns a `dict` of types like \r\n`(, )`.\r\n```","Setting the output to `\"np\"` makes the whole pipeline fast because it moves the data buffers from rust to python to arrow using zero-copy, and also because it does eliminate the casting completely ;)\r\n\r\nHave you had a chance to try eliminating the tuple casting using the trick above ?","@lhoestq I just benchmarked the two edits to `features.py` above, and they appear to solve the problem, bringing my original example to within 20% the speed of the output `\"np\"` example. Nice!\r\n\r\nFor a pull request, do you suggest simply following https:\/\/github.com\/huggingface\/datasets\/blob\/main\/CONTRIBUTING.md ?","Cool ! Sure feel free to follow these instructions to open a PR :) thanks !"],"created_at":1657638598000,"updated_at":1660228020000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n`Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows.\r\n\r\nNot all usages suffer from this. For example, I profiled the preprocessor at https:\/\/github.com\/huggingface\/notebooks\/blob\/main\/examples\/question_answering.ipynb , and it did _not_ have this problem. However, I'm at a loss to figure out how it avoids it, as the example below is simple and minimal and still has this problem.\r\n\r\nThis casting, where it occurs, causes the `Dataset.map` to run approximately 7x slower than it runs for code which does not cause this casting.\r\n\r\nThis may be related to https:\/\/github.com\/huggingface\/datasets\/issues\/1046 . 
However, the tokenizer is _not_ set to return Tensors.\r\n\r\n## Steps to reproduce the bug\r\nA minimal, self-contained example to reproduce is below:\r\n```python\r\nimport transformers\r\nfrom transformers import AutoTokenizer\r\nfrom datasets import load_dataset\r\nimport torch\r\nimport cProfile\r\n\r\npretrained = 'distilbert-base-uncased'\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained)\r\n\r\nsquad = load_dataset('squad')\r\nsquad_train = squad['train']\r\nsquad_tiny = squad_train.select(range(5000))\r\n\r\nassert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)\r\n\r\ndef tokenize(ds):\r\n tokens = tokenizer(text=ds['question'],\r\n text_pair=ds['context'],\r\n add_special_tokens=True,\r\n padding='max_length',\r\n truncation='only_second',\r\n max_length=160,\r\n stride=32,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n )\r\n return tokens\r\n\r\ncmd = 'squad_tiny.map(tokenize, batched=True, remove_columns=squad_tiny.column_names)'\r\ncProfile.run(cmd, sort='tottime')\r\n```\r\n\r\n## Actual results\r\nThe code works, but takes 10-25 sec per batch (about 7x slower than non-casting code), with the following profile. Note that `_cast_to_python_objects` is the culprit.\r\n```\r\n 63524075 function calls (58206482 primitive calls) in 121.836 seconds\r\n\r\n Ordered by: internal time\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n5274034\/40 68.751 0.000 111.060 2.776 features.py:262(_cast_to_python_objects)\r\n 42223832 24.077 0.000 33.310 0.000 {built-in method builtins.isinstance}\r\n 16338\/20 5.121 0.000 111.053 5.553 features.py:361()\r\n 5274135 4.747 0.000 4.749 0.000 {built-in method _abc._abc_instancecheck}\r\n 80\/40 4.731 0.059 116.292 2.907 {pyarrow.lib.array}\r\n 5274135 4.485 0.000 9.234 0.000 abc.py:96(__instancecheck__)\r\n2661564\/2645196 2.959 0.000 4.298 0.000 features.py:1081(_check_non_null_non_empty_recursive)\r\n 5 2.786 0.557 2.786 0.557 {method 'encode_batch' of 'tokenizers.Tokenizer' objects}\r\n 2668052 0.930 0.000 0.930 0.000 {built-in method builtins.len}\r\n 5000 0.930 0.000 0.938 0.000 tokenization_utils_fast.py:187(_convert_encoding)\r\n 5 0.750 0.150 0.808 0.162 {method 'to_pydict' of 'pyarrow.lib.Table' objects}\r\n 1 0.444 0.444 121.749 121.749 arrow_dataset.py:2501(_map_single)\r\n 40 0.375 0.009 116.291 2.907 arrow_writer.py:151(__arrow_array__)\r\n 10 0.066 0.007 0.066 0.007 {method 'write_batch' of 'pyarrow.lib._CRecordBatchWriter' objects}\r\n 1 0.060 0.060 121.835 121.835 fingerprint.py:409(wrapper)\r\n11387\/5715 0.049 0.000 0.175 0.000 {built-in method builtins.getattr}\r\n 36 0.049 0.001 0.049 0.001 {pyarrow._compute.call_function}\r\n 15000 0.040 0.000 0.040 0.000 _collections_abc.py:719(__iter__)\r\n 3 0.023 0.008 0.023 0.008 {built-in method _imp.create_dynamic}\r\n 77 0.020 0.000 0.020 0.000 {built-in method builtins.dir}\r\n 37 0.019 0.001 0.019 0.001 socket.py:543(send)\r\n 15 0.017 0.001 0.017 0.001 tokenization_utils_fast.py:460()\r\n 432\/421 0.015 0.000 0.024 0.000 traitlets.py:1388(_notify_observers)\r\n 5000 0.015 0.000 0.018 0.000 _collections_abc.py:672(keys)\r\n 51 0.014 0.000 0.042 0.001 traitlets.py:276(getmembers)\r\n 5 0.014 0.003 3.775 0.755 tokenization_utils_fast.py:392(_batch_encode_plus)\r\n 3\/1 0.014 0.005 0.035 0.035 {built-in method _imp.exec_dynamic}\r\n 5 0.012 0.002 0.950 0.190 tokenization_utils_fast.py:438()\r\n 31626 0.012 0.000 0.012 0.000 {method 'append' of 'list' objects}\r\n1532\/1001 0.011 0.000 0.189 0.000 
traitlets.py:643(get)\r\n 5 0.009 0.002 3.796 0.759 arrow_dataset.py:2631(apply_function_on_filtered_inputs)\r\n 51 0.009 0.000 0.062 0.001 traitlets.py:1766(traits)\r\n 5 0.008 0.002 3.784 0.757 tokenization_utils_base.py:2632(batch_encode_plus)\r\n 368 0.007 0.000 0.044 0.000 traitlets.py:1715(_get_trait_default_generator)\r\n 26 0.007 0.000 0.022 0.001 traitlets.py:1186(setup_instance)\r\n 51 0.006 0.000 0.010 0.000 traitlets.py:1781()\r\n 80\/32 0.006 0.000 0.052 0.002 table.py:1758(cast_array_to_feature)\r\n 684 0.006 0.000 0.007 0.000 {method 'items' of 'dict' objects}\r\n4344\/1794 0.006 0.000 0.192 0.000 traitlets.py:675(__get__)\r\n...\r\n```\r\n## Environment info\r\nI observed this on both Google colab and my local workstation:\r\n\r\n### Google colab\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n\r\n### Local\r\n- `datasets` version: 2.3.2\r\n- Platform: Windows-7-6.1.7601-SP1\r\n- Python version: 3.8.10\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4676\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4675","id":1302193649,"node_id":"I_kwDODunzps5NneXx","number":4675,"title":"Unable to use dataset with PyTorch dataloader","user":{"login":"BlueskyFR","id":25421460,"node_id":"MDQ6VXNlcjI1NDIxNDYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25421460?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BlueskyFR","html_url":"https:\/\/github.com\/BlueskyFR","followers_url":"https:\/\/api.github.com\/users\/BlueskyFR\/followers","following_url":"https:\/\/api.github.com\/users\/BlueskyFR\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BlueskyFR\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BlueskyFR\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BlueskyFR\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BlueskyFR\/orgs","repos_url":"https:\/\/api.github.com\/users\/BlueskyFR\/repos","events_url":"https:\/\/api.github.com\/users\/BlueskyFR\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BlueskyFR\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
`para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumPy arrays \r\n2. convert NumPy arrays to Torch tensors. \r\n\r\nThe 2nd step is problematic for your case as `datasets` attempts to convert the array of dictionaries to a PyTorch tensor. One way to fix this is to use the [preprocessing logic](https:\/\/github.com\/huggingface\/transformers\/blob\/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e\/examples\/pytorch\/translation\/run_translation.py#L440-L458) from the Transformers translation script. And on our side, I think we can replace a NumPy array of dicts with a dict of NumPy array if the feature type is `Translation`\/`TranslationVariableLanguages` (one array for each language) to get the official PyTorch error message for strings in such case."],"created_at":1657638244000,"updated_at":1657808266000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen using `.with_format(\"torch\")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(\r\n \"para_crawl\",\r\n name=\"enfr\",\r\n cache_dir=\"\/tmp\/test\/\",\r\n split=\"train\",\r\n keep_in_memory=True,\r\n)\r\n\r\ndataloader = DataLoader(ds.with_format(\"torch\"), num_workers=32)\r\nprint(next(iter(dataloader)))\r\n```\r\n\r\nIs there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-\/\r\n\r\nThanks in advance for your help!\r\n\r\n## Expected results\r\n\r\nThe code should run with no error\r\n\r\n## Actual results\r\n\r\n```\r\nAttributeError: 'str' object has no attribute 'dtype'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.10.4\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4675\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4674","id":1301294844,"node_id":"I_kwDODunzps5NkC78","number":4674,"title":"Issue loading datasets -- pyarrow.lib has no 
attribute","user":{"login":"margotwagner","id":39107794,"node_id":"MDQ6VXNlcjM5MTA3Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39107794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/margotwagner","html_url":"https:\/\/github.com\/margotwagner","followers_url":"https:\/\/api.github.com\/users\/margotwagner\/followers","following_url":"https:\/\/api.github.com\/users\/margotwagner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/margotwagner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/margotwagner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/margotwagner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/margotwagner\/orgs","repos_url":"https:\/\/api.github.com\/users\/margotwagner\/repos","events_url":"https:\/\/api.github.com\/users\/margotwagner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/margotwagner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @margotwagner, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug: in an environment with datasets-2.3.2 and pyarrow-8.0.0, I can load the datasets without any problem:\r\n```python\r\n>>> ds = load_dataset(\"glue\", \"cola\")\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n8.0.0\r\n>>> from pyarrow.lib import IpcReadOptions\r\n>>> IpcReadOptions\r\npyarrow.lib.IpcReadOptions\r\n```\r\n\r\nI think you may have a problem in your Python environment: maybe you have also an old version of pyarrow that has precedence when importing it.\r\n\r\nCould you please check this (just after you tried to load the dataset and got the error)?\r\n```python\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n``` "],"created_at":1657577444000,"updated_at":1657601671000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to load sentiment analysis datasets from huggingface, but any dataset I try to use via load_dataset, I get the same error:\r\n`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndataset = load_dataset(\"glue\", \"cola\")\r\n```\r\n\r\n## Expected results\r\nDownload datasets without issue.\r\n\r\n## Actual results\r\n`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 
1.1.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4674\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4673","id":1301010331,"node_id":"I_kwDODunzps5Ni9eb","number":4673,"title":"load_datasets on csv returns everything as a string ","user":{"login":"courtneysprouse","id":25102613,"node_id":"MDQ6VXNlcjI1MTAyNjEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25102613?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/courtneysprouse","html_url":"https:\/\/github.com\/courtneysprouse","followers_url":"https:\/\/api.github.com\/users\/courtneysprouse\/followers","following_url":"https:\/\/api.github.com\/users\/courtneysprouse\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/courtneysprouse\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/courtneysprouse\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/courtneysprouse\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/courtneysprouse\/orgs","repos_url":"https:\/\/api.github.com\/users\/courtneysprouse\/repos","events_url":"https:\/\/api.github.com\/users\/courtneysprouse\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/courtneysprouse\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the \"csv\" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to lacking of implementation in PyArrow. For example:\r\n```python\r\nimport datasets\r\n\r\nfeatures = datasets.Features(\r\n {\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(datasets.Value(\"int32\")),\r\n }\r\n)\r\n\r\nnew_conll = datasets.load_dataset(\"csv\", data_files=\"ner_conll.csv\", features=features)\r\n```\r\ngives `ArrowNotImplementedError` error:\r\n```\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n```\r\n\r\nOn the other hand, if you just would like to save and afterwards load your dataset, you could use `save_to_disk` and `load_from_disk` instead. 
These functions preserve all data types.\r\n```python\r\n>>> orig_conll.save_to_disk(\"ner_conll\")\r\n\r\n>>> from datasets import load_from_disk\r\n\r\n>>> new_conll = load_from_disk(\"ner_conll\")\r\n>>> new_conll\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n>>> new_conll[\"train\"][0]\r\n{'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'id': '0',\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.']}\r\n>>> new_conll[\"train\"].features\r\n{'chunk_tags': Sequence(feature=ClassLabel(num_classes=23, names=['O', 'B-ADJP', 'I-ADJP', 'B-ADVP', 'I-ADVP', 'B-CONJP', 'I-CONJP', 'B-INTJ', 'I-INTJ', 'B-LST', 'I-LST', 'B-NP', 'I-NP', 'B-PP', 'I-PP', 'B-PRT', 'I-PRT', 'B-SBAR', 'I-SBAR', 'B-UCP', 'I-UCP', 'B-VP', 'I-VP'], id=None), length=-1, id=None),\r\n 'id': Value(dtype='string', id=None),\r\n 'ner_tags': Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], id=None), length=-1, id=None),\r\n 'pos_tags': Sequence(feature=ClassLabel(num_classes=47, names=['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB'], id=None), length=-1, id=None),\r\n 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\n```","Hi @albertvillanova!\r\n\r\nThanks so much for your suggestions! That worked! "],"created_at":1657560624000,"updated_at":1657632789000,"closed_at":1657632788000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIf you use:\r\n\r\n`conll_dataset.to_csv(\"ner_conll.csv\")`\r\n\r\nIt will create a csv file with all of your data as expected, however when you load it with:\r\n\r\n`conll_dataset = load_dataset(\"csv\", data_files=\"ner_conll.csv\")` \r\n\r\neverything is read in as a string. For example if I look at everything in 'ner_tags' I get back `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` instead of what I originally saved which was `[[3, 0, 7, 0, 0, 0, 7, 0, 0], [1, 2], [5, 0]]`\r\n\r\nI think maybe there is something funky going on with the csv delimiter \r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\n#load original conll dataset\r\norig_conll = load_dataset(\"conll2003\")\r\n\r\n#save original conll as a csv \r\norig_conll.to_csv(\"ner_conll.csv\")\r\n\r\n#reload conll data as a csv\r\nnew_conll = load_dataset(\"csv\", data_files=\"ner_conll.csv\")`\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\nI would expect the data be returned as the data type I saved it as. I.e. 
if I save a list of ints \r\n[[3, 0, 7, 0, 0, 0, 7, 0, 0]], I shouldnt get back a string ['[3 0 7 0 0 0 7 0 0]']\r\nI also get back a string when I pass a list of strings ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']\r\n\r\n## Actual results\r\nA list of strings `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']`\r\nA string \"['EU' 'rejects' 'German' 'call' 'to' 'boycott' 'British' 'lamb' '.']\"\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4673\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4672","id":1300911467,"node_id":"PR_kwDODunzps47NEfV","number":4672,"title":"Support extract 7-zip compressed data files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Cool! Can you please remove `Fix #3541` from the description as this PR doesn't add support for streaming\/`iter_archive`, so it only partially addresses the issue?\r\n\r\nSide note:\r\nI think we can use `libarchive` (`libarchive-c` is a Python package with the bindings) for streaming 7z archives. 
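For reference, a minimal sketch of how that streaming could look with `libarchive-c` (this is only an illustration of the idea, not actual `datasets` code; it assumes the package is installed, is imported as `libarchive`, and that \"archive.7z\" is a local file):\r\n```python\r\nimport libarchive\r\n\r\n# Iterate over the members of a 7z archive without extracting it to disk first\r\nwith libarchive.file_reader(\"archive.7z\") as reader:\r\n    for entry in reader:\r\n        data = b\"\".join(entry.get_blocks())  # raw bytes of this member\r\n        print(entry.pathname, len(data))\r\n```\r\n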
The only issue with this lib is that it's tricky to install on Windows\/Mac."],"created_at":1657555011000,"updated_at":1657890867000,"closed_at":1657890127000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix partially #3541, fix #4670.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4672\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4672","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4672","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4672.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4672.patch","merged_at":1657890127000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4671","id":1300385909,"node_id":"I_kwDODunzps5NglB1","number":4671,"title":"Dataset Viewer issue for wmt16","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating...","Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...","Indeed, the above mentioned PR fixed the loading script (it was not working before).\r\n\r\nI'm forcing the refresh of the Viewer.","Please note that the above mentioned PR also made an enhancement in the `datasets` library, required by this loading script. This enhancement will only be available to the Viewer once we make our next release.","OK, it's working now.\r\n\r\nhttps:\/\/huggingface.co\/datasets\/wmt16\/viewer\/ro-en\/test\r\n\r\n\"Capture\r\n","Thank you @severo !!"],"created_at":1657528451000,"updated_at":1663075622000,"closed_at":1662624966000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/wmt16\n\n### Description\n\n[Reported](https:\/\/huggingface.co\/spaces\/autoevaluate\/model-evaluator\/discussions\/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. 
AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.\r\n\r\n```\r\nStatus code: 400\r\nException: NotImplementedError\r\nMessage: This is a abstract method\r\n```\r\n\r\nThanks!\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4671\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4670","id":1299984246,"node_id":"I_kwDODunzps5NfC92","number":4670,"title":"Can't extract files from `.7z` zipfile using `download_and_extract`","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fedf891a08bfc77041d575fad6c26091bc0fce52\/datasets\/samsum\/samsum.py#L106-L110\r\n","Related to this issue: https:\/\/github.com\/huggingface\/datasets\/issues\/3541","Sure, let me look into and check what can be done. Will keep you guys updated here!","Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn\u2019t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. 
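For example, a minimal sketch of the explicit-extraction route with `py7zr` (assuming it is installed; `archive_path` and `extract_dir` are placeholder names, not values from the loading script):\r\n```python\r\nimport py7zr\r\n\r\n# Explicitly extract the downloaded .7z archive, then point the loader at the extracted files\r\nwith py7zr.SevenZipFile(archive_path, mode=\"r\") as archive:\r\n    archive.extractall(path=extract_dir)\r\n```\r\n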
Is it fine to use an external library here?","Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "],"created_at":1657477009000,"updated_at":1657890127000,"closed_at":1657890127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI'm adding a new dataset which is a `.7z` archive hosted on Google Drive and contains 3 json files inside. I'm able to download the data files using `download_and_extract` but after downloading it throws this error:\r\n```\r\n>>> dataset = load_dataset(\".\/datasets\/mantis\/\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset mantis\/default to \/Users\/bhavitvyamalik\/.cache\/huggingface\/datasets\/mantis\/default\/1.1.0\/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 77.2M\/77.2M [00:23<00:00, 3.28MB\/s]\r\n\/Users\/bhavitvyamalik\/.cache\/huggingface\/datasets\/downloads\/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/Users\/bhavitvyamalik\/Desktop\/work\/hf\/datasets\/src\/datasets\/load.py\", line 1745, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/Users\/bhavitvyamalik\/Desktop\/work\/hf\/datasets\/src\/datasets\/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/Users\/bhavitvyamalik\/Desktop\/work\/hf\/datasets\/src\/datasets\/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: '\/Users\/bhavitvyamalik\/.cache\/huggingface\/datasets\/downloads\/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6\/merged_train.json'\r\n```\r\njust before generating the splits. I checked the `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's a `7z` archive (similar to the downloaded Google Drive file), which means it didn't get unzipped. 
Do I need to unzip it separately and then pass the paths for train,dev,test files in `SplitGenerator`?\r\n\r\n## Environment info\r\n- `datasets` version: 1.18.4.dev0\r\n- Platform: Darwin-19.6.0-x86_64-i386-64bit\r\n- Python version: 3.7.8\r\n- PyArrow version: 5.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4670\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4669","id":1299848003,"node_id":"I_kwDODunzps5NehtD","number":4669,"title":"loading oscar-corpus\/OSCAR-2201 raises an error","user":{"login":"vitalyshalumov","id":33824221,"node_id":"MDQ6VXNlcjMzODI0MjIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33824221?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vitalyshalumov","html_url":"https:\/\/github.com\/vitalyshalumov","followers_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/followers","following_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/orgs","repos_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/repos","events_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vitalyshalumov\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I had to use the appropriate token for use_auth_token. 
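For anyone hitting the same error, a minimal sketch of the working call (assuming the OSCAR-2201 terms were accepted on the Hub; \"hf_xxx\" is a placeholder for a real User Access Token, not an actual value):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Gated dataset: pass a valid Hub token instead of use_auth_token=True\r\noscar_22 = load_dataset(\"oscar-corpus\/OSCAR-2201\", \"af\", use_auth_token=\"hf_xxx\")\r\n```\r\n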
Thank you."],"created_at":1657436970000,"updated_at":1657531669000,"closed_at":1657531669000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nload_dataset('oscar-2201', 'af')\r\n\r\nraises an error:\r\nTraceback (most recent call last):\r\n File \"\/usr\/lib\/python3.8\/code.py\", line 90, in runcode\r\n exec(code, self.locals)\r\n File \"\", line 1, in \r\n File \"..python3.8\/site-packages\/datasets\/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"...\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"...\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1189, in dataset_module_factory\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at ...\/oscar-2201\/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/oscar-2201\/oscar-2201.py\r\n\r\n\r\nI've tried other permutations such as : \r\noscar_22 = load_dataset('oscar-2201', 'af',use_auth_token=True)\r\noscar_22 = load_dataset('oscar-corpus\/OSCAR-2201', 'af',use_auth_token=True)\r\noscar_22 = load_dataset('oscar-2201', 'af')\r\noscar_22 = load_dataset('oscar-corpus\/OSCAR-2201')\r\n\r\n\r\nwith the same unfortunate result.\r\n\r\n\r\n## Steps to reproduce the bug\r\noscar_22 = load_dataset('oscar-2201', 'af',use_auth_token=True)\r\noscar_22 = load_dataset('oscar-corpus\/OSCAR-2201', 'af',use_auth_token=True)\r\noscar_22 = load_dataset('oscar-2201', 'af')\r\noscar_22 = load_dataset('oscar-corpus\/OSCAR-2201')\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\nloaded data\r\n\r\n## Actual results\r\nTraceback (most recent call last):\r\n File \"\/usr\/lib\/python3.8\/code.py\", line 90, in runcode\r\n exec(code, self.locals)\r\n File \"\", line 1, in \r\n File \"..python3.8\/site-packages\/datasets\/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"...\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"...\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1189, in dataset_module_factory\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at ...\/oscar-2201\/oscar-2201.py or any data file in the same directory. 
Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/oscar-2201\/oscar-2201.py\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4669\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4668","id":1299735893,"node_id":"I_kwDODunzps5NeGVV","number":4668,"title":"Dataset Viewer issue for hungnm\/multilingual-amazon-review-sentiment-processed","user":{"login":"hungnmai","id":21364546,"node_id":"MDQ6VXNlcjIxMzY0NTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21364546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hungnmai","html_url":"https:\/\/github.com\/hungnmai","followers_url":"https:\/\/api.github.com\/users\/hungnmai\/followers","following_url":"https:\/\/api.github.com\/users\/hungnmai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hungnmai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hungnmai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hungnmai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hungnmai\/orgs","repos_url":"https:\/\/api.github.com\/users\/hungnmai\/repos","events_url":"https:\/\/api.github.com\/users\/hungnmai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hungnmai\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It seems like a private dataset. 
The viewer is currently not supported on the private datasets."],"created_at":1657389853000,"updated_at":1657525667000,"closed_at":1657525667000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/hungnm\/multilingual-amazon-review-sentiment\n\n### Description\n\n_No response_\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4668\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4667","id":1299735703,"node_id":"I_kwDODunzps5NeGSX","number":4667,"title":"Dataset Viewer issue for hungnm\/multilingual-amazon-review-sentiment-processed","user":{"login":"hungnmai","id":21364546,"node_id":"MDQ6VXNlcjIxMzY0NTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21364546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hungnmai","html_url":"https:\/\/github.com\/hungnmai","followers_url":"https:\/\/api.github.com\/users\/hungnmai\/followers","following_url":"https:\/\/api.github.com\/users\/hungnmai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hungnmai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hungnmai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hungnmai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hungnmai\/orgs","repos_url":"https:\/\/api.github.com\/users\/hungnmai\/repos","events_url":"https:\/\/api.github.com\/users\/hungnmai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hungnmai\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already 
exists"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1657389795000,"updated_at":1657525635000,"closed_at":1657525635000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4667\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4666","id":1299732238,"node_id":"I_kwDODunzps5NeFcO","number":4666,"title":"Issues with concatenating 
datasets","user":{"login":"ChenghaoMou","id":32014649,"node_id":"MDQ6VXNlcjMyMDE0NjQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32014649?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChenghaoMou","html_url":"https:\/\/github.com\/ChenghaoMou","followers_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/followers","following_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/repos","events_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"}, features=squad[\"train\"].features)\r\n``` ","That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now."],"created_at":1657388714000,"updated_at":1657646175000,"closed_at":1657646174000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIt is impossible to concatenate datasets if a feature is sequence of dict in one dataset and a dict of sequence in another. But based on the document, it should be automatically converted.\r\n\r\n> A [datasets.Sequence](https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#datasets.Sequence) with a internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatilbity layer with the TensorFlow Datasets library but may be un-wanted in some cases. 
If you don\u2019t want this behavior, you can use a python list instead of the [datasets.Sequence](https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#datasets.Sequence).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nsquad = load_dataset(\"squad_v2\")\r\nsquad[\"train\"].to_json(\"output.jsonl\", lines=True)\r\n\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"})\r\nconcatenate_datasets([temp[\"train\"], squad[\"train\"]])\r\n```\r\n\r\n## Expected results\r\nNo error executing that code\r\n\r\n## Actual results\r\n```\r\nValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value(\"null\").\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: macOS-12.4-arm64-arm-64bit\r\n- Python version: 3.8.11\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4666\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4665","id":1299652638,"node_id":"I_kwDODunzps5NdyAe","number":4665,"title":"Unable to create dataset having Python dataset script 
only","user":{"login":"aleSuglia","id":1479733,"node_id":"MDQ6VXNlcjE0Nzk3MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1479733?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aleSuglia","html_url":"https:\/\/github.com\/aleSuglia","followers_url":"https:\/\/api.github.com\/users\/aleSuglia\/followers","following_url":"https:\/\/api.github.com\/users\/aleSuglia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aleSuglia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aleSuglia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aleSuglia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aleSuglia\/orgs","repos_url":"https:\/\/api.github.com\/users\/aleSuglia\/repos","events_url":"https:\/\/api.github.com\/users\/aleSuglia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aleSuglia\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. 
\r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https:\/\/huggingface.co\/datasets\/Heriot-WattUniversity\/dialog-babi\/discussions"],"created_at":1657367146000,"updated_at":1657523409000,"closed_at":1657523401000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nHi there,\r\n\r\nI'm trying to add the following dataset to Huggingface datasets: https:\/\/huggingface.co\/datasets\/Heriot-WattUniversity\/dialog-babi\/blob\/\r\n\r\nI'm trying to do so using the CLI commands but seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already):\r\n```\r\ndatasets-cli test Heriot-WattUniversity\/dialog-babi\/dialog_babi.py --save_infos --all-configs\r\n```\r\nwhile it errors when I remove the python script:\r\n```\r\ndatasets-cli test Heriot-WattUniversity\/dialog-babi\/ --save_infos --all-configs \r\n```\r\nThe error message is the following:\r\n```\r\nFileNotFoundError: Unable to resolve any data file that matches '['**']' at \/Users\/as2180\/workspace\/Heriot-WattUniversity\/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: macOS-12.4-arm64-arm-64bit\r\n- Python version: 3.9.9\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4665\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4664","id":1299571212,"node_id":"PR_kwDODunzps47IvfG","number":4664,"title":"Add stanford dog 
dataset","user":{"login":"khushmeeet","id":8711912,"node_id":"MDQ6VXNlcjg3MTE5MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8711912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/khushmeeet","html_url":"https:\/\/github.com\/khushmeeet","followers_url":"https:\/\/api.github.com\/users\/khushmeeet\/followers","following_url":"https:\/\/api.github.com\/users\/khushmeeet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/khushmeeet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/khushmeeet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/khushmeeet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/khushmeeet\/orgs","repos_url":"https:\/\/api.github.com\/users\/khushmeeet\/repos","events_url":"https:\/\/api.github.com\/users\/khushmeeet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/khushmeeet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi @khushmeeet, thanks for your contribution.\r\n\r\nBut wouldn't it be better to add this dataset to the Hub? \r\n- https:\/\/huggingface.co\/docs\/datasets\/share\r\n- https:\/\/huggingface.co\/docs\/datasets\/dataset_script","Hi @albertvillanova \r\n\r\nDataset is added to Hub - https:\/\/huggingface.co\/datasets\/dgrnd4\/stanford_dog_dataset","Great, so I guess we can close this issue, as the dataset is already available on the Hub.","OK I read the discussion on:\r\n- #4504\r\n\r\nCurrently, priority is adding datasets to the Hub, not here on GitHub.\r\n\r\nIf you would like to contribute the loading script and all the metadata you generated (README + JSON files), you could:\r\n- Either make a PR to the existing dataset on the Hub\r\n- Create a new dataset on the Hub:\r\n - Either under your personal namespace\r\n - or even more professionally, under the namespace `stanfordSVL` (Stanford Vision and Learning Lab: https:\/\/svl.stanford.edu\/)\r\n\r\nYou can use the Community tab to ping us if you need help or have any questions."],"created_at":1657341967000,"updated_at":1657891832000,"closed_at":1657890942000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR is for adding dataset, related to issue #4504.\r\n\r\nWe are adding Stanford dog breed dataset. It is a multi class image classification dataset. 
\r\nDetails can be found here - http:\/\/vision.stanford.edu\/aditya86\/ImageNetDogs\/\r\n\r\nTests on dummy data is failing currently, which I am looking into.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4664\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4664","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4664","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4664.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4664.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4663","id":1299298693,"node_id":"PR_kwDODunzps47H19n","number":4663,"title":"Add text decorators","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657302708000,"updated_at":1658169194000,"closed_at":1658168449000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. 
The goal is to make it easier for users to discover these guides!\r\n\r\n![underline](https:\/\/user-images.githubusercontent.com\/59462357\/178044392-9596693e-9a4a-479a-a282-f1edbd90be1a.png)\r\n\r\nTODO:\r\n\r\n- [x] Open PR to support new Tailwind classes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4663\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4663","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4663","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4663.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4663.patch","merged_at":1658168449000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4662","id":1298845369,"node_id":"PR_kwDODunzps47GTEc","number":4662,"title":"Fix: conll2003 - fix empty example","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657277353000,"updated_at":1657289693000,"closed_at":1657288962000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported in https:\/\/huggingface.co\/datasets\/conll2003\/discussions\/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the 
dataset","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4662\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4662","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4662","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4662.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4662.patch","merged_at":1657288962000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4661","id":1298374944,"node_id":"I_kwDODunzps5NY6Eg","number":4661,"title":"Concurrency bug when using same cache among several jobs","user":{"login":"ioana-blue","id":17202292,"node_id":"MDQ6VXNlcjE3MjAyMjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17202292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ioana-blue","html_url":"https:\/\/github.com\/ioana-blue","followers_url":"https:\/\/api.github.com\/users\/ioana-blue\/followers","following_url":"https:\/\/api.github.com\/users\/ioana-blue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ioana-blue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ioana-blue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ioana-blue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ioana-blue\/orgs","repos_url":"https:\/\/api.github.com\/users\/ioana-blue\/repos","events_url":"https:\/\/api.github.com\/users\/ioana-blue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ioana-blue\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I can confirm that if I run one job first that processes the dataset, then I can run any jobs in parallel with no problem (no write-concurrency anymore...). ","Hi! That's weird. 
It seems like the error points to the `mkstemp` function, but the official docs state the following:\r\n```\r\nThere are no race conditions in the file\u2019s creation, assuming that the platform properly implements the [os.O_EXCL](https:\/\/docs.python.org\/3\/library\/os.html#os.O_EXCL) flag for [os.open()](https:\/\/docs.python.org\/3\/library\/os.html#os.open)\r\n```\r\nSo this could mean your platform doesn't support that flag.\r\n\r\n~~Can you please check if wrapping the temp file creation (the line `tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)` in `_map_single`) with the `multiprocess.Lock` fixes the issue?~~\r\nPerhaps wrapping the temp file creation in `_map_single` with `filelock` could work:\r\n```python\r\nwith FileLock(lock_path):\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n```\r\nCan you please check if that helps?"],"created_at":1657245491000,"updated_at":1657905083000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI used to see this bug with an older version of `datasets`, and it seems to persist. \r\n\r\nThis is my concrete scenario: I launch several evaluation jobs on a cluster where the jobs share the file system and the cache directory used by the huggingface libraries. The evaluation jobs read the same *.csv files. If my jobs all get scheduled at pretty much the same time, there are all kinds of weird concurrency errors. Sometimes it crashes silently. This time I got lucky: it crashed with a stack trace that I can share, so maybe you can get to the bottom of this. If you don't have a similar setup available, it may be hard to reproduce, as you really need two jobs accessing the same file at the same time to see this type of bug. \r\n\r\n## Steps to reproduce the bug\r\nI'm running a modified version of the `run_glue.py` script adapted to my use case. I've seen the same problem when running some GLUE datasets as well (so it's not specific to loading the datasets from csv files). \r\n\r\n## Expected results\r\nNo crash; concurrent access to the (intermediate) files works just fine.\r\n\r\n## Actual results\r\nCrashes due to races\/concurrency bugs. 
\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.1.0\r\n\r\nStack trace that I just got with the crash (I've obfuscated some names, it should still be quite informative):\r\n\r\n```\r\nRunning tokenizer on dataset: 0%| | 0\/3 [00:00\r\n main()\r\n File \"..\/..\/src\/models\/\/run_*******.py\", line 444, in main\r\n raw_datasets = raw_datasets.map(\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/dataset_dict.py\", line 770, in map\r\n {\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/dataset_dict.py\", line 771, in \r\n k: dataset.map(\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2376, in map\r\n return self._map_single(\r\n File \"\/*******\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 551, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 518, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/*******\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 458, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2776, in _map_single\r\n buf_writer, writer, tmp_file = init_buffer_and_writer()\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2696, in init_buffer_and_writer\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/tempfile.py\", line 541, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"\/*******\/\/envs\/tr-crt\/lib\/python3.8\/tempfile.py\", line 250, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/*******\/cache-transformers\/\/transformers\/csv\/default-ef9cd184210742a7\/0.0.0\/51cce309a08df9c4d82ffd9363bbe090bf173197fc01a71b034e8594995a1a58\/tmps8l6j5yc'\r\n```\r\n\r\nAs I ran 100s of experiments last year for an empirical paper, I ran into this type of bug several times. I found several band-aid workarounds, e.g., run one job first that caches the dataset => eliminate concurrency; OR use unique caches => eliminate concurrency (but increase storage space), etc., and it all works fine. \r\n\r\nI'd like to help you fix this bug, as it's really annoying to always have to apply the workarounds. 
Let me know what other info from my side could help you figure out the issue.\r\n\r\nThanks for your help!\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4661\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4660","id":1297128387,"node_id":"PR_kwDODunzps47AYDq","number":4660,"title":"Fix _resolve_single_pattern_locally on Windows with multiple drives","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Good catch ! 
Sorry I forgot (again) about windows paths when writing this x)"],"created_at":1657187850000,"updated_at":1657213416000,"closed_at":1657212727000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception:\r\n```\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\io\\parquet.py:35: in __init__\r\n **kwargs,\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\builder.py:287: in __init__\r\n sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\data_files.py:761: in from_local_or_remote\r\n if not isinstance(patterns_for_key, DataFilesList)\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\data_files.py:723: in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\data_files.py:321: in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\data_files.py:239: in _resolve_single_pattern_locally\r\n for filepath in glob_iter\r\nC:\\hostedtoolcache\\windows\\Python\\3.6.8\\x64\\lib\\site-packages\\datasets\\data_files.py:242: in \r\n os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\npath = 'C:\\\\Users\\\\runneradmin\\\\AppData\\\\Local\\\\Temp\\\\pytest-of-runneradmin\\\\pytest-0\\\\popen-gw0\\\\data6\\\\dataset.parquet'\r\nstart = '\/'\r\n\r\n...\r\n\r\nE ValueError: path is on mount 'C:', start on mount 'D:'\r\n```\r\n\r\nThis PR makes sure that `base_path` is in the same drive as `pattern`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4660\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4660","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4660","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4660.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4660.patch","merged_at":1657212727000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4659","id":1297094140,"node_id":"PR_kwDODunzps47AQo9","number":4659,"title":"Transfer CI to GitHub 
Actions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks a lot @albertvillanova ! I hope we're finally done with flakiness on windows ^^\r\n\r\nAlso thanks for paying extra attention to billing and avoiding running unnecessary jobs. Though for certain aspects (see my comments), I think it's worth having the extra jobs to make our life easier","~@lhoestq I think you forgot to add your comments?~\r\n\r\nI had missed it among all the other comments...","@lhoestq, I'm specially enthusiastic with the fail-fast policy: it was in my TODO list for a long time. I really think it will have a positive impact (I would love to know the spent time saving it will enable, besides the carbon footprint reduction). :wink: \r\n\r\nSo yes, as you said above, let's give it a try at least. If we encounter any inconvenience, we can easily disable it.\r\n\r\nQuestion: I guess I have to disable CircleCI CI before merging this PR?\r\n\r\n"],"created_at":1657186187000,"updated_at":1657625420000,"closed_at":1657624705000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR transfers CI from CircleCI to GitHub Actions. The implementation in GitHub Actions tries to be as faithful as possible to the implementation in CircleCI and get the same output results (exceptions below). 
\r\n\r\n**IMPORTANT NOTE**: The fail-fast policy (described below) has ultimately not been implemented, so that:\r\n- we can continue merging PRs with CI in red because of some random error returned by the Hub\r\n- it is not annoying for maintainers to have to relaunch failed CI jobs\r\n\r\nSee comments here: https:\/\/github.com\/huggingface\/datasets\/pull\/4659#discussion_r918802348\r\n\r\nDifferences in the implementation in GitHub Actions compared to the CircleCI one:\r\n- This PR introduces some *fail-fast* mechanisms to significantly reduce the total time CI is running, both because of the environmental impact and because GitHub Actions CI billing depends on the number of minutes run per month (see [About billing for GitHub Actions](https:\/\/docs.github.com\/en\/billing\/managing-billing-for-github-actions\/about-billing-for-github-actions)):\r\n - All tests *depend* on the `check_code_quality` job: the other test jobs are launched only if `check_code_quality` passes\r\n - The tests are implemented with a matrix strategy (cross-product: OS and PyArrow versions) and fail-fast: if any of the 4 processes fails, the others are cancelled\r\n- OS dependencies for Linux (see table below)\r\n\r\n | OS dependencies | Passed tests | Skipped tests |\r\n | --- | ---: | ---: |\r\n | libsndfile1-dev | 4786 | 3119 |\r\n | libsndfile1 | 4786 | 3119 |\r\n | libsndfile1, sox | 4788 | 3117 |\r\n\r\n - This PR replaces `libsndfile1-dev` with `libsndfile1`: the same number of passing tests but fewer packages installed\r\n - This PR adds `sox`: required by MP3 tests (2 more tests pass: 4788 instead of 4786)\r\n- For tests using PyArrow 6, this PR uses 6.0.1 instead of 6.0.0\r\n\r\nTO DO:\r\n- [ ] Remove old CircleCI CI: kept for the moment to compare stability and performance\r\n\r\nClose #4658.\r\n\r\n## Comparison between CircleCI and GitHub Actions\r\n\r\n| | | CircleCI | GitHub Actions |\r\n| --- | --- | ---: | ---: |\r\n| Ubuntu, pyarrow-latest ||||\r\n|| Passed tests | 4786 | 4788 |\r\n|| Duration | 11m 0s | 10m 10s |\r\n| Windows, pyarrow-latest ||||\r\n|| Passed tests | 4783 | 4783 |\r\n|| Duration | 29m 59s | 22m 56s |","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4659\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4659","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4659","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4659.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4659.patch","merged_at":1657624705000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4658","id":1297001390,"node_id":"I_kwDODunzps5NTquu","number":4658,"title":"Transfer CI tests to GitHub 
Actions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1657181450000,"updated_at":1657624705000,"closed_at":1657624705000,"author_association":"MEMBER","active_lock_reason":null,"body":"Let's try CI tests using GitHub Actions to see if they are more stable than on 
CircleCI.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4658\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4657","id":1296743133,"node_id":"I_kwDODunzps5NSrrd","number":4657,"title":"Add SQuAD2.0 Dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey, It's already present [here](https:\/\/huggingface.co\/datasets\/squad_v2) ","Hi! This dataset is indeed already available on the Hub. 
Closing."],"created_at":1657163976000,"updated_at":1657642492000,"closed_at":1657642492000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *SQuAD2.0*\r\n- **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.*\r\n- **Paper:** *https:\/\/aclanthology.org\/P18-2124.pdf*\r\n- **Data:** *https:\/\/rajpurkar.github.io\/SQuAD-explorer\/*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4657\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4656","id":1296740266,"node_id":"I_kwDODunzps5NSq-q","number":4656,"title":"Add Amazon-QA Dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/Amazon-QA)."],"created_at":1657163711000,"updated_at":1657765212000,"closed_at":1657765212000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Amazon-QA*\r\n- **Description:** *The dataset is .jsonl format, where each line in the file is a json string that corresponds to a question, existing answers to 
the question and the extracted review snippets (relevant to the question).*\r\n- **Paper:** *https:\/\/github.com\/amazonqa\/amazonqa\/tree\/master\/paper*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/amazon-qa.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4656\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4655","id":1296720896,"node_id":"I_kwDODunzps5NSmQA","number":4655,"title":"Simple Wikipedia","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/simple-wiki)."],"created_at":1657162286000,"updated_at":1657764993000,"closed_at":1657764993000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Simple Wikipedia*\r\n- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. 
A complete description of the extraction process can be found in \"Simple English Wikipedia: A New Simplification Task\", William Coster and David Kauchak (2011).*\r\n- **Paper:** *https:\/\/aclanthology.org\/P11-2117\/*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/SimpleWiki.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4655\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4654","id":1296716119,"node_id":"I_kwDODunzps5NSlFX","number":4654,"title":"Add Quora Question Triplets Dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/QQP_triplets)."],"created_at":1657161822000,"updated_at":1657764830000,"closed_at":1657764830000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Quora Question Triplets*\r\n- **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. 
Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.*\r\n- **Paper:** \r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/quora_duplicates_triplets.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4654\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4653","id":1296702834,"node_id":"I_kwDODunzps5NSh1y","number":4653,"title":"Add Altlex dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/altlex)."],"created_at":1657160582000,"updated_at":1657764759000,"closed_at":1657764759000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Altlex*\r\n- **Description:** *Git repository for software associated with the 2016 ACL paper \"Identifying Causal Relations Using Parallel Wikipedia Articles.\u201d*\r\n- **Paper:** *https:\/\/aclanthology.org\/P16-1135.pdf*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/altlex.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational 
response*\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4653\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4652","id":1296697498,"node_id":"I_kwDODunzps5NSgia","number":4652,"title":"Add Sentence Compression Dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/sentence-compression)."],"created_at":1657160026000,"updated_at":1657764708000,"closed_at":1657764708000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Sentence Compression*\r\n- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*\r\n- **Paper:** *https:\/\/www.aclweb.org\/anthology\/D13-1155\/*\r\n- **Data:** *https:\/\/github.com\/google-research-datasets\/sentence-compression\/tree\/master\/data*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4652\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
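The dataset requests in the records above were each resolved by uploading the data to the Hugging Face Hub rather than adding a loading script to this repository. As a minimal sketch of how such a Hub-hosted dataset could then be consumed (assuming the `embedding-data/sentence-compression` repo id from the comment above, and that a `train` split exists), one might write:

```python
# Minimal sketch: load a Hub-hosted dataset referenced in the records above.
# The repo id comes from the issue comment; the split name is an assumption
# and may differ per dataset.
from datasets import load_dataset

ds = load_dataset("embedding-data/sentence-compression", split="train")
print(ds[0])  # inspect the first example to check the schema
```

`load_dataset` downloads and caches the data locally, so repeated runs reuse the cached Arrow files.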
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4651","id":1296689414,"node_id":"I_kwDODunzps5NSekG","number":4651,"title":"Add Flickr 30k Dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/flickr30k-captions)."],"created_at":1657159148000,"updated_at":1657764585000,"closed_at":1657764585000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Flickr 30k*\r\n- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. 
The new images and captions focus on people involved in everyday activities and events.*\r\n- **Paper:** *https:\/\/transacl.org\/ojs\/index.php\/tacl\/article\/view\/229\/33*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/flickr30k_captions.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4651\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4650","id":1296680037,"node_id":"I_kwDODunzps5NScRl","number":4650,"title":"Add SPECTER dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/SPECTER)"],"created_at":1657158092000,"updated_at":1657764469000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *SPECTER*\r\n- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*\r\n- **Paper:** *https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.207*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/specter_train_triples.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational 
response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4650\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4649","id":1296673712,"node_id":"I_kwDODunzps5NSauw","number":4649,"title":"Add PAQ dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/PAQ_pairs)"],"created_at":1657157382000,"updated_at":1657764387000,"closed_at":1657764387000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *PAQ*\r\n- **Description:** *This repository contains code and models to support the research paper\u00a0PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/2102.07033*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/sentence-transformers\/embedding-training-data\/resolve\/main\/PAQ_pairs.jsonl.gz*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4649\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
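For very large corpora such as PAQ (65 million question pairs per the record above), downloading the full archive up front may be impractical. A hedged sketch of streaming it instead (assuming the `embedding-data/PAQ_pairs` repo id from the comment above; field names depend on the actual data):

```python
# Minimal sketch: stream a large Hub dataset without downloading it entirely.
# The repo id comes from the issue comment; streaming yields examples lazily.
from itertools import islice

from datasets import load_dataset

stream = load_dataset("embedding-data/PAQ_pairs", split="train", streaming=True)
for example in islice(stream, 3):  # peek at the first few examples
    print(example)
```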
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4648","id":1296659335,"node_id":"I_kwDODunzps5NSXOH","number":4648,"title":"Add WikiAnswers dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["uploaded dataset [here](https:\/\/huggingface.co\/datasets\/embedding-data\/WikiAnswers)"],"created_at":1657155997000,"updated_at":1657764220000,"closed_at":1657764220000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *WikiAnswers*\r\n- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. 
Each cluster optionally contains an answer provided by WikiAnswers users.*\r\n- **Paper:** *https:\/\/dl.acm.org\/doi\/10.1145\/2623330.2623677*\r\n- **Data:** *https:\/\/github.com\/afader\/oqa#wikianswers-corpus*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4648\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4647","id":1296311270,"node_id":"I_kwDODunzps5NRCPm","number":4647,"title":"Add Reddit dataset","user":{"login":"omarespejel","id":4755430,"node_id":"MDQ6VXNlcjQ3NTU0MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4755430?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omarespejel","html_url":"https:\/\/github.com\/omarespejel","followers_url":"https:\/\/api.github.com\/users\/omarespejel\/followers","following_url":"https:\/\/api.github.com\/users\/omarespejel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omarespejel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omarespejel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omarespejel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omarespejel\/orgs","repos_url":"https:\/\/api.github.com\/users\/omarespejel\/repos","events_url":"https:\/\/api.github.com\/users\/omarespejel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omarespejel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1657136958000,"updated_at":1657136958000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Reddit comments (2015-2018)*\r\n- **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. 
These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/1904.06472*\r\n- **Data:** *https:\/\/github.com\/PolyAI-LDN\/conversational-datasets\/tree\/master\/reddit*\r\n- **Motivation:** *Dataset for training and evaluating models of conversational response*\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4647\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4645","id":1296027785,"node_id":"PR_kwDODunzps468oZ6","number":4645,"title":"Set HF_SCRIPTS_VERSION to main","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657122201000,"updated_at":1657122981000,"closed_at":1657122305000,"author_association":"MEMBER","active_lock_reason":null,"body":"After renaming \"master\" to \"main\", the CI fails with\r\n```\r\nAssertionError: 'https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/main\/datasets\/_dummy\/_dummy.py' not found in \"Couldn't find a dataset script at \/home\/circleci\/datasets\/_dummy\/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/_dummy\/_dummy.py\"\r\n```\r\n\r\nThis is because in the CI we were still using `HF_SCRIPTS_VERSION=master`. 
I changed it to \"main\"","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4645\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4645","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4645","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4645.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4645.patch","merged_at":1657122305000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4644","id":1296018052,"node_id":"PR_kwDODunzps468mQb","number":4644,"title":"[Minor fix] Typo correction","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1657121822000,"updated_at":1657122992000,"closed_at":1657122316000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"recieve -> receive","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4644\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4644","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4644","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4644.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4644.patch","merged_at":1657122316000},"is_pull_request":true} 
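For context on the `HF_SCRIPTS_VERSION` fix above, a hedged sketch of the branch resolution it affects; the URL pattern is copied from the CI assertion error, while the environment-variable lookup is an assumption and the actual logic inside `datasets` may differ:

```python
import os

# The CI exports HF_SCRIPTS_VERSION so dataset scripts resolve against the right branch;
# before the fix it was still "master", which no longer exists after the rename.
revision = os.environ.get("HF_SCRIPTS_VERSION", "main")

# URL pattern copied from the assertion error quoted in the PR description above.
script_url = f"https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/_dummy/_dummy.py"
print(script_url)
```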
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4643","id":1295852650,"node_id":"PR_kwDODunzps468Cqk","number":4643,"title":"Rename master to main","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","All the mentions I found on google were simple URLs that will be redirected, so it's fine. 
I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https:\/\/huggingface.co\/spaces\/flax-community\/dalle-mini\/commit\/b78c972afd5c2d2bed087be6479fe5c9c6cfa741)\r\n- same for [logo generator](https:\/\/huggingface.co\/spaces\/tom-doerr\/logo_generator\/commit\/a9ea330e518870d0ca8f65abb56f71d86750d8e4)\r\n- I opened a PR to fix [vision-datasets-viewer](https:\/\/huggingface.co\/spaces\/nateraw\/vision-datasets-viewer\/discussions\/1)\r\n","Ok let's rename the branch, and then we can merge this PR"],"created_at":1657114470000,"updated_at":1657121806000,"closed_at":1657121108000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR replaces mentions of \"master\" with \"main\" in the code base for several cases:\r\n- set the default dataset script version to \"main\" if the local installation of `datasets` is a dev installation\r\n- update URLs to this github repository to use \"main\"\r\n- update the DVC benchmark\r\n- update the github workflows\r\n- update docstrings\r\n- update tests to compare the changes in dataset cards against \"main\"\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4643\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4643","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4643","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4643.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4643.patch","merged_at":1657121108000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4642","id":1295748083,"node_id":"I_kwDODunzps5NO4vz","number":4642,"title":"Streaming issue for 
ccdv\/pubmed-summarization","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. 
","Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.","I've opened a PR on their Hub dataset to support streaming: https:\/\/huggingface.co\/datasets\/ccdv\/pubmed-summarization\/discussions\/2"],"created_at":1657109587000,"updated_at":1657117054000,"closed_at":1657117054000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/ccdv\/pubmed-summarization\n\n### Description\n\nThis was reported by a [user of AutoTrain Evaluate](https:\/\/huggingface.co\/spaces\/autoevaluate\/model-evaluator\/discussions\/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?\r\n\r\n```\r\nStatus code: 400\r\nException: FileNotFoundError\r\nMessage: https:\/\/huggingface.co\/datasets\/ccdv\/pubmed-summarization\/resolve\/main\/train.zip\/train.txt\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4642\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4641","id":1295633250,"node_id":"I_kwDODunzps5NOcti","number":4641,"title":"Dataset Viewer issue for kmfoda\/booksum","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books\/27681-chapters\/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries\/cliffnotes\/The Last of the Mohicans\/section_1_part_0.txt',\r\n 'book_id': 'The Last of the Mohicans.chapters 1-2',\r\n 'summary_id': 'chapters 1-2',\r\n 'content': None,\r\n 'summary': '{\"name\": \"Chapters 1-2\", \"url\": \"https:\/\/web.archive.org\/web\/20201101053205\/https:\/\/www.cliffsnotes.com\/literature\/l\/the-last-of-the-mohicans\/summary-and-analysis\/chapters-12\", \"summary\": \"Before any characters appear, the time and geography are made clear. Though it is the last war that England and France waged for a country that neither would retain, the wilderness between the forces still has to be...\r\n```\r\n\r\nI'm forcing the refresh of the preview. 
","The preview appears as expected once the refresh forced.","Thank you @albertvillanova \ud83e\udd17 !"],"created_at":1657103896000,"updated_at":1657113928000,"closed_at":1657108686000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/kmfoda\/booksum\n\n### Description\n\nA [user of AutoTrain Evaluate](https:\/\/huggingface.co\/spaces\/autoevaluate\/model-evaluator\/discussions\/9) discovered this dataset cannot be streamed due to:\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/kmfoda\/booksum\/resolve\/47953f583d6967f086cb16a2f4d2346e9834024d\/test.csv')\r\n```\r\n\r\nI'm not sure why it says \"Unauthorized\" since it's just a bunch of CSV files in a repo \n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4641\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4640","id":1295495699,"node_id":"PR_kwDODunzps4660rI","number":4640,"title":"Support all split in streaming mode","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4640). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1657097798000,"updated_at":1657120795000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #4637.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4640\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4640","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4640","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4640.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4640.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4639","id":1295367322,"node_id":"I_kwDODunzps5NNbya","number":4639,"title":"Add HaGRID -- HAnd Gesture Recognition Image Dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1657093292000,"updated_at":1657093292000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset\r\n- **Description:** We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. 
The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2206.08219\r\n- **Data:** https:\/\/github.com\/hukenovs\/hagrid\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4639\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4638","id":1295233315,"node_id":"PR_kwDODunzps4656H9","number":4638,"title":"The speechocean762 dataset","user":{"login":"jimbozhang","id":1777456,"node_id":"MDQ6VXNlcjE3Nzc0NTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1777456?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jimbozhang","html_url":"https:\/\/github.com\/jimbozhang","followers_url":"https:\/\/api.github.com\/users\/jimbozhang\/followers","following_url":"https:\/\/api.github.com\/users\/jimbozhang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jimbozhang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jimbozhang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jimbozhang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jimbozhang\/orgs","repos_url":"https:\/\/api.github.com\/users\/jimbozhang\/repos","events_url":"https:\/\/api.github.com\/users\/jimbozhang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jimbozhang\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["CircleCI reported two errors, but I didn't find the reason. 
The error message:\r\n```\r\n_________________ ERROR collecting tests\/test_dataset_cards.py _________________\r\ntests\/test_dataset_cards.py:53: in \r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\ntests\/test_dataset_cards.py:35: in get_changed_datasets\r\n diff_output = check_output([\"git\", \"diff\", \"--name-only\", \"origin\/master...HEAD\"], cwd=repo_path)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/subprocess.py:356: in check_output\r\n **kwargs).stdout\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/subprocess.py:438: in run\r\n output=stdout, stderr=stderr)\r\nE subprocess.CalledProcessError: Command '['git', 'diff', '--name-only', 'origin\/master...HEAD']' returned non-zero exit status 128.\r\n\r\n=========================== short test summary info ============================\r\nERROR tests\/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\nERROR tests\/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\n= 4011 passed, 2357 skipped, 2 xfailed, 1 xpassed, 116 warnings, 2 errors in 284.32s (0:04:44) =\r\n\r\nExited with code exit status 1\r\n```\r\nI'm not sure if it was caused by this PR ...\r\n\r\nI ran `tests\/test_dataset_cards.py` in my local environment, and it passed:\r\n```\r\n(venv)$ pytest tests\/test_dataset_cards.py\r\n============================== test session starts ==============================\r\nplatform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: \/home\/zhangjunbo\/src\/datasets\r\nplugins: forked-1.4.0, datadir-1.3.1, xdist-2.5.0\r\ncollected 1531 items\r\n\r\ntests\/test_dataset_cards.py ..... [100%]\r\n======================= 766 passed, 765 skipped in 2.55s ========================\r\n```\r\n","@sanchit-gandhi could you also maybe take a quick look? :-)"],"created_at":1657088250000,"updated_at":1659440774000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"[speechocean762](https:\/\/www.openslr.org\/101\/) is a non-native English corpus for pronunciation scoring tasks. 
It is free for both commercial and non-commercial use.\r\n\r\nI believe it will be easier to use once it is available on Hugging Face.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4638\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4638","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4638","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4638.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4638.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4637","id":1294818236,"node_id":"I_kwDODunzps5NLVu8","number":4637,"title":"The \"all\" split breaks streaming","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.","@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!","@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I just pushed the test (to see if it impacts other tests).","It impacted the test `test_generator_based_download_and_prepare` and I have fixed this.\r\n\r\nSo that you can copy the test I implemented in my PR and then implement a fix for this issue that passes the test `tests\/test_builder.py::test_builder_as_streaming_dataset`.","Hi @cakiki are you still interested in working on this? Are you planning to open a PR?","Hi @albertvillanova ! Sorry it took so long; I wanted to spend this weekend working on it."],"created_at":1657058209000,"updated_at":1657893570000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nNot sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split=\"all\"`\r\n\r\n## Steps to reproduce the bug\r\nThe following works:\r\n```python\r\nds = load_dataset('super_glue', 'wsc.fixed', split='all')\r\n```\r\nThe following throws `ValueError: Bad split: all. 
Available splits: ['train', 'validation', 'test']`:\r\n\r\n```python\r\nds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True)\r\n```\r\n\r\n## Expected results\r\nAn iterator over all splits.\r\n\r\n## Actual results\r\nI had to do the following to achieve the desired result:\r\n```python\r\nfrom itertools import chain\r\nds = load_dataset('super_glue', 'wsc.fixed', streaming=True)\r\nit = chain.from_iterable(ds.values())\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.5\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4637\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4636","id":1294547836,"node_id":"I_kwDODunzps5NKTt8","number":4636,"title":"Add info in docs about behavior of download_config.num_proc","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1657040460000,"updated_at":1659004832000,"closed_at":1659004832000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nI went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it.\r\n\r\n**Describe the solution you'd like**\r\n\r\n- Add note about how the default number of workers is 16. Related code:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7bcac0a6a0fc367cc068f184fa132b8de8dfa11d\/src\/datasets\/download\/download_manager.py#L299-L302\r\n\r\n- Add note that if the number of workers is higher than the number of files to download, it won't use multiprocessing.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nmaybe it would also be nice to set `num_proc` = `num_files` when `num_proc` > `num_files`. 
\r\n\r\n**Additional context**\r\n\r\n...\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4636\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4635","id":1294475931,"node_id":"I_kwDODunzps5NKCKb","number":4635,"title":"Dataset Viewer issue for vadis\/sv-ident","user":{"login":"e-tornike","id":20404466,"node_id":"MDQ6VXNlcjIwNDA0NDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20404466?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/e-tornike","html_url":"https:\/\/github.com\/e-tornike","followers_url":"https:\/\/api.github.com\/users\/e-tornike\/followers","following_url":"https:\/\/api.github.com\/users\/e-tornike\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/e-tornike\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/e-tornike\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/e-tornike\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/e-tornike\/orgs","repos_url":"https:\/\/api.github.com\/users\/e-tornike\/repos","events_url":"https:\/\/api.github.com\/users\/e-tornike\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/e-tornike\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis\/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[2]: \r\n{'sentence': 'Im Falle von Umweltbelastungen kann selten eindeutig entschieden werden, ob Unbedenklichkeitswerte bereits erreicht oder \u00fcberschritten sind, die die menschliche Gesundheit oder andere Wohlfahrts\u00bbg\u00fcter\u00ab beeintr\u00e4chtigen.',\r\n 'is_variable': 0,\r\n 'variable': [],\r\n 'research_data': [],\r\n 'doc_id': '51971',\r\n 'uuid': 'ee3d7f88-1a3e-4a59-997f-e986b544a604',\r\n 'lang': 'de'}\r\n```","~~I have forced the refresh of the split in the preview without success.~~\r\n\r\nI have forced the refresh of the split in the preview, and now it works.","Preview seems to work now. \r\n\r\nhttps:\/\/huggingface.co\/datasets\/vadis\/sv-ident\/viewer\/default\/validation","OK, thank you @e-tornike.\r\n\r\nApparently, after forcing the refresh, we just had to wait a little until it is effectively refreshed. ","I'm closing this issue as it was solved after forcing the refresh of the split in the preview.","Thanks a lot! 
:)"],"created_at":1657036093000,"updated_at":1657091613000,"closed_at":1657091534000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/vadis\/sv-ident\/viewer\/default\/validation\n\n### Description\n\nError message when loading validation split in the viewer:\r\n\r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split cache is empty.\r\n```\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4635\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4634","id":1294405251,"node_id":"I_kwDODunzps5NJw6D","number":4634,"title":"Can't load the Hausa audio dataset","user":{"login":"moro23","id":19976800,"node_id":"MDQ6VXNlcjE5OTc2ODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19976800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moro23","html_url":"https:\/\/github.com\/moro23","followers_url":"https:\/\/api.github.com\/users\/moro23\/followers","following_url":"https:\/\/api.github.com\/users\/moro23\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moro23\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moro23\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moro23\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moro23\/orgs","repos_url":"https:\/\/api.github.com\/users\/moro23\/repos","events_url":"https:\/\/api.github.com\/users\/moro23\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moro23\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you provide the error details. It is difficult to debug otherwise. Also try other config. 
`ha` is not a valid."],"created_at":1657032456000,"updated_at":1663078052000,"closed_at":1663078052000,"author_association":"NONE","active_lock_reason":null,"body":"common_voice_train = load_dataset(\"common_voice\", \"ha\", split=\"train+validation\")","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4634\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4633","id":1294367783,"node_id":"PR_kwDODunzps462_qX","number":4633,"title":"[data_files] Only match separated split names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of of them except one:\r\n- projecte-aina\/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open a PR on their repository to fix it\r\nEDIT: pr [here](https:\/\/huggingface.co\/datasets\/projecte-aina\/cat_manynames\/discussions\/2)","Feel free to merge @albertvillanova if it's all good to you :)","Thanks for the feedback @albertvillanova I took your comments into account :)\r\n- added numbers as supported delimiters\r\n- used list comprehension to create the patterns list\r\n- updated the docs and the tests according to your comments\r\n\r\nLet me know what you think !","I ended up removing the patching and the context manager :) merging"],"created_at":1657030691000,"updated_at":1658150429000,"closed_at":1658149653000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported in https:\/\/github.com\/huggingface\/datasets\/issues\/4477, the current pattern matching to infer which file goes into which split is too permissive. 
For example a file \"contest.py\" would be considered part of a test split (it contains \"test\") and \"seqeval.py\" as well (it contains \"eval\").\r\n\r\nIn this PR I made the pattern matching more robust by only matching split names **between separators**. The supported separators are dots, dashes, spaces and underscores.\r\n\r\nI updated the docs accordingly.\r\n\r\nOne detail about the tests: I had to update one test because it was using `PurePath.match` as a reference for globbing, but it doesn't support the `[..]` glob pattern. Therefore I added a `mock_fs` context manager that can be used to easily define a dummy filesystem with certain files in it and run pattern matching tests. Its code comes mostly from test_streaming_download_manager.py\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/4477","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4633\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4633","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4633","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4633.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4633.patch","merged_at":1658149653000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4632","id":1294166880,"node_id":"I_kwDODunzps5NI2tg","number":4632,"title":"'sort' method sorts one column only","user":{"login":"shachardon","id":42108562,"node_id":"MDQ6VXNlcjQyMTA4NTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42108562?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shachardon","html_url":"https:\/\/github.com\/shachardon","followers_url":"https:\/\/api.github.com\/users\/shachardon\/followers","following_url":"https:\/\/api.github.com\/users\/shachardon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shachardon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shachardon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shachardon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shachardon\/orgs","repos_url":"https:\/\/api.github.com\/users\/shachardon\/repos","events_url":"https:\/\/api.github.com\/users\/shachardon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shachardon\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
`ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(ds.sort(\"foo\").to_pandas())\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?","Hi! Thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (number of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted; the rest were left in their original order. ","That's unexpected, can you share the code you used to get this ?"],"created_at":1657020326000,"updated_at":1657195592000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4632\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4631","id":1293545900,"node_id":"PR_kwDODunzps460Vy0","number":4631,"title":"Update WinoBias README","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656966280000,"updated_at":1657200212000,"closed_at":1657199507000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I'm adding some information about WinoBias that I got from the paper 
:smile:\r\n\r\nI think this makes it a bit clearer! ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4631\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4631","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4631","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4631.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4631.patch","merged_at":1657199506000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4630","id":1293470728,"node_id":"PR_kwDODunzps460HFM","number":4630,"title":"fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py.","user":{"login":"gugarosa","id":4120639,"node_id":"MDQ6VXNlcjQxMjA2Mzk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4120639?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gugarosa","html_url":"https:\/\/github.com\/gugarosa","followers_url":"https:\/\/api.github.com\/users\/gugarosa\/followers","following_url":"https:\/\/api.github.com\/users\/gugarosa\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gugarosa\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gugarosa\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gugarosa\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gugarosa\/orgs","repos_url":"https:\/\/api.github.com\/users\/gugarosa\/repos","events_url":"https:\/\/api.github.com\/users\/gugarosa\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gugarosa\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656959215000,"updated_at":1657034392000,"closed_at":1657033701000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4612.\r\n\r\nApparently, the newest `fsspec` versions do not allow access to submodules as attributes if they have not been imported, such as `fsspec.asyn`.\r\n\r\nThus, @mariosasko suggested adding the missing submodule import to allow for its 
access.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4630\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4630","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4630","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4630.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4630.patch","merged_at":1657033701000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4629","id":1293418800,"node_id":"I_kwDODunzps5NGAEw","number":4629,"title":"Rename repo default branch to main","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":4296013012,"node_id":"LA_kwDODunzps8AAAABAA_01A","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/maintenance","name":"maintenance","color":"d4c5f9","default":false,"description":"Maintenance 
tasks"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1656954970000,"updated_at":1657122597000,"closed_at":1657122597000,"author_association":"MEMBER","active_lock_reason":null,"body":"Rename repository default branch to `main` (instead of current `master`).\r\n\r\nOnce renamed, users will have to manually update their local repos:\r\n\r\n- [ ] Upstream:\r\n ```\r\n git branch -m master main\r\n git fetch upstream main\r\n git branch -u upstream\/main main\r\n git remote set-head upstream -a\r\n ```\r\n\r\n- [ ] Origin:\r\nRename fork default branch as well at: https:\/\/github.com\/USERNAME\/lam\/settings\/branches\r\nThen:\r\n ```\r\n git fetch origin main\r\n git remote set-head origin -a\r\n ```\r\n\r\nCC: @sgugger","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4629\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4628","id":1293361308,"node_id":"PR_kwDODunzps46zvFJ","number":4628,"title":"Fix time type `_arrow_to_datasets_dtype` conversion","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656951615000,"updated_at":1657202918000,"closed_at":1657202232000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4620\r\n\r\nThe issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. 
Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format.\r\n\r\ncc @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4628\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4628","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4628","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4628.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4628.patch","merged_at":1657202231000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4627","id":1293287798,"node_id":"PR_kwDODunzps46zfNa","number":4627,"title":"fixed duplicate calculation of spearmanr function in metrics wrapper.","user":{"login":"benlipkin","id":38060297,"node_id":"MDQ6VXNlcjM4MDYwMjk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38060297?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benlipkin","html_url":"https:\/\/github.com\/benlipkin","followers_url":"https:\/\/api.github.com\/users\/benlipkin\/followers","following_url":"https:\/\/api.github.com\/users\/benlipkin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benlipkin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benlipkin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benlipkin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benlipkin\/orgs","repos_url":"https:\/\/api.github.com\/users\/benlipkin\/repos","events_url":"https:\/\/api.github.com\/users\/benlipkin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benlipkin\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great, I can open a PR in `evaluate` as well to optimize this.\r\n\r\nRelatedly, I wanted to add a new metric, Kendall Tau (https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.stats.kendalltau.html). If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc., would it make more sense to do that in the `datasets` or `evaluate` repo (or both)?\r\n\r\nThanks!","PR opened in the `evaluate` library with the same minor adjustment: https:\/\/github.com\/huggingface\/evaluate\/pull\/176 ","> If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. 
would it make more sense to do that in the datasets or evaluate repo (or both)?\r\n\r\nI think you could just add it to `evaluate`, we're not adding new metrics in this repo anymore"],"created_at":1656946921000,"updated_at":1657197669000,"closed_at":1657197669000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"During _compute, the scipy.stats spearmanr function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where return_pvalue=True. I adjusted the _compute function to execute the spearmanr function once, store the results tuple in a temporary variable, and then pass the indexed contents to the expected keys of the returned dictionary.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4627\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4627","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4627","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4627.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4627.patch","merged_at":1657197669000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4626","id":1293256269,"node_id":"I_kwDODunzps5NFYZN","number":4626,"title":"Add non-commercial licensing info for datasets for which we removed tags","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["yep plus `license_details` also makes sense for this IMO"],"created_at":1656945163000,"updated_at":1657290449000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"We removed several YAML tags saying that certain datasets can't be used for commercial purposes: 
https:\/\/github.com\/huggingface\/datasets\/pull\/4613#discussion_r911919753\r\n\r\nReason for this is that we only allow tags that are part of our [supported list of licenses](https:\/\/github.com\/huggingface\/datasets\/blob\/84fc3ad73c85de4eda5d152dfede7671491449cb\/src\/datasets\/utils\/resources\/standard_licenses.tsv)\r\n\r\nWe should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4626\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4625","id":1293163744,"node_id":"PR_kwDODunzps46zELz","number":4625,"title":"Unpack `dl_manager.iter_files` to allow parallization","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Cool thanks ! Yup it sounds like the right solution.\r\n\r\nIt looks like `_generate_tables` needs to be updated as well to fix the CI"],"created_at":1656940618000,"updated_at":1657019514000,"closed_at":1657018848000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode.\r\n\r\n(The issue reported [here](https:\/\/discuss.huggingface.co\/t\/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo\/19887))\r\n\r\nPS: Another option would be to override `FilesIterable.__getitem__` to make it indexable and check for that type in `_shard_kwargs` and `n_shards,` but IMO this solution adds too much unnecessary complexity. 
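For illustration, a rough sketch of the pattern this change enables (the builder class and variable names here are assumptions for the sketch, not the actual diff): materializing the files into a plain list before putting them in `gen_kwargs`, so the streaming runtime can split that list across shards.

```python
import datasets

class MyBuilder(datasets.GeneratorBasedBuilder):  # hypothetical builder
    def _split_generators(self, dl_manager):
        data_files = self.config.data_files["train"]
        # Before: gen_kwargs={"files": dl_manager.iter_files(data_files)}
        # After: unpack the lazy iterable into an indexable list so that the
        # streaming runtime can shard it across dataloader workers.
        files = [file for file in dl_manager.iter_files(data_files)]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"files": files}
            )
        ]
```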
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4625\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4625","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4625","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4625.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4625.patch","merged_at":1657018848000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4624","id":1293085058,"node_id":"PR_kwDODunzps46yzOK","number":4624,"title":"Remove all paperswithcode_id: null","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side","Yup it's maybe better to support it on the Hub side then indeed, thanks ! 
Closing this one"],"created_at":1656936692000,"updated_at":1656940920000,"closed_at":1656940238000,"author_association":"MEMBER","active_lock_reason":null,"body":"On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`:\r\n\r\n[screenshot of the validation error]\r\n\r\nWe've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\nTo have the validation working again we can simply remove all the `paperswithcode_id: null`.\r\n\r\ncc @julien-c ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4624\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4624","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4624","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4624.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4624.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4623","id":1293042894,"node_id":"I_kwDODunzps5NEkTO","number":4623,"title":"Loading MNIST as Pytorch Dataset","user":{"login":"jameschapman19","id":56592797,"node_id":"MDQ6VXNlcjU2NTkyNzk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56592797?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jameschapman19","html_url":"https:\/\/github.com\/jameschapman19","followers_url":"https:\/\/api.github.com\/users\/jameschapman19\/followers","following_url":"https:\/\/api.github.com\/users\/jameschapman19\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jameschapman19\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jameschapman19\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jameschapman19\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jameschapman19\/orgs","repos_url":"https:\/\/api.github.com\/users\/jameschapman19\/repos","events_url":"https:\/\/api.github.com\/users\/jameschapman19\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jameschapman19\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
We haven't implemented the conversion from image data to PyTorch tensors yet, I think\r\n\r\ncc @mariosasko ","So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\n\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ","This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```","Hi! `set_transform`\/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"],"created_at":1656934390000,"updated_at":1656945650000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nConversion of the MNIST dataset to PyTorch fails with the error below.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndataset.set_format('torch')\r\ndataset[0]\r\nprint()\r\n```\r\n\r\n## Expected results\r\nExpect to see torch tensors for image and label\r\n\r\n## Actual results\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\JetBrains\\PyCharm 2020.3.3\\plugins\\python\\helpers\\pydev\\pydevd.py\", line 1491, in _exec\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"C:\\Program Files\\JetBrains\\PyCharm 2020.3.3\\plugins\\python\\helpers\\pydev\\_pydev_imps\\_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"C:\/Users\/chapm\/PycharmProjects\/multiviewdata\/multiviewdata\/huggingface\/mnist.py\", line 13, in <module>\r\n dataset[0]\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 2154, in __getitem__\r\n return self._getitem(\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 2139, in _getitem\r\n formatted_output = format_table(\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\torch_formatter.py\", line 58, in format_row\r\n return self.recursive_tensorize(row)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\torch_formatter.py\", line 54, in recursive_tensorize\r\n return map_nested(self._recursive_tensorize, data_struct, map_list=False)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 356, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 357, in <listcomp>\r\n _single_map_nested((function, obj, 
types, None, True, None))\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 309, in _single_map_nested\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 309, in <dictcomp>\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 293, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\torch_formatter.py\", line 51, in _recursive_tensorize\r\n return self._tensorize(data_struct)\r\n File \"C:\\Users\\chapm\\PycharmProjects\\multiviewdata\\venv\\lib\\site-packages\\datasets\\formatting\\torch_formatter.py\", line 38, in _tensorize\r\n if np.issubdtype(value.dtype, np.integer):\r\nAttributeError: 'bytes' object has no attribute 'dtype'\r\npython-BaseException\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Windows-10-10.0.22579-SP0\r\n- Python version: 3.9.2\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.1\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4623\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4622","id":1293031939,"node_id":"PR_kwDODunzps46ynmT","number":4622,"title":"Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is 
present)","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq @mariosasko pls take a look at https:\/\/github.com\/huggingface\/datasets\/pull\/4622\/commits\/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n","@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)","and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)","Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. 
And if the user wants to have the `label` column alongside metadata columns, they can do so by passing `drop_labels = False` explicitly (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think."],"created_at":1656933800000,"updated_at":1657895843000,"closed_at":1657895064000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Will fix #4621 \r\n\r\nImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value. And then the following condition doesn't pass: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/packaged_modules\/imagefolder\/imagefolder.py#L167\r\n\r\nSo I suggest double-checking inside `analyze()` so that metadata files are not collected if they are not needed (and labels too, to be consistent).\r\n\r\n---\r\nAlso, I added a test to check if labels are inferred correctly from directory names in general (because we didn't have it) :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4622\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4622","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4622","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4622.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4622.patch","merged_at":1657895064000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4621","id":1293030128,"node_id":"I_kwDODunzps5NEhLw","number":4621,"title":"ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is 
present","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"assignees":[{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1656933704000,"updated_at":1657895064000,"closed_at":1657895064000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the 
bug\r\n\r\nIf you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or to pass features manually (when there is a tool that can infer them automatically) doesn't look like a good idea to me either.\r\n\r\n## Steps to reproduce the bug\r\n### Clone an example dataset from the Hub\r\n```bash\r\ngit clone https:\/\/huggingface.co\/datasets\/nateraw\/test-imagefolder-metadata\r\n```\r\n### Try to load it\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"test-imagefolder-metadata\", drop_metadata=True, drop_labels=False)\r\n```\r\nor even just\r\n```python\r\nds = load_dataset(\"test-imagefolder-metadata\", drop_metadata=True)\r\n```\r\nas `drop_labels=False` is the default value.\r\n\r\n## Expected results\r\nA DatasetDict object with two features: `\"image\"` and `\"label\"`.\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/polina\/workspace\/datasets\/debug.py\", line 18, in <module>\r\n ds = load_dataset(\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/load.py\", line 1732, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/builder.py\", line 1227, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/builder.py\", line 793, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/builder.py\", line 1218, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/features\/features.py\", line 1596, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/features\/features.py\", line 1165, in encode_nested_example\r\n {\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/features\/features.py\", line 1165, in <dictcomp>\r\n {\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/utils\/py_utils.py\", line 249, in zip_dict\r\n yield key, tuple(d[key] for d in dicts)\r\n File \"\/home\/polina\/workspace\/datasets\/src\/datasets\/utils\/py_utils.py\", line 249, in <genexpr>\r\n yield key, tuple(d[key] for d in dicts)\r\nKeyError: 'label'\r\n```\r\n\r\n## Environment info\r\n`datasets` master branch \r\n\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.4.1\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4621\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
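To make the default-resolution scheme proposed in the PR #4622 discussion above concrete, here is a minimal sketch; the function name and signature are hypothetical, not the actual ImageFolder code:

```python
def resolve_imagefolder_config(drop_metadata, drop_labels, has_metadata_files):
    """Sketch of the proposed defaults: None means 'let the loader decide'."""
    if drop_metadata is None:
        # Keep metadata only when metadata files are actually present.
        drop_metadata = not has_metadata_files
    if drop_labels is None:
        # By default, keep labels only when metadata is dropped.
        drop_labels = not drop_metadata
    return drop_metadata, drop_labels

# With a metadata.jsonl present and no explicit flags: keep metadata, drop labels.
assert resolve_imagefolder_config(None, None, True) == (False, True)
# Without metadata files: drop metadata, keep labels inferred from directories.
assert resolve_imagefolder_config(None, None, False) == (True, False)
```

Explicit booleans passed by the user would bypass both branches, which addresses the complaint that `drop_labels=False` is silently overridden today.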
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4620","id":1292797878,"node_id":"I_kwDODunzps5NDoe2","number":4620,"title":"Data type is not recognized when using datetime.time","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["cc @mariosasko ","Hi, thanks for reporting! 
I'm investigating the issue."],"created_at":1656922418000,"updated_at":1657202231000,"closed_at":1657202231000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nCreating a dataset from a pandas dataframe with `datetime.time` format generates an error.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport pandas as pd\r\nfrom datetime import time\r\nfrom datasets import Dataset\r\ndf = pd.DataFrame({\"feature_name\": [time(1, 1, 1)]})\r\ndataset = Dataset.from_pandas(df)\r\n```\r\n\r\n## Expected results\r\n\r\nThe dataset should be created.\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 823, in from_pandas\r\n return cls(table, info=info, split=split)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 679, in __init__\r\n inferred_features = Features.from_arrow_schema(arrow_table.schema)\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/features\/features.py\", line 1551, in from_arrow_schema\r\n obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/features\/features.py\", line 1551, in <dictcomp>\r\n obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/features\/features.py\", line 1315, in generate_from_arrow_type\r\n return Value(dtype=_arrow_to_datasets_dtype(pa_type))\r\n File \"\/home\/slesage\/hf\/datasets-server\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/datasets\/features\/features.py\", line 83, in _arrow_to_datasets_dtype\r\n return f\"time64[{arrow_type.unit}]\"\r\nAttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31\r\n- Python version: 3.9.6\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4620\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4619","id":1292107275,"node_id":"I_kwDODunzps5NA_4L","number":4619,"title":"np arrays get turned into native 
lists","user":{"login":"ZhaofengWu","id":11954789,"node_id":"MDQ6VXNlcjExOTU0Nzg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11954789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZhaofengWu","html_url":"https:\/\/github.com\/ZhaofengWu","followers_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/followers","following_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/repos","events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']`, it should return an `np.ndarray`.\r\nI believe internally it will not store it as a list; it only returns a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```","I see, thanks! Any idea if the default numpy \u2192 list conversion might cause precision loss?","I'm not super familiar with how datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a Python list. In which case, I think the precision should be preserved."],"created_at":1656784497000,"updated_at":1656880027000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? 
Is there a way to make sure this doesn't happen?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n>>> import datasets, numpy as np\r\n>>> dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nReusing dataset glue (...)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 1360.61it\/s]\r\n>>> dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 408\/408 [00:00<00:00, 10819.97ex\/s]\r\n>>> dataset2[0][\"tmp\"]\r\n[0.5]\r\n>>> type(dataset2[0][\"tmp\"])\r\n\r\n```\r\n\r\n## Expected results\r\n`dataset2[0][\"tmp\"]` should be an `np.ndarray`.\r\n\r\n## Actual results\r\nIt's a list.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: mac, though I'm pretty sure it happens on a linux machine too\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4619\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4618","id":1292078225,"node_id":"I_kwDODunzps5NA4yR","number":4618,"title":"contribute data loading for object detection datasets with yolo data 
format","user":{"login":"faizankshaikh","id":8406903,"node_id":"MDQ6VXNlcjg0MDY5MDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8406903?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/faizankshaikh","html_url":"https:\/\/github.com\/faizankshaikh","followers_url":"https:\/\/api.github.com\/users\/faizankshaikh\/followers","following_url":"https:\/\/api.github.com\/users\/faizankshaikh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/faizankshaikh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/faizankshaikh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/faizankshaikh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/faizankshaikh\/orgs","repos_url":"https:\/\/api.github.com\/users\/faizankshaikh\/repos","events_url":"https:\/\/api.github.com\/users\/faizankshaikh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/faizankshaikh\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?","@mariosasko sounds good to me!\r\n","Thank you for the suggestion @mariosasko. I agree with the point, but I have a few doubts:\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like:\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script","1. Like this: `load_dataset(\"hf-loaders\/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n3. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader."],"created_at":1656775319000,"updated_at":1658412644000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nAt the moment, HF datasets loads [image classification datasets](https:\/\/huggingface.co\/docs\/datasets\/image_process) out-of-the-box. 
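For reference, the YOLO annotation format mentioned below stores one text file per image, with one line per object: `class_id x_center y_center width height`, where the coordinates are normalized to the image size. A minimal parsing sketch (the label layout is standard YOLO, but the helper name is illustrative and not part of any proposed loader):\r\n\r\n```python\r\ndef parse_yolo_labels(path):\r\n    # Each line: \"class_id x_center y_center width height\" (floats normalized to [0, 1])\r\n    objects = []\r\n    with open(path, encoding=\"utf-8\") as f:\r\n        for line in f:\r\n            class_id, x_c, y_c, w, h = line.split()\r\n            objects.append({\"class_id\": int(class_id), \"bbox\": [float(x_c), float(y_c), float(w), float(h)]})\r\n    return objects\r\n```\r\n\r\n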
There could be a data loader for loading standard object detection datasets ([original discussion here](https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\/discussions\/2)).\r\n\r\n**Describe the solution you'd like**\r\nI wrote a [custom script](https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\/blob\/main\/detect_chess_pieces.py) to load a dataset which has the YOLO data format. \r\n\r\n**Describe alternatives you've considered**\r\nThe script can either be a standalone dataset builder, or a modified version of `ImageFolder`.\r\n\r\n**Additional context**\r\nI would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching \ud83d\ude04 \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4618\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4615","id":1291307428,"node_id":"PR_kwDODunzps46tADt","number":4615,"title":"Fix `embed_storage` on features inside lists\/sequences","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656676328000,"updated_at":1657282390000,"closed_at":1657281696000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add a dedicated function for embed_storage to always preserve the embedded\/casted arrays (and to have more control over `embed_storage` in general).\r\n\r\nFix #4591 \r\n\r\n~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ 
Done!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4615\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4615","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4615","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4615.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4615.patch","merged_at":1657281695000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4614","id":1291218020,"node_id":"PR_kwDODunzps46ssfw","number":4614,"title":"Ensure ConcatenationTable.cast uses target_schema metadata","user":{"login":"dtuit","id":8114067,"node_id":"MDQ6VXNlcjgxMTQwNjc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8114067?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dtuit","html_url":"https:\/\/github.com\/dtuit","followers_url":"https:\/\/api.github.com\/users\/dtuit\/followers","following_url":"https:\/\/api.github.com\/users\/dtuit\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dtuit\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dtuit\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dtuit\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dtuit\/orgs","repos_url":"https:\/\/api.github.com\/users\/dtuit\/repos","events_url":"https:\/\/api.github.com\/users\/dtuit\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dtuit\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656670928000,"updated_at":1658238525000,"closed_at":1658237784000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. 
This causes an issue when using `cast_column` and the underlying table is a ConcatenationTable.\r\n\r\nCode example of where the issue arises:\r\n```python\r\nfrom datasets import Dataset, Image\r\n\r\ncolumn1 = [0, 1]\r\nimage_paths = ['\/images\/image1.jpg', '\/images\/image2.jpg']\r\n\r\nds = Dataset.from_dict({\"column1\": column1})\r\nds = ds.add_column(\"image\", image_paths)\r\nds.cast_column(\"image\", Image()) # Fails here\r\n```\r\nOutput\r\n```\r\n...\r\nTypeError: Couldn't cast array of type\r\nstring\r\nto\r\n{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4614\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4614","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4614","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4614.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4614.patch","merged_at":1658237784000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4613","id":1291181193,"node_id":"PR_kwDODunzps46skd6","number":4613,"title":"Align\/fix license metadata info","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thank you thank you! Let's merge and pray? 
\ud83d\ude31 ","I just need to add `license_details` to the validator and yup we can merge"],"created_at":1656669050000,"updated_at":1656680037000,"closed_at":1656679367000,"author_association":"MEMBER","active_lock_reason":null,"body":"fix bad \"other-*\" licenses and add the corresponding \"license_details\" when relevant","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4613\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4613","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4613","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4613.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4613.patch","merged_at":1656679366000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4612","id":1290984660,"node_id":"I_kwDODunzps5M8tzU","number":4612,"title":"Release 2.3.0 broke custom iterable datasets","user":{"login":"aapot","id":19529125,"node_id":"MDQ6VXNlcjE5NTI5MTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19529125?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aapot","html_url":"https:\/\/github.com\/aapot","followers_url":"https:\/\/api.github.com\/users\/aapot\/followers","following_url":"https:\/\/api.github.com\/users\/aapot\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aapot\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aapot\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aapot\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aapot\/orgs","repos_url":"https:\/\/api.github.com\/users\/aapot\/repos","events_url":"https:\/\/api.github.com\/users\/aapot\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aapot\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.asyn`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to: `asyn.loop[0] = None`.","Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. 
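Based on the traceback in the issue body, the patched helper would look roughly like this (just a sketch; lines 18-19 stay as they are):\r\n\r\n```python\r\nimport fsspec.asyn  # import the submodule explicitly so the `asyn` attribute exists\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n    # Reset fsspec's per-process event-loop state (see the fsspec\/gcsfs issue 379 linked in the traceback)\r\n    fsspec.asyn.iothread[0] = None\r\n    fsspec.asyn.loop[0] = None\r\n```\r\n\r\n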
@gugarosa Are you interested in submitting a PR?","Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!"],"created_at":1656657967000,"updated_at":1657033701000,"closed_at":1657033701000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nTrying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` since the release of 2.3.0. \r\n\r\n## Steps to reproduce the bug\r\n```python\r\nnext(iter(custom_iterable_dataset))\r\n```\r\n\r\n## Expected results\r\n`next(iter(custom_iterable_dataset))` should return examples from the dataset\r\n\r\n## Actual results\r\n```\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/formatting\/dataset_wrappers\/torch_iterable_dataset.py in _set_fsspec_for_multiprocess()\r\n 16 See https:\/\/github.com\/fsspec\/gcsfs\/issues\/379\r\n 17 \"\"\"\r\n---> 18 fsspec.asyn.iothread[0] = None\r\n 19 fsspec.asyn.loop[0] = None\r\n 20 \r\n\r\nAttributeError: module 'fsspec' has no attribute 'asyn'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4612\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4611","id":1290940874,"node_id":"PR_kwDODunzps46rxIX","number":4611,"title":"Preserve member order by MockDownloadManager.iter_archive","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1656654500000,"updated_at":1656694751000,"closed_at":1656694108000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob(\"*\")`, which might not be the same order as in the original archive.\r\n\r\nSee issue in:\r\n- https:\/\/github.com\/huggingface\/datasets\/pull\/4579#issuecomment-1172135027\r\n\r\nThis PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4611\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4611","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4611","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4611.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4611.patch","merged_at":1656694108000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4610","id":1290603827,"node_id":"I_kwDODunzps5M7Q0z","number":4610,"title":"codeparrot\/github-code failing to load ","user":{"login":"PyDataBlog","id":29863388,"node_id":"MDQ6VXNlcjI5ODYzMzg4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29863388?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PyDataBlog","html_url":"https:\/\/github.com\/PyDataBlog","followers_url":"https:\/\/api.github.com\/users\/PyDataBlog\/followers","following_url":"https:\/\/api.github.com\/users\/PyDataBlog\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PyDataBlog\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PyDataBlog\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PyDataBlog\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PyDataBlog\/orgs","repos_url":"https:\/\/api.github.com\/users\/PyDataBlog\/repos","events_url":"https:\/\/api.github.com\/users\/PyDataBlog\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PyDataBlog\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I believe the issue is in `codeparrot\/github-code`. `base_path` param is missing - https:\/\/huggingface.co\/datasets\/codeparrot\/github-code\/blob\/main\/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/0e1c629cfb9f9ba124537ba294a0ec451584da5f\/src\/datasets\/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?","Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it","> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that release. Broke my application","This simple workaround should fix it: https:\/\/huggingface.co\/datasets\/codeparrot\/github-code\/discussions\/2\r\n\r\n`get_patterns_in_dataset_repository` can handle `base_path=None`, so we just need to make sure that codeparrot\/github-code's `_split_generators` calls it with such an argument.","I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have the `base_path` argument in `get_patterns_in_dataset_repository`; as a workaround while the issue gets resolved in `datasets`, can you downgrade your datasets version to `<=2.1.0`? 
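(for example: `pip install \"datasets<=2.1.0\"`)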
\r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?","Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https:\/\/huggingface.co\/datasets\/codeparrot\/github-code\/discussions\/3","PR is merged, it's working now ! Closing this one :)","> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad."],"created_at":1656620688000,"updated_at":1657031053000,"closed_at":1657012796000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\ncodeparrot\/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n```\r\n\r\n## Expected results\r\nloaded dataset object\r\n\r\n## Actual results\r\n```python\r\n [3]: dataset = load_dataset(\"codeparrot\/github-code\")\r\nNo config specified, defaulting to: github-code\/all-all\r\nDownloading and preparing dataset github-code\/all-all to \/home\/bebr\/.cache\/huggingface\/datasets\/codeparrot___github-code\/all-all\/0.0.0\/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817...\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [3], in ()\r\n----> 1 dataset = load_dataset(\"codeparrot\/github-code\")\r\n\r\nFile ~\/miniconda3\/envs\/fastapi-kube\/lib\/python3.10\/site-packages\/datasets\/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1678 # Download and prepare data\r\n-> 1679 builder_instance.download_and_prepare(\r\n 1680 download_config=download_config,\r\n 1681 download_mode=download_mode,\r\n 1682 ignore_verifications=ignore_verifications,\r\n 1683 try_from_hf_gcs=try_from_hf_gcs,\r\n 1684 use_auth_token=use_auth_token,\r\n 1685 )\r\n 1687 # Build dataset for splits\r\n 1688 keep_in_memory = (\r\n 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1690 )\r\n\r\nFile ~\/miniconda3\/envs\/fastapi-kube\/lib\/python3.10\/site-packages\/datasets\/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 702 logger.warning(\"HF google storage unreachable. 
Downloading and preparing it from source\")\r\n 703 if not downloaded_from_gcs:\r\n--> 704 self._download_and_prepare(\r\n 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 706 )\r\n 707 # Sync info\r\n 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~\/miniconda3\/envs\/fastapi-kube\/lib\/python3.10\/site-packages\/datasets\/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)\r\n 1220 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n\r\nFile ~\/miniconda3\/envs\/fastapi-kube\/lib\/python3.10\/site-packages\/datasets\/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 769 split_dict = SplitDict(dataset_name=self.name)\r\n 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 773 # Checksums verification\r\n 774 if verify_infos and dl_manager.record_checksums:\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/codeparrot--github-code\/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817\/github-code.py:169, in GithubCode._split_generators(self, dl_manager)\r\n 162 def _split_generators(self, dl_manager):\r\n 164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(\r\n 165 _REPO_NAME,\r\n 166 timeout=100.0,\r\n 167 )\r\n--> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info)\r\n 170 data_files = datasets.data_files.DataFilesDict.from_hf_repo(\r\n 171 patterns,\r\n 172 dataset_info=hfh_dataset_info,\r\n 173 )\r\n 175 files = dl_manager.download_and_extract(data_files[\"train\"])\r\n\r\nTypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35\r\n- Python version: 3.10.5\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4610\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4609","id":1290392083,"node_id":"I_kwDODunzps5M6dIT","number":4609,"title":"librispeech dataset has to download whole subset when specifing the split to 
use","user":{"login":"sunhaozhepy","id":73462159,"node_id":"MDQ6VXNlcjczNDYyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73462159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sunhaozhepy","html_url":"https:\/\/github.com\/sunhaozhepy","followers_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/followers","following_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/orgs","repos_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/repos","events_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sunhaozhepy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.","Hi,\r\n\r\nThat's a great help. Thank you very much."],"created_at":1656607104000,"updated_at":1657662272000,"closed_at":1657662272000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe librispeech dataset has to download the whole subset when specifying the split to use\r\n\r\n## Steps to reproduce the bug\r\nsee below\r\n# Sample code to reproduce the bug\r\n```python\r\n!pip install datasets\r\nfrom datasets import load_dataset\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\")\r\n```\r\n\r\n## Expected results\r\nThe split \"train.clean.100\" is downloaded.\r\n\r\n## Actual results\r\nAll four splits in the \"clean\" subset are downloaded.\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4609\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4608","id":1290298002,"node_id":"PR_kwDODunzps46pm9A","number":4608,"title":"Fix xisfile, xgetsize, xisdir, xlistdir in private repo","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested"],"created_at":1656602601000,"updated_at":1657111559000,"closed_at":1657110859000,"author_association":"MEMBER","active_lock_reason":null,"body":"`xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip:\/\/a.txt::https:\/\/huggingface\/datasets\/username\/dataset_name\/resolve\/main\/data.zip`. 
However it's not working when passing a simple file `https:\/\/huggingface\/datasets\/username\/dataset_name\/resolve\/main\/data.zip`.\r\n\r\nThis is because the authentication headers are not passed correctly in this case.\r\n\r\nThis is causing dataset streaming to fail in private parquet repositories, as noted in https:\/\/github.com\/huggingface\/datasets\/issues\/4605\r\n\r\nI fixed `xisfile` and the other functions that behave the same way: xgetsize, xisdir and xlistdir\r\n\r\nTODO:\r\n- [x] tests\r\n\r\nfix https:\/\/github.com\/huggingface\/datasets\/issues\/4605","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4608\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4608","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4608","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4608.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4608.patch","merged_at":1657110859000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4607","id":1290171941,"node_id":"PR_kwDODunzps46pLnd","number":4607,"title":"Align more metadata with other repo types (models,spaces)","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing some content - but this is unrelated to this PR so we can ignore these failures","thanks so much @lhoestq !!","There's also a follow-up PR to this one, in #4613 \u2013 I would suggest to merge all of them at the same time and 
hope not too many things are broken \ud83d\ude40 \ud83d\ude40 ","Alright merging this one now, let's see how broken things get"],"created_at":1656597132000,"updated_at":1656676837000,"closed_at":1656676154000,"author_association":"MEMBER","active_lock_reason":null,"body":"see also associated PR on the `datasets-tagging` Space: https:\/\/huggingface.co\/spaces\/huggingface\/datasets-tagging\/discussions\/2 (to merge after this one is merged)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4607\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4607","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4607","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4607.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4607.patch","merged_at":1656676154000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4606","id":1290083534,"node_id":"I_kwDODunzps5M5RzO","number":4606,"title":"evaluation result changes after `datasets` version change","user":{"login":"thnkinbtfly","id":70014488,"node_id":"MDQ6VXNlcjcwMDE0NDg4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/70014488?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thnkinbtfly","html_url":"https:\/\/github.com\/thnkinbtfly","followers_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/followers","following_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/orgs","repos_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/repos","events_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thnkinbtfly\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The GH\/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. 
`revision=\"2.2.0\"`) to `load_dataset`.\r\n"],"created_at":1656593006000,"updated_at":1656956852000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nevaluation result changes after `datasets` version change\r\n\r\n## Steps to reproduce the bug\r\n1. Train a model on WikiAnn\r\n2. reload the ckpt -> test accuracy becomes the same as eval accuracy\r\n3. such behavior is gone after downgrading `datasets`\r\n\r\nhttps:\/\/colab.research.google.com\/drive\/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing\r\n\r\n## Expected results\r\nevaluation result shouldn't change before\/after `datasets` version changes\r\n\r\n## Actual results\r\nevaluation result changes before\/after `datasets` version changes\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: colab\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n\r\nQ. How could the evaluation result change before\/after `datasets` version changes?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4606\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4605","id":1290058970,"node_id":"I_kwDODunzps5M5Lza","number":4605,"title":"Dataset Viewer issue for 
boris\/gis_filtered","user":{"login":"WaterKnight1998","id":41203448,"node_id":"MDQ6VXNlcjQxMjAzNDQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41203448?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/WaterKnight1998","html_url":"https:\/\/github.com\/WaterKnight1998","followers_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/followers","following_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/orgs","repos_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/repos","events_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/WaterKnight1998\/received_events","type":"User","site_admin":false},"labels":[{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Yes, this dataset is \"gated\": you first have to go to https:\/\/huggingface.co\/datasets\/boris\/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).","I already did that, it returns 
error when using streaming","Oh, sorry, I misread. Looking at it. Maybe @huggingface\/datasets or @SBrandeis ","I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`","This is indeed a bug in `datasets`. Parquet datasets in gated\/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https:\/\/github.com\/huggingface\/datasets\/pull\/4608"],"created_at":1656591814000,"updated_at":1657110859000,"closed_at":1657110859000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/boris\/gis_filtered\/viewer\/boris--gis_filtered\/train\r\n\r\n### Description\r\n\r\nWhen I try to access this from the website I get this error:\r\n\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/boris\/gis_filtered\/resolve\/80b805053ce61d4eb487b6b8d9095d775c2c466e\/data\/train\/0000.parquet')\r\n\r\nIf I try to load with code I also get the same issue: \r\n```python\r\ndataset2_train=load_dataset(\"boris\/gis_filtered\", use_auth_token=os.environ[\"HF_TOKEN\"],split=\"train\",streaming=True)\r\ndataset2_validation=load_dataset(\"boris\/gis_filtered\", use_auth_token=os.environ[\"HF_TOKEN\"], split=\"validation\",streaming=True)\r\n```\r\n\r\n### Owner\r\n\r\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4605\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4604","id":1289963962,"node_id":"PR_kwDODunzps46oeju","number":4604,"title":"Update CI Windows 
orb","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656586831000,"updated_at":1656595991000,"closed_at":1656595346000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR tries to fix recurrent random CI failures on Windows.\r\n\r\nAfter 2 runs, it seems to have fixed the issue.\r\n\r\nFix #4603.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4604\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4604","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4604","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4604.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4604.patch","merged_at":1656595345000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4603","id":1289963331,"node_id":"I_kwDODunzps5M40dD","number":4603,"title":"CI fails recurrently and randomly on 
Windows","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656586798000,"updated_at":1656595345000,"closed_at":1656595345000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported by @lhoestq,\r\n\r\nThe windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.\r\nIn particular it seems that building the wheels fail. Here is an example of logs:\r\n\r\n```\r\nBuilding wheel for seqeval (setup.py): started\r\n Running command 'C:\\tools\\miniconda3\\envs\\py37\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6'\r\n No parent package detected, impossible to derive `name`\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n package init file 'seqeval\\__init__.py' not found (or not a regular file)\r\n package init file 'seqeval\\metrics\\__init__.py' not found (or not a regular file)\r\n C:\\tools\\miniconda3\\envs\\py37\\lib\\site-packages\\setuptools\\command\\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
Use build and pip and other standards-based tools.\r\n setuptools.SetuptoolsDeprecationWarning,\r\n installing to build\\bdist.win-amd64\\wheel\r\n running install\r\n running install_lib\r\n warning: install_lib: 'build\\lib' does not exist -- no Python modules to install\r\n\r\n running install_egg_info\r\n running egg_info\r\n creating UNKNOWN.egg-info\r\n writing UNKNOWN.egg-info\\PKG-INFO\r\n writing dependency_links to UNKNOWN.egg-info\\dependency_links.txt\r\n writing top-level names to UNKNOWN.egg-info\\top_level.txt\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n reading manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n Copying UNKNOWN.egg-info to build\\bdist.win-amd64\\wheel\\.\\UNKNOWN-0.0.0-py3.7.egg-info\r\n running install_scripts\r\n creating build\\bdist.win-amd64\\wheel\\UNKNOWN-0.0.0.dist-info\\WHEEL\r\n creating 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6\\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\\bdist.win-amd64\\wheel' to it\r\n adding 'UNKNOWN-0.0.0.dist-info\/METADATA'\r\n adding 'UNKNOWN-0.0.0.dist-info\/WHEEL'\r\n adding 'UNKNOWN-0.0.0.dist-info\/top_level.txt'\r\n adding 'UNKNOWN-0.0.0.dist-info\/RECORD'\r\n removing build\\bdist.win-amd64\\wheel\r\n Building wheel for seqeval (setup.py): finished with status 'done'\r\n Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1\r\n Stored in directory: c:\\users\\circleci\\appdata\\local\\pip\\cache\\wheels\\05\\96\\ee\\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7\r\n WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4603\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4602","id":1289950379,"node_id":"PR_kwDODunzps46obqi","number":4602,"title":"Upgrade setuptools in windows 
CI","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656586121000,"updated_at":1656593858000,"closed_at":1656593177000,"author_association":"MEMBER","active_lock_reason":null,"body":"The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.\r\nIn particular it seems that building the wheels fail. Here is an example of logs\r\n\r\n```\r\nBuilding wheel for seqeval (setup.py): started\r\n Running command 'C:\\tools\\miniconda3\\envs\\py37\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6'\r\n No parent package detected, impossible to derive `name`\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n package init file 'seqeval\\__init__.py' not found (or not a regular file)\r\n package init file 'seqeval\\metrics\\__init__.py' not found (or not a regular file)\r\n C:\\tools\\miniconda3\\envs\\py37\\lib\\site-packages\\setuptools\\command\\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
Use build and pip and other standards-based tools.\r\n setuptools.SetuptoolsDeprecationWarning,\r\n installing to build\\bdist.win-amd64\\wheel\r\n running install\r\n running install_lib\r\n warning: install_lib: 'build\\lib' does not exist -- no Python modules to install\r\n\r\n running install_egg_info\r\n running egg_info\r\n creating UNKNOWN.egg-info\r\n writing UNKNOWN.egg-info\\PKG-INFO\r\n writing dependency_links to UNKNOWN.egg-info\\dependency_links.txt\r\n writing top-level names to UNKNOWN.egg-info\\top_level.txt\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n reading manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n Copying UNKNOWN.egg-info to build\\bdist.win-amd64\\wheel\\.\\UNKNOWN-0.0.0-py3.7.egg-info\r\n running install_scripts\r\n creating build\\bdist.win-amd64\\wheel\\UNKNOWN-0.0.0.dist-info\\WHEEL\r\n creating 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6\\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\\bdist.win-amd64\\wheel' to it\r\n adding 'UNKNOWN-0.0.0.dist-info\/METADATA'\r\n adding 'UNKNOWN-0.0.0.dist-info\/WHEEL'\r\n adding 'UNKNOWN-0.0.0.dist-info\/top_level.txt'\r\n adding 'UNKNOWN-0.0.0.dist-info\/RECORD'\r\n removing build\\bdist.win-amd64\\wheel\r\n Building wheel for seqeval (setup.py): finished with status 'done'\r\n Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1\r\n Stored in directory: c:\\users\\circleci\\appdata\\local\\pip\\cache\\wheels\\05\\96\\ee\\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7\r\n WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'\r\n```\r\n\r\nhopefully this fixes the issue\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4602\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4602","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4602","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4602.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4602.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4601","id":1289924715,"node_id":"PR_kwDODunzps46oWF8","number":4601,"title":"Upgrade pip in WIN 
CI","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","It failed terribly"],"created_at":1656584742000,"updated_at":1656586465000,"closed_at":1656585818000,"author_association":"MEMBER","active_lock_reason":null,"body":"The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.\r\nIn particular it seems that building the wheels fail. Here is an example of logs\r\n\r\n```\r\nBuilding wheel for seqeval (setup.py): started\r\n Running command 'C:\\tools\\miniconda3\\envs\\py37\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\circleci\\\\AppData\\\\Local\\\\Temp\\\\pip-install-h55pfgbv\\\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6'\r\n No parent package detected, impossible to derive `name`\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n package init file 'seqeval\\__init__.py' not found (or not a regular file)\r\n package init file 'seqeval\\metrics\\__init__.py' not found (or not a regular file)\r\n C:\\tools\\miniconda3\\envs\\py37\\lib\\site-packages\\setuptools\\command\\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
Use build and pip and other standards-based tools.\r\n setuptools.SetuptoolsDeprecationWarning,\r\n installing to build\\bdist.win-amd64\\wheel\r\n running install\r\n running install_lib\r\n warning: install_lib: 'build\\lib' does not exist -- no Python modules to install\r\n\r\n running install_egg_info\r\n running egg_info\r\n creating UNKNOWN.egg-info\r\n writing UNKNOWN.egg-info\\PKG-INFO\r\n writing dependency_links to UNKNOWN.egg-info\\dependency_links.txt\r\n writing top-level names to UNKNOWN.egg-info\\top_level.txt\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n reading manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n writing manifest file 'UNKNOWN.egg-info\\SOURCES.txt'\r\n Copying UNKNOWN.egg-info to build\\bdist.win-amd64\\wheel\\.\\UNKNOWN-0.0.0-py3.7.egg-info\r\n running install_scripts\r\n creating build\\bdist.win-amd64\\wheel\\UNKNOWN-0.0.0.dist-info\\WHEEL\r\n creating 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-wheel-x3cc8ym6\\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\\bdist.win-amd64\\wheel' to it\r\n adding 'UNKNOWN-0.0.0.dist-info\/METADATA'\r\n adding 'UNKNOWN-0.0.0.dist-info\/WHEEL'\r\n adding 'UNKNOWN-0.0.0.dist-info\/top_level.txt'\r\n adding 'UNKNOWN-0.0.0.dist-info\/RECORD'\r\n removing build\\bdist.win-amd64\\wheel\r\n Building wheel for seqeval (setup.py): finished with status 'done'\r\n Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1\r\n Stored in directory: c:\\users\\circleci\\appdata\\local\\pip\\cache\\wheels\\05\\96\\ee\\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7\r\n WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'\r\n```\r\n\r\nI tried to update pip and re-run the CI several times and I couldn't re-experience this issue for now, so I think upgrading pip may solve the issue","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4601\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4601","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4601","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4601.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4601.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4600","id":1289177042,"node_id":"PR_kwDODunzps46l3P1","number":4600,"title":"Remove multiple config 
section","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656529761000,"updated_at":1656956480000,"closed_at":1656955781000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https:\/\/huggingface.slack.com\/archives\/C034N0A7H09\/p1656107063801969) for more details :) ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4600\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4600","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4600","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4600.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4600.patch","merged_at":1656955781000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4599","id":1288849933,"node_id":"PR_kwDODunzps46kvfC","number":4599,"title":"Smooth-BLEU bug 
fixed","user":{"login":"Aktsvigun","id":36672861,"node_id":"MDQ6VXNlcjM2NjcyODYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36672861?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aktsvigun","html_url":"https:\/\/github.com\/Aktsvigun","followers_url":"https:\/\/api.github.com\/users\/Aktsvigun\/followers","following_url":"https:\/\/api.github.com\/users\/Aktsvigun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aktsvigun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aktsvigun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aktsvigun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aktsvigun\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aktsvigun\/repos","events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656514302000,"updated_at":1657213482000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"Hi,\r\n\r\nthe current implementation of smooth-BLEU contains a bug: it smoothes unigrams as well. Consequently, when both the reference and translation consist of totally different tokens, it anyway returns a non-zero value (please see the attached image). \r\n\r\nThis however contradicts the source paper suggesting the smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_ :\r\n\r\n> Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores. \r\n\r\nThis pull request aims at fixing this bug.\r\n\r\nI made a pull request in the target repository `tensorflow\/nmt`, which implements this script, yet the last commit there is dating 19.02.2019 and I doubt whether this will be fixed promptly. Yet, this bug is critical, for instance for summarization datasets with short summaries (e.g. AESLC), since smoothing needs to be applied there. 
Therefore, the easiest solution that I found is to fork the repo and download this script directly from the forked fixed repo.\r\n\r\nKind,\r\nAkim Tsvigun\r\n\r\n\"\u0421\u043d\u0438\u043c\u043e\u043a\r\n ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4599\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4599","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4599","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4599.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4599.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4598","id":1288774514,"node_id":"PR_kwDODunzps46kfOS","number":4598,"title":"Host financial_phrasebank data on the Hub","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656511171000,"updated_at":1656668474000,"closed_at":1656667776000,"author_association":"MEMBER","active_lock_reason":null,"body":"\r\n\r\nFix 
#4597.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4598\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4598","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4598","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4598.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4598.patch","merged_at":1656667776000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4597","id":1288672007,"node_id":"I_kwDODunzps5Mz5MH","number":4597,"title":"Streaming issue for financial_phrasebank","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":4069435429,"node_id":"LA_kwDODunzps7yjqgl","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/hosted-on-google-drive","name":"hosted-on-google-drive","color":"8B51EF","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.
com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["cc @huggingface\/datasets: it seems like https:\/\/www.researchgate.net\/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)","Let's see if their license allows hosting their data on the Hub.","License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub."],"created_at":1656506743000,"updated_at":1656667776000,"closed_at":1656667776000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/financial_phrasebank\/viewer\/sentences_allagree\/train\n\n### Description\n\nAs reported by a community member using [AutoTrain Evaluate](https:\/\/huggingface.co\/spaces\/autoevaluate\/model-evaluator\/discussions\/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: Exception\r\nMessage: Give up after 5 attempts with ConnectionError\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4597\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4596","id":1288381735,"node_id":"I_kwDODunzps5MyyUn","number":4596,"title":"Dataset Viewer issue for 
universal_dependencies","user":{"login":"Jordy-VL","id":16034009,"node_id":"MDQ6VXNlcjE2MDM0MDA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16034009?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jordy-VL","html_url":"https:\/\/github.com\/Jordy-VL","followers_url":"https:\/\/api.github.com\/users\/Jordy-VL\/followers","following_url":"https:\/\/api.github.com\/users\/Jordy-VL\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jordy-VL\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jordy-VL\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jordy-VL\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jordy-VL\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jordy-VL\/repos","events_url":"https:\/\/api.github.com\/users\/Jordy-VL\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jordy-VL\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks, looking at it!","Finally fixed! 
We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps:\/\/huggingface.co\/datasets\/universal_dependencies\/viewer\/aqz_tudet\/train\r\n\r\n[screenshot]"],"created_at":1656492629000,"updated_at":1662550168000,"closed_at":1662550167000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/universal_dependencies\n\n### Description\n\ninvalid json response body at https:\/\/datasets-server.huggingface.co\/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4596\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4595","id":1288275976,"node_id":"I_kwDODunzps5MyYgI","number":4595,"title":"Dataset Viewer issue with False positive PII redaction","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n[screenshot]\r\n\r\nMaybe open a PR: https:\/\/huggingface.co\/datasets\/cakiki\/rosetta-code\/discussions\r\n","This was indeed a scraping issue which I assumed was a display issue; sorry about that!"],"created_at":1656486957000,"updated_at":1656491381000,"closed_at":1656491269000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/cakiki\/rosetta-code\n\n### Description\n\nHello, I just noticed an entry being redacted that shouldn't have been:\r\n\r\n`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`\n\n### Owner\n\n_No 
response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4595\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4594","id":1288070023,"node_id":"I_kwDODunzps5MxmOH","number":4594,"title":"load_from_disk suggests incorrect fix when used to load DatasetDict","user":{"login":"dvsth","id":11157811,"node_id":"MDQ6VXNlcjExMTU3ODEx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11157811?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dvsth","html_url":"https:\/\/github.com\/dvsth","followers_url":"https:\/\/api.github.com\/users\/dvsth\/followers","following_url":"https:\/\/api.github.com\/users\/dvsth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dvsth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dvsth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dvsth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dvsth\/orgs","repos_url":"https:\/\/api.github.com\/users\/dvsth\/repos","events_url":"https:\/\/api.github.com\/users\/dvsth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dvsth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656466801000,"updated_at":1656475424000,"closed_at":1656475424000,"author_association":"NONE","active_lock_reason":null,"body":"Edit: Please feel free to remove this issue. The problem was not the error message but the fact that the DatasetDict.load_from_disk does not support loading nested splits, i.e. if one of the splits is itself a DatasetDict. 
If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indicating that?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4594\/timeline","performed_via_github_app":null,"state_reason":"not_planned","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4593","id":1288067699,"node_id":"PR_kwDODunzps46iIkn","number":4593,"title":"Fix error message when using load_from_disk to load DatasetDict","user":{"login":"dvsth","id":11157811,"node_id":"MDQ6VXNlcjExMTU3ODEx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11157811?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dvsth","html_url":"https:\/\/github.com\/dvsth","followers_url":"https:\/\/api.github.com\/users\/dvsth\/followers","following_url":"https:\/\/api.github.com\/users\/dvsth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dvsth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dvsth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dvsth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dvsth\/orgs","repos_url":"https:\/\/api.github.com\/users\/dvsth\/repos","events_url":"https:\/\/api.github.com\/users\/dvsth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dvsth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656466467000,"updated_at":1656475319000,"closed_at":1656475299000,"author_association":"NONE","active_lock_reason":null,"body":"Issue #4594 \r\nIssue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error. 
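Schematically (path and exact wording hypothetical, reconstructed from the description above):\r\n\r\n```python\r\nimport datasets\r\n\r\ndatasets.load_from_disk(\"saved_dataset_dict\")\r\n# fails, and the message ends with: \"Please use datasets.load_from_disk instead.\"\r\n# i.e. it recommends the very call that just failed\r\n```\r\n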
\r\nFix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`.\r\nChanges: Change the suggestion to say \"Please use `datasets.dataset_dict.load_from_disk` instead.\"","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4593\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4593","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4593","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4593.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4593.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4592","id":1288029377,"node_id":"I_kwDODunzps5MxcTB","number":4592,"title":"Issue with jalFaizy\/detect_chess_pieces when running datasets-cli test","user":{"login":"faizankshaikh","id":8406903,"node_id":"MDQ6VXNlcjg0MDY5MDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8406903?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/faizankshaikh","html_url":"https:\/\/github.com\/faizankshaikh","followers_url":"https:\/\/api.github.com\/users\/faizankshaikh\/followers","following_url":"https:\/\/api.github.com\/users\/faizankshaikh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/faizankshaikh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/faizankshaikh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/faizankshaikh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/faizankshaikh\/orgs","repos_url":"https:\/\/api.github.com\/users\/faizankshaikh\/repos","events_url":"https:\/\/api.github.com\/users\/faizankshaikh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/faizankshaikh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues\/questions\/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https:\/\/huggingface.co\/blog\/community-update\r\n- Docs: https:\/\/huggingface.co\/docs\/hub\/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy\/detect_chess_pieces\" dataset is here: https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https:\/\/huggingface.co\/docs\/datasets\/master\/en\/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: 
https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\/discussions\/1","Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/8406903\/176397633-7b077d81-2044-4487-b58e-6346b05be5cf.png)\r\n\r\n\r\n","Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this."],"created_at":1656461754000,"updated_at":1656498603000,"closed_at":1656488967000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\n\n### Description\n\nI am trying to write a appropriate data loader for [a custom dataset](https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces) using [this script](https:\/\/huggingface.co\/datasets\/jalFaizy\/detect_chess_pieces\/blob\/main\/detect_chess_pieces.py)\r\n\r\nWhen I run the command\r\n\r\n`$ datasets-cli test \"D:\\workspace\\HF\\detect_chess_pieces\" --save_infos --all_configs`\r\n\r\nIt gives the following error\r\n\r\n```\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\faiza\\anaconda3\\Scripts\\datasets-cli.exe\\__main__.py\", line 7, in \r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\commands\\datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\commands\\test.py\", line 132, in run\r\n for j, builder in enumerate(get_builders()):\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\commands\\test.py\", line 125, in get_builders\r\n yield builder_cls(\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\builder.py\", line 1148, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\builder.py\", line 306, in __init__\r\n info = self.get_exported_dataset_info()\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\builder.py\", line 405, in get_exported_dataset_info\r\n return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\builder.py\", line 390, in get_all_exported_dataset_infos\r\n return DatasetInfosDict.from_directory(cls.get_imported_module_dir())\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\info.py\", line 309, in from_directory\r\n dataset_infos_dict = {\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\info.py\", line 310, in \r\n config_name: DatasetInfo.from_dict(dataset_info_dict)\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\info.py\", line 272, in from_dict\r\n return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})\r\n File \"\", line 20, in __init__\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\info.py\", line 160, in __post_init__\r\n templates = [\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\info.py\", line 161, in \r\n template if 
isinstance(template, TaskTemplate) else task_template_from_dict(template)\r\n File \"c:\\users\\faiza\\anaconda3\\lib\\site-packages\\datasets\\tasks\\__init__.py\", line 43, in task_template_from_dict\r\n return template.from_dict(task_template_dict)\r\nAttributeError: 'NoneType' object has no attribute 'from_dict'\r\n```\r\n\r\n\r\nMy assumption is that there is some kind of issue in how the \"task_templates\" are read, because even if I keep them as None, or not include the argument at all, the same error occurs\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4592\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4591","id":1288021332,"node_id":"I_kwDODunzps5MxaVU","number":4591,"title":"Can't push Images to hub with manual Dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi, thanks for reporting! This issue stems from the changes introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure."],"created_at":1656460883000,"updated_at":1657281696000,"closed_at":1657281695000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nIf I create a dataset including an 'Image' feature manually, when pushing to hub decoded images are not pushed, \r\ninstead it looks for image where image local path is\/used to be.\r\nThis doesn't (at least didn't used to) happen with imagefolder. 
I want to build dataset manually because it is complicated.\r\n\r\nThis happens even though the dataset is looking like decoded images:\r\n![image](https:\/\/user-images.githubusercontent.com\/15624271\/176322689-2cc819cf-9d5c-4a8f-9f3d-83ae8ec06f20.png)\r\nand I use `embed_external_files=True` while `push_to_hub` (same with false)\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nfrom PIL import Image\r\nfrom datasets import Image as ImageFeature\r\nfrom datasets import Features,Dataset\r\n#manually create dataset\r\nfeats=Features(\r\n {\r\n \"images\": [ImageFeature()], #same even if explicitly ImageFeature(decode=True)\r\n \"input_image\": ImageFeature(),\r\n }\r\n)\r\n\r\ntest_data={\"images\":[[Image.open(\"test.jpg\"),Image.open(\"test.jpg\"),Image.open(\"test.jpg\")]], \"input_image\":[Image.open(\"test.jpg\")]}\r\ntest_dataset=Dataset.from_dict(test_data,features=feats)\r\nprint(test_dataset)\r\n\r\ntest_dataset.push_to_hub(\"ceyda\/image_test_public\",private=False,token=\"\",embed_external_files=True)\r\n\r\n# clear cache rm -r ~\/.cache\/huggingface\r\n# remove \"test.jpg\" # remove to see that it is looking for image on the local path\r\n\r\ntest_dataset=load_dataset(\"ceyda\/image_test_public\",use_auth_token=\"\")\r\nprint(test_dataset)\r\nprint(test_dataset['train'][0])\r\n```\r\n\r\n## Expected results\r\nshould be able to push image bytes if dataset has `Image(decode=True)`\r\n\r\n## Actual results\r\n\r\nerrors because it is trying to decode file from the non existing local path.\r\n```\r\n----> print(test_dataset['train'][0])\r\n\r\nFile ~\/.local\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)\r\n 2152 def __getitem__(self, key): # noqa: F811\r\n 2153 \"\"\"Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).\"\"\"\r\n-> 2154 return self._getitem(\r\n 2155 key,\r\n 2156 )\r\n\r\nFile ~\/.local\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)\r\n 2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)\r\n 2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n-> 2139 formatted_output = format_table(\r\n 2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n 2141 )\r\n 2142 return formatted_output\r\n\r\nFile ~\/.local\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)\r\n 530 python_formatter = PythonFormatter(features=None)\r\n 531 if format_columns is None:\r\n...\r\n-> 3068 fp = builtins.open(filename, \"rb\")\r\n 3069 exclusive_fp = True\r\n 3071 try:\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: 'test.jpg'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4591\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
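The maintainer comment in the record above attributes the failure to a cast that doesn't change column types, so the embed step is skipped for path-backed images. The sketch below is a hypothetical user-side workaround, not the upstream fix: it assumes that a PIL image opened from an in-memory buffer carries no `.filename`, so the `Image` feature has to serialize the pixel bytes instead of recording a local path.

```python
# Hypothetical workaround sketch (an assumption, not the library's fix):
# open each image through a BytesIO buffer so PIL keeps no local filename,
# which should make `push_to_hub` embed the bytes rather than a path that
# only exists on the uploader's machine.
import io

from PIL import Image
from datasets import Dataset, Features
from datasets import Image as ImageFeature


def open_detached(path):
    # Image.open on a BytesIO buffer should have an empty `.filename`,
    # so the Image feature cannot fall back to encoding the local path.
    with open(path, "rb") as f:
        return Image.open(io.BytesIO(f.read()))


feats = Features({"input_image": ImageFeature()})
test_dataset = Dataset.from_dict(
    {"input_image": [open_detached("test.jpg")]}, features=feats
)
# test_dataset.push_to_hub("user/image_test_public")  # bytes embedded, no local path
```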
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4590","id":1287941058,"node_id":"PR_kwDODunzps46htv0","number":4590,"title":"Generalize meta_path json file creation in load.py [#4540]","user":{"login":"VijayKalmath","id":20517962,"node_id":"MDQ6VXNlcjIwNTE3OTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20517962?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VijayKalmath","html_url":"https:\/\/github.com\/VijayKalmath","followers_url":"https:\/\/api.github.com\/users\/VijayKalmath\/followers","following_url":"https:\/\/api.github.com\/users\/VijayKalmath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VijayKalmath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VijayKalmath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VijayKalmath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VijayKalmath\/orgs","repos_url":"https:\/\/api.github.com\/users\/VijayKalmath\/repos","events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@albertvillanova, Can you please review this PR for Issue #4540 ","@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.","Hi ! Sure feel free to join our discord ^^ \r\nhttps:\/\/discuss.huggingface.co\/t\/join-the-hugging-face-discord\/11263 so that we can discuss together mor eeasily. 
Otherwise everything happens on github ;)"],"created_at":1656452886000,"updated_at":1657292113000,"closed_at":1657199865000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"# What does this PR do?\r\n\r\n## Summary\r\n\r\n*In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws error in the edge case raised in #4540.*\r\n\r\n## Additions\r\n-\r\n\r\n## Changes\r\n- Changed meta_path to use `os.path.splitext` instead of using `str.split` to generalize code.\r\n\r\n## Deletions\r\n-\r\n## Issues Addressed : \r\n\r\nFixes #4540","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4590\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4590","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4590","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4590.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4590.patch","merged_at":1657199864000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4589","id":1287600029,"node_id":"I_kwDODunzps5Mvzed","number":4589,"title":"Permission denied: '\/home\/.cache' when load_dataset with local script","user":{"login":"jiangh0","id":24559732,"node_id":"MDQ6VXNlcjI0NTU5NzMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24559732?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jiangh0","html_url":"https:\/\/github.com\/jiangh0","followers_url":"https:\/\/api.github.com\/users\/jiangh0\/followers","following_url":"https:\/\/api.github.com\/users\/jiangh0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jiangh0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jiangh0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jiangh0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jiangh0\/orgs","repos_url":"https:\/\/api.github.com\/users\/jiangh0\/repos","events_url":"https:\/\/api.github.com\/users\/jiangh0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jiangh0\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656433563000,"updated_at":1656483988000,"closed_at":1656483908000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4589\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4588","id":1287368751,"node_id":"PR_kwDODunzps46f2kF","number":4588,"title":"Host head_qa data on the Hub and fix NonMatchingChecksumError","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi @albertvillanova ! 
Thanks for the fix ;)\r\nCan I safely check out this branch to build `datasets`, or is it preferable to wait until all CI tests pass?\r\nThanks \ud83d\ude4f ","@younesbelkada we have just merged this PR."],"created_at":1656423568000,"updated_at":1657036875000,"closed_at":1657036192000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Hosts head_qa data on the Hub instead of Google Drive\r\n- Fixes NonMatchingChecksumError\r\n\r\nFix https:\/\/huggingface.co\/datasets\/head_qa\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":1,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4588\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4588","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4588","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4588.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4588.patch","merged_at":1657036192000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4587","id":1287291494,"node_id":"PR_kwDODunzps46flzR","number":4587,"title":"Validate new_fingerprint passed by user","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656420381000,"updated_at":1656425517000,"closed_at":1656424844000,"author_association":"MEMBER","active_lock_reason":null,"body":"Users can pass the dataset fingerprint they want in `map` and other dataset transforms.\r\n\r\nHowever, the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters as mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/1718, and that it's not too 
long","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4587\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4587","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4587","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4587.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4587.patch","merged_at":1656424844000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4586","id":1287105636,"node_id":"PR_kwDODunzps46e9xB","number":4586,"title":"Host pn_summary data on the Hub instead of Google Drive","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656410705000,"updated_at":1656427976000,"closed_at":1656427323000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #4581.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4586\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4586","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4586","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4586.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4586.patch","merged_at":1656427323000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4585","id":1287064929,"node_id":"PR_kwDODunzps46e1Ne","number":4585,"title":"Host multi_news data on the Hub instead of Google Drive","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656408726000,"updated_at":1656425975000,"closed_at":1656425328000,"author_association":"MEMBER","active_lock_reason":null,"body":"Host data files of multi_news dataset on the Hub.\r\n\r\nThey were on Google Drive.\r\n\r\nFix #4580.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4585\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4585","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4585","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4585.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4585.patch","merged_at":1656425328000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4584","id":1286911993,"node_id":"PR_kwDODunzps46eVF7","number":4584,"title":"Add binary classification task 
IDs","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4584). All of your documentation changes will be reflected on that endpoint.","> Awesome thanks ! Can you add it to https:\/\/github.com\/huggingface\/hub-docs\/blob\/main\/js\/src\/lib\/interfaces\/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https:\/\/github.com\/huggingface\/hub-docs\/pull\/217"],"created_at":1656401439000,"updated_at":1657120794000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.\r\n\r\nThis PR adds binary classification to the task IDs to enable this.\r\n\r\nRelated AutoTrain issue: https:\/\/github.com\/huggingface\/autonlp-backend\/issues\/597\r\n\r\ncc @abhishekkrthakur @SBrandeis ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4584\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4584","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4584","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4584.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4584.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4583","id":1286790871,"node_id":"PR_kwDODunzps46d7xo","number":4583,"title":" implementation of FLAC support using 
torchaudio","user":{"login":"rafael-ariascalles","id":45745870,"node_id":"MDQ6VXNlcjQ1NzQ1ODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45745870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rafael-ariascalles","html_url":"https:\/\/github.com\/rafael-ariascalles","followers_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/followers","following_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/orgs","repos_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/repos","events_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rafael-ariascalles\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656393861000,"updated_at":1656395222000,"closed_at":1656395222000,"author_association":"NONE","active_lock_reason":null,"body":"I had added Audio FLAC support with torchaudio given that Librosa and SoundFile can give problems. Also, FLAC is been used as audio from https:\/\/mlcommons.org\/en\/peoples-speech\/","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4583\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4583","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4583","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4583.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4583.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4582","id":1286517060,"node_id":"PR_kwDODunzps46dC59","number":4582,"title":"add_column should preserve 
_indexes","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4582). All of your documentation changes will be reflected on that endpoint."],"created_at":1656369347000,"updated_at":1657120794000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/issues\/3769#issuecomment-1167146126\r\n\r\ndoing `.add_column(\"x\",x_data)` also removed any `_indexes` on the dataset, decided this shouldn't be the case.\r\n\r\nThis was because `add_column` was creating a new `Dataset(...)` and wasn't possible to pass indexes on init.\r\nwith this PR now can pass 'indexes' on init through `IndexableMixin`\r\n\r\n- [x] Added test","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4582\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4582","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4582","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4582.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4582.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4581","id":1286362907,"node_id":"I_kwDODunzps5MrFcb","number":4581,"title":"Dataset Viewer issue for 
pn_summary","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["linked to https:\/\/github.com\/huggingface\/datasets\/issues\/4580#issuecomment-1168373066?","Note that I refreshed twice this 
dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https:\/\/doc-14-4c-docs.googleusercontent.com\/docs\/securesc\/ha0ro937gcuc7l7deffksulhg5h7mbp1\/pgotjmcuh77q0lk7p44rparfrhv459kp\/1656403650000\/11771870722949762109\/*\/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nSince the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n","Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else."],"created_at":1656363372000,"updated_at":1656427323000,"closed_at":1656427323000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/pn_summary\/viewer\/1.0.0\/validation\n\n### Description\n\nGetting an index error on the `validation` and `test` splits:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: IndexError\r\nMessage: list index out of range\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4581\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4580","id":1286312912,"node_id":"I_kwDODunzps5Mq5PQ","number":4580,"title":"Dataset Viewer issue for multi_news","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. 
Let's see if the license allows that.","I guess we can host the data: https:\/\/github.com\/Alex-Fabbri\/Multi-News\/blob\/master\/LICENSE.txt"],"created_at":1656361525000,"updated_at":1656425328000,"closed_at":1656425328000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/multi_news\n\n### Description\n\nNot sure what the index error is referring to here:\r\n\r\n```\r\nStatus code: 400\r\nException: IndexError\r\nMessage: list index out of range\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4580\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4579","id":1286106285,"node_id":"PR_kwDODunzps46bo2h","number":4579,"title":"Support streaming cfq dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq I've been refactoring the code a little:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.","I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).","There is an issue when testing 
`test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq\/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq\/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).","This PR should be merged first:\r\n- #4611","Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)"],"created_at":1656349883000,"updated_at":1656963301000,"closed_at":1656962637000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming cfq dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4579\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4579","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4579","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4579.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4579.patch","merged_at":1656962637000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4578","id":1286086400,"node_id":"I_kwDODunzps5MqB8A","number":4578,"title":"[Multi Configs] Use directories to differentiate between subsets\/configurations","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1656348911000,"updated_at":1656348919000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently to define several subsets\/configurations of your dataset, you need to use a dataset script.\r\n\r\nHowever it would be nice to have a no-code way to to this. \r\n\r\nFor example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per configuration.\r\n\r\nThese structures are not supported right now, but would be nice to have:\r\n\r\n\r\n```\r\nmy_dataset_repository\/\r\n\u251c\u2500\u2500 README.md\r\n\u251c\u2500\u2500 en\/\r\n\u2502 \u251c\u2500\u2500 train.csv\r\n\u2502 \u2514\u2500\u2500 test.csv\r\n\u2514\u2500\u2500 fr\/\r\n \u251c\u2500\u2500 train.csv\r\n \u2514\u2500\u2500 test.csv\r\n```\r\n\r\nOr with one directory per split:\r\n\r\n```\r\nmy_dataset_repository\/\r\n\u251c\u2500\u2500 README.md\r\n\u251c\u2500\u2500 en\/\r\n\u2502 \u251c\u2500\u2500 train\/\r\n\u2502 \u2502 \u251c\u2500\u2500 shard_0.csv\r\n\u2502 \u2502 \u2514\u2500\u2500 shard_1.csv\r\n\u2502 \u2514\u2500\u2500 test\/\r\n\u2502 \u251c\u2500\u2500 shard_0.csv\r\n\u2502 \u2514\u2500\u2500 shard_1.csv\r\n\u2514\u2500\u2500 fr\/\r\n \u251c\u2500\u2500 train\/\r\n \u2502 \u251c\u2500\u2500 shard_0.csv\r\n \u2502 \u2514\u2500\u2500 shard_1.csv\r\n \u2514\u2500\u2500 test\/\r\n \u251c\u2500\u2500 shard_0.csv\r\n \u2514\u2500\u2500 shard_1.csv\r\n```\r\n\r\ncc @stevhliu @albertvillanova ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578\/reactions","total_count":5,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":3,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4578\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4577","id":1285703775,"node_id":"PR_kwDODunzps46aTWL","number":4577,"title":"Add authentication tip to 
`load_dataset`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656331534000,"updated_at":1656940395000,"closed_at":1656939690000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`\/`load_dataset_builder`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4577\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4577","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4577","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4577.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4577.patch","merged_at":1656939690000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4576","id":1285698576,"node_id":"PR_kwDODunzps46aSN_","number":4576,"title":"Include `metadata.jsonl` in resolved data 
files","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv\/parquet\/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**\/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?","Yes, that's indeed the problem. My solution in https:\/\/github.com\/huggingface\/datasets\/commit\/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https:\/\/github.com\/huggingface\/datasets\/blob\/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95\/src\/datasets\/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?","@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? 
This could be an additional pattern [\"metadata.jsonl\", \"**\/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n","The CI still struggles but you can merge since at least one of the two WIN CI succeeded"],"created_at":1656331289000,"updated_at":1656679495000,"closed_at":1656584132000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Include `metadata.jsonl` in resolved data files.\r\n\r\nFix #4548 \r\n\r\n@lhoestq ~~https:\/\/github.com\/huggingface\/datasets\/commit\/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https:\/\/github.com\/huggingface\/datasets\/commit\/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https:\/\/github.com\/huggingface\/datasets\/commit\/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https:\/\/github.com\/huggingface\/datasets\/commit\/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4576\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4576","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4576","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4576.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4576.patch","merged_at":1656584131000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4575","id":1285446700,"node_id":"I_kwDODunzps5Mnlws","number":4575,"title":"Problem about wmt17 zh-en 
dataset","user":{"login":"winterfell2021","id":85819194,"node_id":"MDQ6VXNlcjg1ODE5MTk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/85819194?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/winterfell2021","html_url":"https:\/\/github.com\/winterfell2021","followers_url":"https:\/\/api.github.com\/users\/winterfell2021\/followers","following_url":"https:\/\/api.github.com\/users\/winterfell2021\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/winterfell2021\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/winterfell2021\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/winterfell2021\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/winterfell2021\/orgs","repos_url":"https:\/\/api.github.com\/users\/winterfell2021\/repos","events_url":"https:\/\/api.github.com\/users\/winterfell2021\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/winterfell2021\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Running into the same error with `wmt17\/zh-en`, `wmt18\/zh-en` and `wmt19\/zh-en`.","@albertvillanova @lhoestq Could you take a look at this issue?","@winterfell2021 Hi, I wonder where the code you provided should be added. 
I tried to add them in the `array_cast` function in `datasets\/table.py`; however, the 'zh' item is None.","I found that some 'zh' items are None while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n    py_array = array.to_pylist()\r\n    data_list = []\r\n    for vo in py_array:\r\n        tmp = {\r\n            'en': vo['en'],\r\n        }\r\n        if vo.get('zh'):\r\n            tmp['zh'] = vo['zh']\r\n        else:\r\n            tmp['zh'] = vo['c[hn]']\r\n        data_list.append(tmp)\r\n    array = pa.array(data_list, type=pa.struct([\r\n        pa.field('en', pa.string()),\r\n        pa.field('zh', pa.string()),\r\n    ]))\r\n```","I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`"],"created_at":1656318942000,"updated_at":1661248862000,"closed_at":1661248821000,"author_association":"NONE","active_lock_reason":null,"body":"It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`.\r\nSo when using `data = load_dataset('wmt17', \"zh-en\")` to load the wmt17 zh-en dataset, the following exception is raised:\r\n```\r\nTraceback (most recent call last):\r\n  File \"train.py\", line 78, in <module>\r\n    data = load_dataset(args.dataset, \"zh-en\")\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py\", line 1684, in load_dataset\r\n    use_auth_token=use_auth_token,\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py\", line 705, in download_and_prepare\r\n    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py\", line 1221, in _download_and_prepare\r\n    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py\", line 793, in _download_and_prepare\r\n    self._prepare_split(split_generator, **prepare_split_kwargs)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py\", line 1215, in _prepare_split\r\n    num_examples, num_bytes = writer.finalize()\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py\", line 533, in finalize\r\n    self.write_examples_on_file()\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py\", line 410, in write_examples_on_file\r\n    self.write_batch(batch_examples=batch_examples)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py\", line 503, in write_batch\r\n    arrays.append(pa.array(typed_sequence))\r\n  File \"pyarrow\/array.pxi\", line 230, in pyarrow.lib.array\r\n  File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py\", line 198, in __arrow_array__\r\n    out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/table.py\", line 1675, in wrapper\r\n    return func(array, *args, **kwargs)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/table.py\", line 1846, in cast_array_to_feature\r\n    return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/table.py\", line 1675, in wrapper\r\n    return func(array, *args, **kwargs)\r\n  File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/table.py\", line 1756, in array_cast\r\n    raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{pa_type}\")\r\nTypeError: Couldn't cast array of type\r\nstruct<c[hn]: string, en: string, zh: string>\r\nto\r\nstruct<en: string, zh: string>\r\n```\r\n\r\nSo the solution to this problem is to change the original array manually:\r\n```\r\nif 'c[hn]' in str(array.type):\r\n    py_array = array.to_pylist()\r\n    data_list = []\r\n    for vo in py_array:\r\n        tmp = {\r\n            'en': vo['en'],\r\n        }\r\n        if 'zh' not in vo:\r\n            tmp['zh'] = vo['c[hn]']\r\n        else:\r\n            tmp['zh'] = vo['zh']\r\n        data_list.append(tmp)\r\n    array = pa.array(data_list, type=pa.struct([\r\n        pa.field('en', pa.string()),\r\n        pa.field('zh', pa.string()),\r\n    ]))\r\n```\r\n\r\nTherefore, maybe a corrected version of the original casia2015 file needs to be updated","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4575\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4574","id":1285380616,"node_id":"PR_kwDODunzps46ZOpZ","number":4574,"title":"Support streaming mlsum dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\nImportError while loading conftest '\/home\/runner\/work\/datasets\/datasets\/tests\/conftest.py'.\r\ntests\/conftest.py:13: in <module>\r\n    import datasets\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/__init__.py:37: in <module>\r\n    from .arrow_dataset import
Dataset\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py:62: in <module>\r\n    from .arrow_reader import ArrowReader\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/arrow_reader.py:29: in <module>\r\n    from .download.download_config import DownloadConfig\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/download\/__init__.py:10: in <module>\r\n    from .streaming_download_manager import StreamingDownloadManager\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/download\/streaming_download_manager.py:20: in <module>\r\n    from ..filesystems import COMPRESSION_FILESYSTEMS\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/filesystems\/__init__.py:13: in <module>\r\n    from .s3filesystem import S3FileSystem  # noqa: F401\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/datasets\/filesystems\/s3filesystem.py:1: in <module>\r\n    import s3fs\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/s3fs\/__init__.py:1: in <module>\r\n    from .core import S3FileSystem, S3File\r\n\/opt\/hostedtoolcache\/Python\/3.6.15\/x64\/lib\/python3.6\/site-packages\/s3fs\/core.py:12: in <module>\r\n    from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE   ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?","Maybe you can try setting the same minimum version as fsspec? `s3fs>=2021.11.1`","Yes, I have checked that they both require the same version.\r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1","I have updated all min versions so that they are compatible with one another.
I'm pushing again...","Thanks !","Nice!"],"created_at":1656315423000,"updated_at":1658410650000,"closed_at":1658407200000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming mlsum dataset.\r\n\r\nThis PR:\r\n- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`\r\n - https:\/\/github.com\/fsspec\/filesystem_spec\/pull\/830\r\n- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`\r\n > s3fs 2021.8.1 requires fsspec==2021.08.1\r\n - see discussion: https:\/\/github.com\/huggingface\/datasets\/pull\/2858\/files#r700027326\r\n- updates the following requirements to be compatible with the previous ones and one with each other:\r\n - `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1)\r\n - `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1)\r\n - `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3)\r\n\r\nFix #4572.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4574\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4574","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4574","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4574.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4574.patch","merged_at":1658407200000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4573","id":1285023629,"node_id":"PR_kwDODunzps46YEEa","number":4573,"title":"Fix evaluation metadata for ncbi_disease","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4573). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1656275372000,"updated_at":1657120794000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4573\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4573","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4573","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4573.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4573.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4572","id":1285022499,"node_id":"I_kwDODunzps5Ml-Mj","number":4572,"title":"Dataset Viewer issue for mlsum","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https:\/\/gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."],"created_at":1656275057000,"updated_at":1658407201000,"closed_at":1658407201000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/mlsum\/viewer\/de\/train\n\n### Description\n\nThere's seems to be a problem with the download \/ streaming of this dataset:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4572\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4571","id":1284883289,"node_id":"I_kwDODunzps5MlcNZ","number":4571,"title":"Dataset Viewer issue for gsarti\/flores_101","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"ht
tps:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Related to https:\/\/github.com\/huggingface\/datasets\/issues\/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ","I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?"],"created_at":1656242349000,"updated_at":1662624998000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/gsarti\/flores_101\n\n### Description\n\nIt seems like streaming isn't supported for this dataset:\r\n\r\n```\r\nServer Error\r\nStatus code: 400\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https:\/\/dl.fbaipublicfiles.com\/flores101\/dataset\/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4571\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4570","id":1284846168,"node_id":"I_kwDODunzps5MlTJY","number":4570,"title":"Dataset sharding non-contiguous?","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked 
:smile:\r\n\r\nSorry about that.","Hi! You can pass `contiguous=True` to `.shard()` to get contiguous shards. More info on this and the default behavior can be found in the [docs](https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread \ud83d\ude04 ","Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley:)","@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ","This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/data\/Dataset#shard)."],"created_at":1656232445000,"updated_at":1656586847000,"closed_at":1656254180000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI'm not sure if this is a bug; more likely normal behavior, but I wanted to double-check.\r\nIs it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset? \r\n\r\nThis might be related to this pull request (https:\/\/github.com\/huggingface\/datasets\/pull\/4466) but I have to admit I did not properly look into the changes made.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nmax_shard_size = convert_file_size_to_int('300MB')\r\ndataset_nbytes = dataset.data.nbytes\r\nnum_shards = int(dataset_nbytes \/ max_shard_size) + 1\r\nnum_shards = max(num_shards, 1)\r\nprint(f\"{num_shards=}\")\r\nfor shard_index in range(num_shards):\r\n    shard = dataset.shard(num_shards=num_shards, index=shard_index)\r\n    shard.to_parquet(f\"tokenized\/tokenized-{shard_index:03d}.parquet\")\r\nos.listdir('tokenized\/')\r\n```\r\n\r\n## Expected results\r\nI expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example\r\n\r\n## Actual results\r\nOnly the first element is the same; i.e.
`dataset[0]` is the same as `shard_1[0]`\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.4\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4570\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4569","id":1284833694,"node_id":"I_kwDODunzps5MlQGe","number":4569,"title":"Dataset Viewer issue for sst2","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ","Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). 
Closing this :)"],"created_at":1656228774000,"updated_at":1656311868000,"closed_at":1656311868000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/sst2\r\n\r\n### Description\r\n\r\nNot sure what is causing this, however it seems that `load_dataset(\"sst2\")` also hangs (even though it downloads the files without problem):\r\n\r\n```\r\nStatus code: 400\r\nException: Exception\r\nMessage: Give up after 5 attempts with ConnectionError\r\n```\r\n\r\n### Owner\r\n\r\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4569\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4568","id":1284655624,"node_id":"I_kwDODunzps5MkkoI","number":4568,"title":"XNLI cache reload is very slow","user":{"login":"Muennighoff","id":62820084,"node_id":"MDQ6VXNlcjYyODIwMDg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/62820084?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Muennighoff","html_url":"https:\/\/github.com\/Muennighoff","followers_url":"https:\/\/api.github.com\/users\/Muennighoff\/followers","following_url":"https:\/\/api.github.com\/users\/Muennighoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Muennighoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Muennighoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Muennighoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Muennighoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/Muennighoff\/repos","events_url":"https:\/\/api.github.com\/users\/Muennighoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Muennighoff\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n\"Screen\r\nTested on both stable and dev version. ","Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.","Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. 
More info is available [here](https:\/\/huggingface.co\/docs\/datasets\/master\/en\/loading#offline)."],"created_at":1656175436000,"updated_at":1656944980000,"closed_at":1656944980000,"author_association":"NONE","active_lock_reason":null,"body":"### Reproduce\r\n\r\nUsing `2.3.3.dev0`\r\n\r\n`from datasets import load_dataset`\r\n`load_dataset(\"xnli\", \"en\")`\r\nTurn off Internet\r\n`load_dataset(\"xnli\", \"en\")`\r\n\r\nI cancelled the second `load_dataset` eventually because it took super long. It would be great to have something to specify e.g. `only_load_from_cache` and avoid the library trying to download when there is no Internet. If I leave it running it works but takes way longer than when there is Internet. I would expect loading from cache to take the same amount of time regardless of whether there is Internet.\r\n```\r\n---------------------------------------------------------------------------\r\ngaierror Traceback (most recent call last)\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connection.py in _new_conn(self)\r\n 174 conn = connection.create_connection(\r\n--> 175 (self._dns_host, self.port), self.timeout, **extra_kw\r\n 176 )\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/util\/connection.py in create_connection(address, timeout, source_address, socket_options)\r\n 71 \r\n---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\r\n 73 af, socktype, proto, canonname, sa = res\r\n\r\n\/opt\/conda\/lib\/python3.7\/socket.py in getaddrinfo(host, port, family, type, proto, flags)\r\n 751 addrlist = []\r\n--> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\r\n 753 af, socktype, proto, canonname, sa = res\r\n\r\ngaierror: [Errno -3] Temporary failure in name resolution\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nKeyboardInterrupt Traceback (most recent call last)\r\n\/tmp\/ipykernel_33\/3594208039.py in <module>\r\n----> 1 load_dataset(\"xnli\", \"en\")\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 revision=revision,\r\n 1674 use_auth_token=use_auth_token,\r\n-> 1675 **config_kwargs,\r\n 1676 )\r\n 1677 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1494 download_mode=download_mode,\r\n 1495 data_dir=data_dir,\r\n-> 1496 data_files=data_files,\r\n 1497 )\r\n 1498 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1182 download_config=download_config,\r\n 1183 download_mode=download_mode,\r\n-> 1184 dynamic_modules_path=dynamic_modules_path,\r\n 1185 ).get_module()\r\n 1186 elif path.count(\"\/\") == 1: # community dataset on the Hub\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path)\r\n 506 self.dynamic_modules_path = dynamic_modules_path\r\n 507 assert self.name.count(\"\/\") == 0\r\n--> 508 increase_load_count(name, resource_type=\"dataset\")\r\n 509 \r\n 510 def
download_loading_script(self, revision: Optional[str]) -> str:\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in increase_load_count(name, resource_type)\r\n 166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS:\r\n 167 try:\r\n--> 168 head_hf_s3(name, filename=name + \".py\", dataset=(resource_type == \"dataset\"))\r\n 169 except Exception:\r\n 170 pass\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries)\r\n 93 return http_head(\r\n 94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset),\r\n---> 95 max_retries=max_retries,\r\n 96 )\r\n 97 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)\r\n 445 allow_redirects=allow_redirects,\r\n 446 timeout=timeout,\r\n--> 447 max_retries=max_retries,\r\n 448 )\r\n 449 return response\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)\r\n 366 tries += 1\r\n 367 try:\r\n--> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)\r\n 369 success = True\r\n 370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/requests\/api.py in request(method, url, **kwargs)\r\n 59 # cases, and look like a memory leak in others.\r\n 60 with sessions.Session() as session:\r\n---> 61 return session.request(method=method, url=url, **kwargs)\r\n 62 \r\n 63 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/requests\/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)\r\n 527 }\r\n 528 send_kwargs.update(settings)\r\n--> 529 resp = self.send(prep, **send_kwargs)\r\n 530 \r\n 531 return resp\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/requests\/sessions.py in send(self, request, **kwargs)\r\n 643 \r\n 644 # Send the request\r\n--> 645 r = adapter.send(request, **kwargs)\r\n 646 \r\n 647 # Total elapsed time of the request (approximately)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/requests\/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)\r\n 448 decode_content=False,\r\n 449 retries=self.max_retries,\r\n--> 450 timeout=timeout\r\n 451 )\r\n 452 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\r\n 708 body=body,\r\n 709 headers=headers,\r\n--> 710 chunked=chunked,\r\n 711 )\r\n 712 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)\r\n 384 # Trigger any extra validation we need to do.\r\n 385 try:\r\n--> 386 self._validate_conn(conn)\r\n 387 except (SocketTimeout, BaseSSLError) as e:\r\n 388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py in _validate_conn(self, conn)\r\n 1038 # Force connect early to allow us to validate the connection.\r\n 1039 if not getattr(conn, \"sock\", None): # AppEngine 
might not have `.sock`\r\n-> 1040 conn.connect()\r\n 1041 \r\n 1042 if not conn.is_verified:\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connection.py in connect(self)\r\n 356 def connect(self):\r\n 357 # Add certificate verification\r\n--> 358 self.sock = conn = self._new_conn()\r\n 359 hostname = self.host\r\n 360 tls_in_tls = False\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/urllib3\/connection.py in _new_conn(self)\r\n 173 try:\r\n 174 conn = connection.create_connection(\r\n--> 175 (self._dns_host, self.port), self.timeout, **extra_kw\r\n 176 )\r\n 177 \r\n\r\nKeyboardInterrupt: \r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4568\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4567","id":1284528474,"node_id":"PR_kwDODunzps46Wh0-","number":4567,"title":"Add evaluation data for amazon_reviews_multi","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4567). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1656150052000,"updated_at":1657120794000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4567\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4567","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4567","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4567.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4567.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4566","id":1284397594,"node_id":"I_kwDODunzps5Mjloa","number":4566,"title":"Document link #load_dataset_enhancing_performance points to nowhere","user":{"login":"subercui","id":11674033,"node_id":"MDQ6VXNlcjExNjc0MDMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11674033?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/subercui","html_url":"https:\/\/github.com\/subercui","followers_url":"https:\/\/api.github.com\/users\/subercui\/followers","following_url":"https:\/\/api.github.com\/users\/subercui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/subercui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/subercui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/subercui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/subercui\/orgs","repos_url":"https:\/\/api.github.com\/users\/subercui\/repos","events_url":"https:\/\/api.github.com\/users\/subercui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/subercui\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?","https:\/\/github.com\/huggingface\/datasets\/blame\/master\/docs\/source\/cache.mdx#L93\r\n\r\nThere already seems to be an anchor here. Somehow it doesn't work.
I am not very familiar with how this online documentation works."],"created_at":1656119899000,"updated_at":1656534594000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/11674033\/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png)\r\n\r\n\r\nThe [load_dataset_enhancing_performance](https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#load_dataset_enhancing_performance) link [here](https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/package_reference\/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere, I guess it should point to https:\/\/huggingface.co\/docs\/datasets\/v2.3.2\/en\/cache#improve-performance?\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4566\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4565","id":1284141666,"node_id":"I_kwDODunzps5MinJi","number":4565,"title":"Add UFSC OCPap dataset","user":{"login":"johnnv1","id":20444345,"node_id":"MDQ6VXNlcjIwNDQ0MzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20444345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnnv1","html_url":"https:\/\/github.com\/johnnv1","followers_url":"https:\/\/api.github.com\/users\/johnnv1\/followers","following_url":"https:\/\/api.github.com\/users\/johnnv1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnnv1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnnv1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnnv1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnnv1\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnnv1\/repos","events_url":"https:\/\/api.github.com\/users\/johnnv1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnnv1\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I will add this directly on the hub (same as #4486)\u2014in https:\/\/huggingface.co\/lapix"],"created_at":1656101274000,"updated_at":1657134182000,"closed_at":1657134182000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)\r\n- **Description:** The UFSC OCPap dataset 
comprises 9,797 labeled images of 1200x1600 pixels acquired from oral brush samples of distinct patients: 5 slides from patients diagnosed with cancer and 3 from healthy patients.\r\n- **Paper:** https:\/\/dx.doi.org\/10.2139\/ssrn.4119212\r\n- **Data:** https:\/\/data.mendeley.com\/datasets\/dr7ydy9xbk\/1\r\n- **Motivation:** real data of Pap-stained oral cytology samples\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4565\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4564","id":1283932333,"node_id":"PR_kwDODunzps46UqUN","number":4564,"title":"Support streaming bookcorpus dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656087219000,"updated_at":1657100088000,"closed_at":1657099384000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming bookcorpus 
dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4564\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4564","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4564","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4564.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4564.patch","merged_at":1657099384000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4563","id":1283914383,"node_id":"PR_kwDODunzps46UmZQ","number":4563,"title":"Support streaming allocine dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656086103000,"updated_at":1656089697000,"closed_at":1656089081000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming allocine dataset.\r\n\r\nFix #4562.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4563\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4563","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4563","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4563.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4563.patch","merged_at":1656089081000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4562","id":1283779557,"node_id":"I_kwDODunzps5MhOvl","number":4562,"title":"Dataset Viewer issue for allocine","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I removed my assignment as @huggingface\/datasets should be able to answer better than me\r\n","Let me have a look...","Thanks for the quick fix @albertvillanova ","Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).","> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! 
I'll look out for this next time and implement the fix myself :)"],"created_at":1656078638000,"updated_at":1656311972000,"closed_at":1656089081000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/allocine\n\n### Description\n\nNot sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:\r\n\r\n```\r\nStatus code: 400\r\nException: AttributeError\r\nMessage: 'TarContainedFile' object has no attribute 'readable'\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4562\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4561","id":1283624242,"node_id":"PR_kwDODunzps46TnVe","number":4561,"title":"Add evaluation data to acronym_identification","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1656069453000,"updated_at":1656322675000,"closed_at":1656319762000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4561\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4561","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4561","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4561.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4561.patch","merged_at":1656319762000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4560","id":1283558873,"node_id":"PR_kwDODunzps46TY9n","number":4560,"title":"Add evaluation metadata to imagenet-1k","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4560). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1656065561000,"updated_at":1657120794000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4560\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4560","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4560","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4560.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4560.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4559","id":1283544937,"node_id":"PR_kwDODunzps46TV7-","number":4559,"title":"Add action names in schema_guided_dstc8 dataset card","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656064801000,"updated_at":1656068068000,"closed_at":1656067427000,"author_association":"MEMBER","active_lock_reason":null,"body":"As asked in https:\/\/huggingface.co\/datasets\/schema_guided_dstc8\/discussions\/1, I added the action names in the dataset 
card","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4559\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4559","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4559","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4559.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4559.patch","merged_at":1656067427000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4558","id":1283479650,"node_id":"PR_kwDODunzps46THl_","number":4558,"title":"Add evaluation metadata to wmt14","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4558). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1656061734000,"updated_at":1657200016000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4558\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4558","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4558","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4558.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4558.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4557","id":1283473889,"node_id":"PR_kwDODunzps46TGZK","number":4557,"title":"Add evaluation metadata to wmt16","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4557). 
All of your documentation changes will be reflected on that endpoint.","> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)"],"created_at":1656061463000,"updated_at":1657200090000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4557\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4557","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4557","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4557.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4557.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4556","id":1283462881,"node_id":"I_kwDODunzps5MgBbh","number":4556,"title":"Dataset Viewer issue for conll2003","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed, thanks."],"created_at":1656060918000,"updated_at":1656064239000,"closed_at":1656064239000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/conll2003\/viewer\/conll2003\/test\n\n### Description\n\nSeems like a cache problem with this config \/ split:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: FileNotFoundError\r\nMessage: [Errno 2] No such file or directory: '\/cache\/modules\/datasets_modules\/datasets\/conll2003\/__init__.py'\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4556\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4555","id":1283451651,"node_id":"I_kwDODunzps5Mf-sD","number":4555,"title":"Dataset Viewer issue for 
xtreme","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed, thanks."],"created_at":1656060368000,"updated_at":1656064245000,"closed_at":1656064245000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/xtreme\/viewer\/PAN-X.de\/test\n\n### Description\n\nThere seems to be a problem with the cache of this config \/ split:\r\n\r\n```\r\nServer 
error\r\nStatus code: 400\r\nException: FileNotFoundError\r\nMessage: [Errno 2] No such file or directory: '\/cache\/modules\/datasets_modules\/datasets\/xtreme\/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e\/xtreme.py'\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4555\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4554","id":1283369453,"node_id":"PR_kwDODunzps46Sv_f","number":4554,"title":"Fix WMT dataset loading issue and docs update (Re-opened)","user":{"login":"khushmeeet","id":8711912,"node_id":"MDQ6VXNlcjg3MTE5MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8711912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/khushmeeet","html_url":"https:\/\/github.com\/khushmeeet","followers_url":"https:\/\/api.github.com\/users\/khushmeeet\/followers","following_url":"https:\/\/api.github.com\/users\/khushmeeet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/khushmeeet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/khushmeeet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/khushmeeet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/khushmeeet\/orgs","repos_url":"https:\/\/api.github.com\/users\/khushmeeet\/repos","events_url":"https:\/\/api.github.com\/users\/khushmeeet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/khushmeeet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1656055576000,"updated_at":1657294760000,"closed_at":1657294064000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR is a fix for #4354 \r\n\r\nChanges are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. 
And READMEs are updated for the corresponding datasets.\r\n\r\nLet me know, if any additional changes are required.\r\n\r\nThanks","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4554\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4554","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4554","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4554.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4554.patch","merged_at":1657294064000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4553","id":1282779560,"node_id":"PR_kwDODunzps46Q1q7","number":4553,"title":"Stop dropping columns in to_tf_dataset() before we load batches","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.","Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!","@lhoestq Test is in!"],"created_at":1656008465000,"updated_at":1656961213000,"closed_at":1656960541000,"author_association":"MEMBER","active_lock_reason":null,"body":"`to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. 
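To make that concrete, here is a rough sketch of the idea (a hypothetical helper, not the actual `to_tf_dataset()` internals): keep every column while the batch is assembled, so an on-the-fly transform can read whatever it needs, and filter the keys only afterwards.

```python
def fetch_filtered_batch(dataset, indices, columns_to_keep):
    # Indexing the dataset runs any with_transform/set_transform function,
    # which may read columns that the final TF batch will not expose.
    batch = dataset[indices]
    # Drop the unwanted keys only after the transform has run.
    return {key: value for key, value in batch.items() if key in columns_to_keep}
```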
Since there's no real way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it.\r\n\r\ncc @amyeroberts and https:\/\/github.com\/huggingface\/notebooks\/pull\/202","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4553\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4553","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4553","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4553.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4553.patch","merged_at":1656960541000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4552","id":1282615646,"node_id":"PR_kwDODunzps46QSHV","number":4552,"title":"Tell users to upload on the hub directly","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks ! I updated the two remaining files"],"created_at":1655999272000,"updated_at":1656258586000,"closed_at":1656257951000,"author_association":"MEMBER","active_lock_reason":null,"body":"As noted in https:\/\/github.com\/huggingface\/datasets\/pull\/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs.\r\n\r\nMoreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews.\r\n\r\nFinally I removed the _previous good reasons_ to add a dataset on GitHub to only keep this one:\r\n\r\n> In some rare cases it makes more sense to open a PR on GitHub. 
For example when you are not the author of the dataset and there is no clear organization \/ namespace that you can put the dataset under.\r\n\r\nDoes it sound good to you @albertvillanova @julien-c ?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552\/reactions","total_count":3,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":3,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4552\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4552","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4552","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4552.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4552.patch","merged_at":1656257951000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4551","id":1282534807,"node_id":"PR_kwDODunzps46QAV-","number":4551,"title":"Perform hidden file check on relative data file path","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https:\/\/github.com\/huggingface\/datasets\/issues\/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?","I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. 
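As an illustration of the check being discussed (a sketch only, not the regex patterns actually merged), the key point is to test the path components relative to the base data directory, so that special directories higher up in the absolute path, such as the `/__w/` workspace GitHub Actions uses for container jobs, no longer hide data files:

```python
import os

def is_hidden_or_special(data_file: str, base_path: str) -> bool:
    # Judge only the components below base_path, never the absolute path.
    relative = os.path.relpath(data_file, base_path)
    parts = relative.replace(os.sep, "/").split("/")
    return any(part.startswith((".", "__")) for part in parts)

# A double-underscore *ancestor* of base_path no longer hides data files:
assert not is_hidden_or_special("/__w/repo/data/train.csv", "/__w/repo/data")
# ...while hidden or special components below base_path are still skipped:
assert is_hidden_or_special("/__w/repo/data/__pycache__/x.csv", "/__w/repo/data")
assert is_hidden_or_special("/__w/repo/data/.cache/x.csv", "/__w/repo/data")
```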
Also, for consistency with iter_files\/iter_archive, which already ignore them","Very elegant solution! Feel free to merge if the CI is green after adding the tests.","CI failure is unrelated to this PR"],"created_at":1655995751000,"updated_at":1656600560000,"closed_at":1656599898000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4549 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4551\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4551","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4551","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4551.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4551.patch","merged_at":1656599898000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4550","id":1282374441,"node_id":"I_kwDODunzps5Mb3sp","number":4550,"title":"imdb source error","user":{"login":"Muhtasham","id":20128202,"node_id":"MDQ6VXNlcjIwMTI4MjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20128202?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Muhtasham","html_url":"https:\/\/github.com\/Muhtasham","followers_url":"https:\/\/api.github.com\/users\/Muhtasham\/followers","following_url":"https:\/\/api.github.com\/users\/Muhtasham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Muhtasham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Muhtasham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Muhtasham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Muhtasham\/orgs","repos_url":"https:\/\/api.github.com\/users\/Muhtasham\/repos","events_url":"https:\/\/api.github.com\/users\/Muhtasham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Muhtasham\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, @Muhtasham.\r\n\r\nIndeed, the IMDB dataset has been inaccessible since yesterday, because the data is hosted on the data owners' servers at Stanford (http:\/\/ai.stanford.edu\/) and these are down due to a power outage caused by a fire: https:\/\/twitter.com\/StanfordAILab\/status\/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaround, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", 
revision=\"tmp-fix-imdb\")\r\n```\r\n"],"created_at":1655989372000,"updated_at":1655992025000,"closed_at":1655992024000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nimdb dataset not loading\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"imdb\")\r\n```\r\n\r\n\r\n## Expected results\r\n\r\n\r\n## Actual results\r\n```bash\r\n06\/23\/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source\r\n06\/23\/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/aclImdb_v1.tar.gz timed out, retrying... [1.0]\r\n.....\r\nConnectionError: Couldn't reach http:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError(\"HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: \/~amaas\/data\/sentiment\/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))\")))\r\n```\r\n## Environment info\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4550\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4549","id":1282312975,"node_id":"I_kwDODunzps5MbosP","number":4549,"title":"FileNotFoundError when passing a data_file inside a directory starting with double 
underscores","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`","We're working on a fix ;)"],"created_at":1655986764000,"updated_at":1656599898000,"closed_at":1656599898000,"author_association":"MEMBER","active_lock_reason":null,"body":"Bug experienced in 
the `accelerate` CI: https:\/\/github.com\/huggingface\/accelerate\/runs\/7016055148?check_suite_focus=true\r\n\r\nThis is related to https:\/\/github.com\/huggingface\/datasets\/pull\/4505 and the changes from https:\/\/github.com\/huggingface\/datasets\/pull\/4412","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4549\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4548","id":1282218096,"node_id":"I_kwDODunzps5MbRhw","number":4548,"title":"Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories\/do not have \"{split}_\" prefix","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url
":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)"],"created_at":1655981937000,"updated_at":1656584132000,"closed_at":1656584132000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and therefore ignored. \r\nThis happens when a directory is structured like as follows:\r\n```\r\ntrain\/\r\n file_1.jpg\r\n file_2.jpg\r\ntest\/\r\n file_3.jpg\r\n file_4.jpg\r\nmetadata.jsonl\r\n```\r\nor like as follows:\r\n```\r\ntrain_file_1.jpg\r\ntrain_file_2.jpg\r\ntest_file_3.jpg\r\ntest_file_4.jpg\r\nmetadata.jsonl\r\n```\r\nThe same for HF repos.\r\n\r\nbecause it's ignored by the patterns [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/data_files.py#L29)\r\n\r\n@lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in imagefolder\/audiofolder code? 
In `data_files.py` would me more general but I don't know if there are any other cases when that might be needed.\r\n ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4548\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4547","id":1282160517,"node_id":"PR_kwDODunzps46Ot5u","number":4547,"title":"[CI] Fix some warnings","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR","good catch, I thought I resolved them all sorry","Alright it should be good now"],"created_at":1655979049000,"updated_at":1656425457000,"closed_at":1656424794000,"author_association":"MEMBER","active_lock_reason":null,"body":"There are some warnings in the CI that are annoying, I tried to remove most of them","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4547\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4547","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4547","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4547.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4547.patch","merged_at":1656424794000},"is_pull_request":true} 
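To make the layout problem reported in issue 4548 above concrete, here is a minimal sketch of the kind of fallback lookup being discussed. The `find_metadata` helper is hypothetical and for illustration only; it is not the pattern-resolution logic that `datasets`' `data_files.py` actually implements:

```python
from pathlib import Path
from typing import Optional


def find_metadata(data_dir: str) -> Optional[Path]:
    """Hypothetical helper: locate a metadata.jsonl for an imagefolder layout.

    Check for a per-split file first (e.g. train/metadata.jsonl), then fall
    back to a single root-level metadata.jsonl shared by all splits; the
    root-level layout is the one issue 4548 reports as being ignored.
    """
    root = Path(data_dir)
    per_split = sorted(root.glob("*/metadata.jsonl"))
    if per_split:
        return per_split[0]  # per-split metadata takes precedence
    shared = root / "metadata.jsonl"
    return shared if shared.exists() else None
```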
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4546","id":1282093288,"node_id":"PR_kwDODunzps46Oe_K","number":4546,"title":"[CI] fixing seqeval install in ci by pinning setuptools-scm","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655976277000,"updated_at":1655979856000,"closed_at":1655979224000,"author_association":"MEMBER","active_lock_reason":null,"body":"The latest setuptools-scm version supported on 3.6 is 6.4.2. 
However for some reason circleci has version 7, which doesn't work.\r\n\r\nI fixed this by pinning the version of setuptools-scm in the circleci job.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4544","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4546\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4546","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4546","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4546.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4546.patch","merged_at":1655979224000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4545","id":1280899028,"node_id":"PR_kwDODunzps46KV-y","number":4545,"title":"Make DuplicateKeysError more user friendly [For Issue #2556]","user":{"login":"VijayKalmath","id":20517962,"node_id":"MDQ6VXNlcjIwNTE3OTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20517962?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VijayKalmath","html_url":"https:\/\/github.com\/VijayKalmath","followers_url":"https:\/\/api.github.com\/users\/VijayKalmath\/followers","following_url":"https:\/\/api.github.com\/users\/VijayKalmath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VijayKalmath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VijayKalmath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VijayKalmath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VijayKalmath\/orgs","repos_url":"https:\/\/api.github.com\/users\/VijayKalmath\/repos","events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655931694000,"updated_at":1656409026000,"closed_at":1656408364000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"# What does this PR do?\r\n\r\n## Summary\r\n\r\n*The DuplicateKeysError does not provide any information regarding the examples which have the same key.*\r\n\r\n*This information is very helpful for debugging the dataset generator script.*\r\n\r\n## Additions\r\n-\r\n\r\n## Changes\r\n- Changed `DuplicateKeysError Class` in `src\/datasets\/keyhash.py` to add current 
index and duplicate_key_indices to error message.\r\n- Changed `check_duplicate_keys` function in `src\/datasets\/arrow_writer.py` to find indices of examples with duplicate hash if duplicate keys are found.\r\n\r\n## Deletions\r\n-\r\n\r\n## To do : \r\n- [x] Find way to find and print path `` in Error message \r\n\r\n## Issues Addressed : \r\n\r\nFixes #2556 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4545\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4545","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4545","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4545.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4545.patch","merged_at":1656408364000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4544","id":1280500340,"node_id":"I_kwDODunzps5MUuJ0","number":4544,"title":"[CI] seqeval installation fails sometimes on python 3.6","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/
lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1655915723000,"updated_at":1655979224000,"closed_at":1655979224000,"author_association":"MEMBER","active_lock_reason":null,"body":"The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail.\r\n\r\nThe installation fails because of this error:\r\n```\r\nCollecting seqeval\r\n Downloading seqeval-1.2.2.tar.gz (43 kB)\r\n\r\n\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c | 10 kB 42.1 MB\/s eta 0:00:01\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 20 kB 53.3 MB\/s eta 0:00:01\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c | 30 kB 67.2 MB\/s eta 0:00:01\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 40 kB 76.1 MB\/s eta 0:00:01\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 43 kB 10.0 MB\/s \r\n Preparing metadata (setup.py) ... 
-\b \berror\r\n ERROR: Command errored out with exit status 1:\r\n command: \/home\/circleci\/.pyenv\/versions\/3.6.15\/bin\/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'\/tmp\/pip-install-1l96tbyj\/seqeval_b31086f711d84743abe6905d2aa9dade\/setup.py'\"'\"'; __file__='\"'\"'\/tmp\/pip-install-1l96tbyj\/seqeval_b31086f711d84743abe6905d2aa9dade\/setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base \/tmp\/pip-pip-egg-info-pf54_vqy\r\n cwd: \/tmp\/pip-install-1l96tbyj\/seqeval_b31086f711d84743abe6905d2aa9dade\/\r\n Complete output (22 lines):\r\n Traceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/tmp\/pip-install-1l96tbyj\/seqeval_b31086f711d84743abe6905d2aa9dade\/setup.py\", line 56, in \r\n 'Programming Language :: Python :: Implementation :: PyPy'\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/setuptools\/__init__.py\", line 143, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/distutils\/core.py\", line 108, in setup\r\n _setup_distribution = dist = klass(attrs)\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/setuptools\/dist.py\", line 442, in __init__\r\n k: v for k, v in attrs.items()\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/distutils\/dist.py\", line 281, in __init__\r\n self.finalize_options()\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/setuptools\/dist.py\", line 601, in finalize_options\r\n ep.load()(self, ep.name, value)\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/pkg_resources\/__init__.py\", line 2346, in load\r\n return self.resolve()\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/pkg_resources\/__init__.py\", line 2352, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"\/tmp\/pip-install-1l96tbyj\/seqeval_b31086f711d84743abe6905d2aa9dade\/.eggs\/setuptools_scm-7.0.2-py3.6.egg\/setuptools_scm\/__init__.py\", line 5\r\n from __future__ import annotations\r\n ^\r\n SyntaxError: future feature annotations is not defined\r\n ----------------------------------------\r\nWARNING: Discarding https:\/\/files.pythonhosted.org\/packages\/9d\/2d\/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d\/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https:\/\/pypi.org\/simple\/seqeval\/). 
Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.\r\n\r\n```\r\n\r\nfor example in https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/12665\/workflows\/93878eb9-a923-4b35-b2e7-c5e9b22f10ad\/jobs\/75300\r\n\r\nHere is a diff of the pip install logs until the error is reached: https:\/\/www.diffchecker.com\/VkQDLeQT\r\n\r\nThis could be caused by the latest updates of setuptools-scm","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4544\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4543","id":1280379781,"node_id":"PR_kwDODunzps46IiEp","number":4543,"title":"[CI] Fix upstream hub test url","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Remaining CI failures are unrelated to this fix, merging"],"created_at":1655912067000,"updated_at":1655915860000,"closed_at":1655915257000,"author_association":"MEMBER","active_lock_reason":null,"body":"Some tests were still using moon-staging instead of hub-ci.\r\n\r\nI also updated the token to use one dedicated to 
`datasets`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4543\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4543","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4543","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4543.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4543.patch","merged_at":1655915257000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4542","id":1280269445,"node_id":"I_kwDODunzps5MT1yF","number":4542,"title":"[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ","cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!","Noted and I will look into the thread in detail tomorrow once I log back in. ","@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. 
Not yet sure if it's good for modalities like images. ","> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok","So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I\/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ","> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)","Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ","@lhoestq @Rocketknight1 I worked on [this PoC](https:\/\/gist.github.com\/sayakpaul\/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ","Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode\/transform them into the format they need for training ? 
Users can use tf.image to do so for example","@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) \/ batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"\/tmp\/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh \/tmp\/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 \/tmp\/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 \/tmp\/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 \/tmp\/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 \/tmp\/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"\/tmp\/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in ()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~\/.local\/bin\/.virtualenvs\/jax\/lib\/python3.8\/site-packages\/tensorflow\/python\/data\/ops\/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~\/.local\/bin\/.virtualenvs\/jax\/lib\/python3.8\/site-packages\/tensorflow\/python\/data\/ops\/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b\/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~\/.local\/bin\/.virtualenvs\/jax\/lib\/python3.8\/site-packages\/tensorflow\/python\/ops\/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~\/.local\/bin\/.virtualenvs\/jax\/lib\/python3.8\/site-packages\/tensorflow\/python\/framework\/ops.py:7164, in 
raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object had succeeded, my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```","@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps:\/\/gist.github.com\/sayakpaul\/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings (with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarrow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ","Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :\/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types","If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https:\/\/gist.github.com\/sayakpaul\/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.","> IIUC, in my [latest PoC notebook](https:\/\/gist.github.com\/sayakpaul\/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https:\/\/github.com\/tensorflow\/io\/issues\/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? 
Any other alternative in mind ?","> Maybe it would take too much time to be worth exploring, but according to https:\/\/github.com\/tensorflow\/io\/issues\/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ","> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow\/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^","Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).","Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ","@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and\/or load samples with multiprocessing to improve throughput. Do you have any preferences there?","> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and\/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.","If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?","@lhoestq why one would convert to TFRecords after unbatching? ","> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ","Someone would like to try to dive into tfio to fix this ? 
Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https:\/\/github.com\/tensorflow\/io\/issues\/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)","> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ","I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https:\/\/colab.research.google.com\/drive\/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ","Here's a notebook showing the performance difference: https:\/\/colab.research.google.com\/gist\/sayakpaul\/d7ca67c90beb47e354942c9d8c0bd8ef\/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ","Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330\u00b5s\/image to 30ms\/image)","Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. "],"created_at":1655908920000,"updated_at":1661430028000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.\r\n\r\nIt seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https:\/\/www.tensorflow.org\/io\/api_docs\/python\/tfio\/arrow\/ArrowFeatherDataset\r\n\r\nFeather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). 
Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library.\r\n\r\nHere are a few points to explore:\r\n- [ ] check the performance of ArrowFeatherDataset in tf.data\r\n- [ ] check what would change if we were to switch to Feather if needed, in particular check that these are fine: memory mapping, typing, writing, reading to python objects, etc.\r\n\r\nWe would also need to implement sharding when loading a dataset (this will be done anyway for #546)\r\n\r\ncc @Rocketknight1 @gante feel free to comment in case I missed anything !\r\n\r\nI'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4542\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4541","id":1280161436,"node_id":"PR_kwDODunzps46HyPK","number":4541,"title":"Fix timestamp conversion from Pandas to Python datetime in streaming mode","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","CI failures are unrelated to this PR, merging"],"created_at":1655905201000,"updated_at":1655915967000,"closed_at":1655915349000,"author_association":"MEMBER","active_lock_reason":null,"body":"Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays.\r\nHowever, a timestamp array is always converted to datetime.datetime objects.\r\n\r\nThis created an inconsistency between streaming and non-streaming, e.g. 
the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.timestamp in streaming.\r\n\r\nI fixed this by always converting pd.Timestamp to datetime.datetime during the example encoding step.\r\nI fixed the same issue for pd.Timedelta as well. Finally I added an extra step of conversion for Series and DataFrame to take this into account in case such data are passed as Series or DataFrame.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4533\r\nRelated to https:\/\/github.com\/huggingface\/datasets-server\/issues\/397","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4541\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4541","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4541","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4541.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4541.patch","merged_at":1655915349000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4540","id":1280142942,"node_id":"I_kwDODunzps5MTW5e","number":4540,"title":"Avoid splitting by` .py` for the file.","user":{"login":"espoirMur","id":18573157,"node_id":"MDQ6VXNlcjE4NTczMTU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18573157?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/espoirMur","html_url":"https:\/\/github.com\/espoirMur","followers_url":"https:\/\/api.github.com\/users\/espoirMur\/followers","following_url":"https:\/\/api.github.com\/users\/espoirMur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/espoirMur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/espoirMur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/espoirMur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/espoirMur\/orgs","repos_url":"https:\/\/api.github.com\/users\/espoirMur\/repos","events_url":"https:\/\/api.github.com\/users\/espoirMur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/espoirMur\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for 
newcomers"}],"state":"closed","locked":false,"assignee":{"login":"VijayKalmath","id":20517962,"node_id":"MDQ6VXNlcjIwNTE3OTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20517962?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VijayKalmath","html_url":"https:\/\/github.com\/VijayKalmath","followers_url":"https:\/\/api.github.com\/users\/VijayKalmath\/followers","following_url":"https:\/\/api.github.com\/users\/VijayKalmath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VijayKalmath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VijayKalmath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VijayKalmath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VijayKalmath\/orgs","repos_url":"https:\/\/api.github.com\/users\/VijayKalmath\/repos","events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/received_events","type":"User","site_admin":false},"assignees":[{"login":"VijayKalmath","id":20517962,"node_id":"MDQ6VXNlcjIwNTE3OTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20517962?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VijayKalmath","html_url":"https:\/\/github.com\/VijayKalmath","followers_url":"https:\/\/api.github.com\/users\/VijayKalmath\/followers","following_url":"https:\/\/api.github.com\/users\/VijayKalmath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VijayKalmath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VijayKalmath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VijayKalmath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VijayKalmath\/orgs","repos_url":"https:\/\/api.github.com\/users\/VijayKalmath\/repos","events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VijayKalmath\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)","I will have a look.. \r\n\r\nThis weekend .. ","@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ","#self-assign"],"created_at":1655904415000,"updated_at":1657199864000,"closed_at":1657199864000,"author_association":"NONE","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/90b3a98065556fc66380cafd780af9b1814b9426\/src\/datasets\/load.py#L272\r\n\r\n\r\nHello, \r\nThank you for this library. \r\n\r\nI was using it and I ran into one edge case: 
my home folder name ends with `.py` (it is `\/home\/espoir.py`), so anytime I run the code to load a local module, this line fails, because after splitting it tries to save the code to my home directory.\r\n\r\n\r\nSteps to reproduce:\r\n\r\n- Have a home folder whose name ends with `.py`\r\n\r\n- Load a module from a local folder: \r\n`qa_dataset = load_dataset(\"src\/data\/build_qa_dataset.py\")`\r\nThis fails. \r\nA possible workaround would be to use pathlib at the mentioned line:\r\n\r\n` meta_path = Path(importable_local_file).parent.joinpath(\"metadata.json\")` this could alleviate the issue.\r\n\r\nLet me know what your thoughts are on this, and I can try to fix it with a PR.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4540\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4539","id":1279779829,"node_id":"PR_kwDODunzps46GfWv","number":4539,"title":"Replace deprecated logging.warn with logging.warning","user":{"login":"hugovk","id":1324225,"node_id":"MDQ6VXNlcjEzMjQyMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1324225?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugovk","html_url":"https:\/\/github.com\/hugovk","followers_url":"https:\/\/api.github.com\/users\/hugovk\/followers","following_url":"https:\/\/api.github.com\/users\/hugovk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugovk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugovk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugovk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugovk\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugovk\/repos","events_url":"https:\/\/api.github.com\/users\/hugovk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugovk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1655886749000,"updated_at":1655905403000,"closed_at":1655902311000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Replace `logging.warn` (deprecated in [Python 2.7, 2011](https:\/\/github.com\/python\/cpython\/commit\/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https:\/\/github.com\/python\/cpython\/commit\/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).\r\n\r\n* https:\/\/docs.python.org\/3\/library\/logging.html#logging.Logger.warning\r\n* 
https:\/\/github.com\/python\/cpython\/issues\/57444\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4539\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4539","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4539","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4539.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4539.patch","merged_at":1655902311000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4538","id":1279409786,"node_id":"I_kwDODunzps5MQj56","number":4538,"title":"Dataset Viewer issue for Pile of Law","user":{"login":"Breakend","id":1609857,"node_id":"MDQ6VXNlcjE2MDk4NTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1609857?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Breakend","html_url":"https:\/\/github.com\/Breakend","followers_url":"https:\/\/api.github.com\/users\/Breakend\/followers","following_url":"https:\/\/api.github.com\/users\/Breakend\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Breakend\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Breakend\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Breakend\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Breakend\/orgs","repos_url":"https:\/\/api.github.com\/users\/Breakend\/repos","events_url":"https:\/\/api.github.com\/users\/Breakend\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Breakend\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @Breakend, yes \u2013 we'll propose a solution today","Thanks so much, I appreciate it!","Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!","Awesome! Thanks for confirming. cc @severo ","Just for the record, screenshots of:\r\n\r\n- the doc\r\n\r\n- the dataset main page\r\n\r\n- the dataset viewer page\r\n"],"created_at":1655866120000,"updated_at":1656315023000,"closed_at":1656282382000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/pile-of-law\/pile-of-law\r\n\r\n### Description\r\n\r\nHi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests\/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?\r\n\r\nThanks so much! 
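\r\n\r\nFor reference, the fix confirmed in the comments above is a flag in the dataset card's YAML front matter. A minimal sketch of the top of the dataset's README.md:\r\n\r\n```\r\n---\r\nviewer: false\r\n---\r\n```\r\nWith this flag set, the viewer is hidden without enabling access requests or collecting user emails.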
\r\n\r\n### Owner\r\n\r\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538\/reactions","total_count":3,"+1":3,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4538\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4537","id":1279144310,"node_id":"PR_kwDODunzps46ESJn","number":4537,"title":"Fix WMT dataset loading issue and docs update","user":{"login":"khushmeeet","id":8711912,"node_id":"MDQ6VXNlcjg3MTE5MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8711912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/khushmeeet","html_url":"https:\/\/github.com\/khushmeeet","followers_url":"https:\/\/api.github.com\/users\/khushmeeet\/followers","following_url":"https:\/\/api.github.com\/users\/khushmeeet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/khushmeeet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/khushmeeet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/khushmeeet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/khushmeeet\/orgs","repos_url":"https:\/\/api.github.com\/users\/khushmeeet\/repos","events_url":"https:\/\/api.github.com\/users\/khushmeeet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/khushmeeet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface\/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```","Closing this PR due to unwanted commit changes. Will be opening a new PR for the same issue."],"created_at":1655848082000,"updated_at":1656054343000,"closed_at":1656054310000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR is a fix for #4354\r\n\r\nChanges are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets.\r\n\r\nAs I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e \".[dev]\"`. The issue is that `tensorflow-text` is not supported on M1s, and there is no supporting repo by Apple or Google. 
So I was not able to perform local testing.\r\n\r\nLet me know if any additional changes are required.\r\n\r\nThanks","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4537\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4537","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4537","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4537.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4537.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4536","id":1278734727,"node_id":"PR_kwDODunzps46C2z6","number":4536,"title":"Properly raise FileNotFound even if the dataset is private","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655831150000,"updated_at":1656413211000,"closed_at":1656412570000,"author_association":"MEMBER","active_lock_reason":null,"body":"`tests\/test_load.py::test_load_streaming_private_dataset` was failing because the Hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError, since it first checks for local files before checking the Hub.\r\n\r\nMoreover, when use_auth_token is not set (default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default - instead it should use no token. 
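\r\n\r\nTo illustrate the pitfall (a minimal sketch, not the actual test or patch; `repo_id` is a placeholder):\r\n```python\r\nfrom huggingface_hub import HfApi\r\n\r\n# token=None does NOT mean \"no authentication\" here: HfApi falls back to the\r\n# token saved locally by `huggingface-cli login`, so this call may succeed\r\n# even though an anonymous user would get a 401\r\nHfApi().dataset_info(repo_id, token=None)\r\n```\r\n\r\n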
It's currently not possible to ask for no token to be used, so as a workaround I simply set token=\"no-token\"","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4536\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4536","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4536","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4536.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4536.patch","merged_at":1656412570000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4535","id":1278365039,"node_id":"PR_kwDODunzps46BnXq","number":4535,"title":"Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/aec86ea4b790ccccc9b2e0376a496728b1c914cc\/src\/datasets\/config.py#L183\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/aec86ea4b790ccccc9b2e0376a496728b1c914cc\/src\/datasets\/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. 
I also edited the following code comment, which I found misleading as it's not actually storing the indices.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640\/src\/datasets\/arrow_dataset.py#L1122","_The documentation is not available anymore as the PR was closed or merged._","> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset on disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)","Ok cool, thanks a lot for the explanation @lhoestq, I was not sure about that :+1: I'll also add it there as you suggested!","CI failures are unrelated to this PR and fixed on master, merging"],"created_at":1655813929000,"updated_at":1656347109000,"closed_at":1656346476000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, even though the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to be propagated to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. So this PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`.\r\n\r\nThis is useful for tweaking the `batch_size` according to the VM specifications.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4535\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4535","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4535","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4535.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4535.patch","merged_at":1656346476000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4534","id":1277897197,"node_id":"PR_kwDODunzps46AFK_","number":4534,"title":"Add `tldr_news` 
dataset","user":{"login":"JulesBelveze","id":32683010,"node_id":"MDQ6VXNlcjMyNjgzMDEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32683010?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JulesBelveze","html_url":"https:\/\/github.com\/JulesBelveze","followers_url":"https:\/\/api.github.com\/users\/JulesBelveze\/followers","following_url":"https:\/\/api.github.com\/users\/JulesBelveze\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JulesBelveze\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JulesBelveze\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JulesBelveze\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JulesBelveze\/orgs","repos_url":"https:\/\/api.github.com\/users\/JulesBelveze\/repos","events_url":"https:\/\/api.github.com\/users\/JulesBelveze\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JulesBelveze\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent \ud83d\ude03 ","Thanks, we will update the guide ;)"],"created_at":1655787763000,"updated_at":1655994834000,"closed_at":1655821271000,"author_association":"NONE","active_lock_reason":null,"body":"This PR aims at adding support for a news dataset: `tldr news`.\r\n\r\nThis dataset is based on the daily [tldr tech newsletter](https:\/\/tldr.tech\/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4534\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4534","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4534","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4534.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4534.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4533","id":1277211490,"node_id":"I_kwDODunzps5MILNi","number":4533,"title":"Timestamp not returned as datetime objects in streaming 
mode","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1655746127000,"updated_at":1655915349000,"closed_at":1655915349000,"author_association":"MEMBER","active_lock_reason":null,"body":"As reported in (internal) https:\/\/github.com\/huggingface\/datasets-server\/issues\/397\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"ett\", name=\"h2\", split=\"test\", 
streaming=True)\r\n>>> d = next(iter(dataset))\r\n>>> d['start']\r\nTimestamp('2016-07-01 00:00:00')\r\n```\r\n\r\nwhile loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4533\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4532","id":1277167129,"node_id":"PR_kwDODunzps459kB7","number":4532,"title":"Add Video feature","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4532). All of your documentation changes will be reflected on that endpoint."],"created_at":1655743001000,"updated_at":1657120794000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The following adds a `Video` feature for encoding\/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. 
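\r\n\r\nA purely hypothetical usage sketch (the `Video` feature name comes from this PR, but the exact API below is assumed by analogy with the existing `Image` and `Audio` features):\r\n```python\r\nfrom datasets import Dataset\r\nfrom datasets.features import Video  # hypothetical import path, by analogy with Image\/Audio\r\n\r\n# store a video file path, then cast the column so examples are\r\n# encoded\/decoded on the fly from in-memory bytes\r\nds = Dataset.from_dict({\"video\": [\"path\/to\/clip.mp4\"]}).cast_column(\"video\", Video())\r\nexample = ds[0][\"video\"]  # decoding happens lazily here\r\n```\r\n\r\n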
Because of that, and because the tool I used under the hood is not very mature, I leave this as a draft idea that we can use to build off of.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4532\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4532","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4532","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4532.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4532.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4531","id":1277054172,"node_id":"I_kwDODunzps5MHkzc","number":4531,"title":"Dataset Viewer issue for CSV datasets","user":{"login":"merveenoyan","id":53175384,"node_id":"MDQ6VXNlcjUzMTc1Mzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53175384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/merveenoyan","html_url":"https:\/\/github.com\/merveenoyan","followers_url":"https:\/\/api.github.com\/users\/merveenoyan\/followers","following_url":"https:\/\/api.github.com\/users\/merveenoyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/merveenoyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/merveenoyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/merveenoyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/merveenoyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/merveenoyan\/repos","events_url":"https:\/\/api.github.com\/users\/merveenoyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/merveenoyan\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["this should now be fixed","Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it.\r\n\r\n(screenshot)\r\n"],"created_at":1655736984000,"updated_at":1655800126000,"closed_at":1655800107000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/scikit-learn\/breast-cancer-wisconsin\n\n### Description\n\nI'm populating CSV datasets [here](https:\/\/huggingface.co\/scikit-learn), but the viewer is not enabled: it looks for a dataset loading script, and the datasets aren't in the queue either. 
\r\n\r\nYou can replicate the problem by simply uploading any CSV dataset.\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4531\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4530","id":1276884962,"node_id":"PR_kwDODunzps458n_S","number":4530,"title":"Add AudioFolder packaged loader","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"assignees":[{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)","also, it passes in `pyarrow_latest_WIN`","If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the Windows CI\r\n\r\n> inferring labels is not the default behavior (drop_labels is set to True in config)\r\n\r\nI think it is a missed opportunity to have a consistent API between imagefolder and audiofolder, since they do everything the same way. Can you give more details on why you think we should drop the labels by default?","Considering classification is not as common in audio as it is in images, I'm ok with having different config defaults as long as they are properly documented (check [Papers With Code](https:\/\/paperswithcode.com\/datasets) for stats and compare the classification numbers to the other tasks, do this for both modalities)\r\n\r\nAlso, WDYT about creating a generic folder loader that ImageFolder and AudioFolder then subclass, to avoid having to update both of them when there is something to update\/fix?","@lhoestq I think it doesn't change the API itself, it just doesn't infer labels by default, but you can **still** pass `drop_labels=False` to `load_dataset` and the labels will be inferred. 
\r\nSuppose that one has data structured as follows:\r\n```\r\ndata\/\r\n train\/\r\n audio\/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n test\/\r\n audio\/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n```\r\nIf users load this dataset with `load_dataset(\"audiofolder\", data_dir=\"data\")` (the most natural way), they will get a `label` feature that will always be equal to 0 (= \"audio\"). To mitigate this, they will always have to specify `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=True)` explicitly, and I believe that's not convenient. \r\n\r\nAt the same time, a `label` column can be added just as easily, by adding one argument: `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)`. As the classification task is not as common, I think it's fine that it requires a few more symbols in the code :D \r\n\r\nBut this definitely should be explained in the docs, which I've forgotten to update... I'll add this section soon.\r\n\r\nAlso, +1 to the generic loader; I will work on it. \r\n\r\n","If a metadata.jsonl file is present, then it doesn't have to infer the labels, I agree. Note that this is already the case for imagefolder ;) in your case `load_dataset(\"audiofolder\", data_dir=\"data\")` won't return labels!\r\n\r\nLabels are only inferred if there is no metadata.jsonl","Feel free to merge the `main` branch into yours after updating your fork of `datasets`: https:\/\/github.com\/huggingface\/datasets\/issues\/4629\r\n\r\nThis should fix some errors in the CI","@mariosasko could you please review this PR again? :)\r\n\r\nMost of the tests for AutoFolder (the base class for AudioFolder and ImageFolder) are now basically copied from Image\/AudioFolder (their tests are also almost identical) and adapted to test other methods. It should be refactored, but I think this is not that important for now and might be done in a future PR, wdyt?","@mariosasko thank you for the review! I'm sorry I accidentally asked for the review again, ignore it."],"created_at":1655729642000,"updated_at":1661179009000,"closed_at":1661178040000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"will close #3964\r\n\r\nAudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to True in the config); the option of inferring them is preserved, though (see the usage sketch below, just before the TODO list).\r\n\r\nA weird thing is happening with `test_data_files_with_metadata_and_archives` when `streaming` is `True`. 
Here is the log from the CI:\r\n```\r\n\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/features\/audio.py:237: in _decode_non_mp3_path_like\r\n array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/librosa\/util\/decorators.py:88: in inner_f\r\n return f(*args, **kwargs)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/librosa\/core\/audio.py:176: in load\r\n raise (exc)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/librosa\/core\/audio.py:155: in load\r\n context = sf.SoundFile(path)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/soundfile.py:629: in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/soundfile.py:1184: in _open\r\n \"Error opening {0!r}: \".format(self.name))\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nerr = 72\r\nprefix = \"Error opening : \"\r\n\r\n def _error_check(err, prefix=\"\"):\r\n \"\"\"Pretty-print a numerical error code if there is an error.\"\"\"\r\n if err != 0:\r\n err_str = _snd.sf_error_number(err)\r\n> raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))\r\nE RuntimeError: Error opening : Error in WAV file. No 'data' chunk marker.\r\n```\r\nI hadn't been able to reproduce this locally until I created the same test environment (I mean with `pip install .[tests]`) with python3.6. The same env but with python3.8 passes the test! I didn't manage to figure out what's wrong, I also tried simply to replace the test wav file and still got the same error. Versions of `soundfile`, `librosa` and `libsndfile` are identical. Might it be something with zip compression? Sounds weird but I don't have any other ideas... 
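\r\n\r\nFor reference, a short usage sketch of the default labeling behavior described above (`data` is a placeholder directory, laid out as in the review discussion):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# default in this PR: no label column is inferred from directory names\r\nds = load_dataset(\"audiofolder\", data_dir=\"data\")\r\n\r\n# opting back in: infer labels from the directory names\r\nds_labeled = load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)\r\n```\r\n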
\r\n \r\nTODO:\r\n\r\n- [x] align with #4622\r\n- [x] documentation\r\n- [x] tests for AutoFolder?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4530\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4530","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4530","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4530.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4530.patch","merged_at":1661178040000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4529","id":1276729303,"node_id":"I_kwDODunzps5MGVfX","number":4529,"title":"Ecoset","user":{"login":"DiGyt","id":34550289,"node_id":"MDQ6VXNlcjM0NTUwMjg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34550289?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DiGyt","html_url":"https:\/\/github.com\/DiGyt","followers_url":"https:\/\/api.github.com\/users\/DiGyt\/followers","following_url":"https:\/\/api.github.com\/users\/DiGyt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DiGyt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DiGyt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DiGyt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DiGyt\/orgs","repos_url":"https:\/\/api.github.com\/users\/DiGyt\/repos","events_url":"https:\/\/api.github.com\/users\/DiGyt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DiGyt\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it."],"created_at":1655721574000,"updated_at":1655828236000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Ecoset*\r\n- **Description:** *https:\/\/www.kietzmannlab.org\/ecoset\/*\r\n- **Paper:** *https:\/\/doi.org\/10.1073\/pnas.2011417118*\r\n- **Data:** *https:\/\/codeocean.com\/capsule\/9570390\/tree\/v1*\r\n- **Motivation:**\r\n\r\n**Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**.\r\n\r\nIt is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like:\r\n- more ecologically valid classes (e.g. 
not over-focussed on distinguishing different dog breeds)\r\n- less NSFW content\r\n- 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models.\r\n\r\n\r\nI am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https:\/\/discuss.huggingface.co\/t\/handling-large-image-datasets\/19373).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4529\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4528","id":1276679155,"node_id":"I_kwDODunzps5MGJPz","number":4528,"title":"Memory leak when iterating a Dataset","user":{"login":"NouamaneTazi","id":29777165,"node_id":"MDQ6VXNlcjI5Nzc3MTY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29777165?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NouamaneTazi","html_url":"https:\/\/github.com\/NouamaneTazi","followers_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/followers","following_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/orgs","repos_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/repos","events_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Is someone assigned to this issue?","The same issue is being debugged here: https:\/\/github.com\/huggingface\/datasets\/issues\/4883\r\n","Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimport os, psutil\r\n\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss\/2**20)\r\n\r\ncorpus = load_dataset(\"BeIR\/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus']\r\ncorpus = 
corpus.select(range(200000))\r\n\r\nprint(process.memory_info().rss\/2**20)\r\n\r\nbatch = None\r\n\r\nmem_before_start = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\n\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\n batch = corpus[i:i+step]\r\n import objgraph\r\n #objgraph.show_refs([batch])\r\n #objgraph.show_refs([corpus])\r\n #sys.exit()\r\n gc.collect()\r\n\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss \/ 2**20\r\n print(f\"{i:6d} {mem_after - mem_before:12.4f} {mem_after - mem_before_start:12.4f}\")\r\n\r\n```\r\n\r\nLet's run:\r\n\r\n```\r\n$ python ds2.py\r\n 0 36.5391 36.5391\r\n 20000 10.4609 47.0000\r\n 40000 5.9766 52.9766\r\n 60000 7.8906 60.8672\r\n 80000 6.0586 66.9258\r\n100000 8.4453 75.3711\r\n120000 6.7422 82.1133\r\n140000 8.5664 90.6797\r\n160000 5.7344 96.4141\r\n180000 8.3398 104.7539\r\n```\r\n\r\nYou can see the last column of total RSS memory keeps on growing, in MBs. The middle column shows by how much it grew during a single iteration of the repro script (20000 items)","@NouamaneTazi, please check my analysis here https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1242599722 so if you agree with my research this issue can be closed as well.\r\n\r\nI also made a suggestion on how to proceed to hunt for a real leak here https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1242600626\r\n\r\nyou may find this one to be useful as well https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1242597966","Amazing job! Thanks for taking time to debug this \ud83e\udd17\r\n\r\nOn my side, I tried to do some more research as well, but to no avail. https:\/\/github.com\/huggingface\/datasets\/issues\/4883#issuecomment-1243415957"],"created_at":1655719394000,"updated_at":1662972699000,"closed_at":1662972699000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nIt seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport gc\r\nimport logging\r\nimport time\r\nimport pyarrow\r\nfrom datasets import load_dataset\r\nfrom tqdm import trange\r\nimport os, psutil\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\nlogger = logging.getLogger(__name__)\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss) # output: 633507840 bytes\r\n\r\ncorpus = load_dataset(\"BeIR\/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus'] # or \"BeIR\/trec-covid\" for a smaller dataset\r\n\r\nprint(process.memory_info().rss) # output: 698601472 bytes\r\n\r\nlogger.info(\"Applying method to all examples in all splits\")\r\nfor i in trange(0, len(corpus), 1000):\r\n batch = corpus[i:i+1000]\r\n data = pyarrow.total_allocated_bytes()\r\n if data > 0:\r\n logger.info(f\"{i}\/{len(corpus)}: {data}\")\r\n\r\nprint(process.memory_info().rss) # output: 3788247040 bytes\r\n\r\ndel batch\r\ngc.collect()\r\n\r\nprint(process.memory_info().rss) # output: 3788247040 bytes\r\n\r\nlogger.info(\"Done...\")\r\ntime.sleep(100)\r\n```\r\n\r\n## Expected results\r\nLimited memory usage, and memory to be freed after processing\r\n\r\n## Actual results\r\nMemory leak\r\n![test](https:\/\/user-images.githubusercontent.com\/29777165\/174578276-f2c37e6c-b5d8-4985-b4d8-8413eb2b3241.png)\r\nYou can see how the memory allocation keeps increasing until it reaches a steady state when we hit the 
`time.sleep(100)`, which showcases that even the garbage collector couldn't free the allocated memory\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4528\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4527","id":1276583536,"node_id":"I_kwDODunzps5MFx5w","number":4527,"title":"Dataset Viewer issue for vadis\/sv-ident","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed, thanks!\r\n![Uploading Capture d\u2019e\u0301cran 2022-06-21 a\u0300 18.42.40.png\u2026]()\r\n\r\n"],"created_at":1655714862000,"updated_at":1655829766000,"closed_at":1655829765000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/vadis\/sv-ident\n\n### Description\n\nThe dataset preview does not work:\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The dataset does not exist.\r\n```\r\n\r\nHowever, the dataset is streamable and works locally:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"sv-ident.py\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[1]: \r\n{'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.',\r\n 'is_variable': 1,\r\n 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'],\r\n 'research_data': ['ZA5400'],\r\n 'doc_id': '73106',\r\n 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10',\r\n 'lang': 'en'}\r\n```\r\n\r\nCC: @e-tornike\n\n### 
Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4527\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4526","id":1276580185,"node_id":"I_kwDODunzps5MFxFZ","number":4526,"title":"split cache used when processing different split","user":{"login":"gpucce","id":32967787,"node_id":"MDQ6VXNlcjMyOTY3Nzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32967787?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gpucce","html_url":"https:\/\/github.com\/gpucce","followers_url":"https:\/\/api.github.com\/users\/gpucce\/followers","following_url":"https:\/\/api.github.com\/users\/gpucce\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gpucce\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gpucce\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gpucce\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gpucce\/orgs","repos_url":"https:\/\/api.github.com\/users\/gpucce\/repos","events_url":"https:\/\/api.github.com\/users\/gpucce\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gpucce\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)","Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. 
However, I believe that is expected behaviour; if so, I'll close the issue.\r\n\r\nOtherwise I will try to provide an MWE"],"created_at":1655714698000,"updated_at":1656425098000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n```\r\nds1 = load_dataset('squad', split='validation')\r\nds2 = load_dataset('squad', split='train')\r\nds1 = ds1.map(some_function)\r\nds2 = ds2.map(some_function)\r\nassert ds1 == ds2\r\n```\r\nThis happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through:\r\n\r\n```\r\nclass myDataModule:\r\n\r\n def train_dataloader(self):\r\n ds = load_dataset('squad', split='train')\r\n ds = ds.map(some_function)\r\n return [ds]\r\n\r\n def val_dataloader(self):\r\n ds = load_dataset('squad', split=\"validation\")\r\n ds = ds.map(some_function)\r\n return [ds]\r\n```\r\nI don't know if it depends on `pytorch_lightning` or `datasets`, but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue.\r\n\r\nIf this is not enough to replicate, I will try and provide an MWE; I don't have time now, so I thought I would open the issue first!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4526\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4525","id":1276491386,"node_id":"I_kwDODunzps5MFbZ6","number":4525,"title":"Out of memory error on workers while running Beam+Dataflow","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spawning of more workers\r\n- others?","@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 GB with 5 workers. Maybe these numbers will work for you as well.","Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink\/Nemo\/Samza\/Spark, Google Dataflow...? Thank you.","I asked my colleague who ran the code and he said Apache Beam.","@albertvillanova Since we have already processed the NQ dataset on our machines, can we upload it to datasets so the NQ PR can be merged?","Maybe @lhoestq can give a more accurate answer, as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway, I propose to continue this discussion on the dedicated PR for the Natural Questions dataset:\r\n- #4368","> I asked my colleague who ran the code and he said Apache Beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ","OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out-of-memory error: it only appears when the processing is distributed (in workers' memory), and DirectRunner does not distribute the processing (all is done in a single machine). "],"created_at":1655710092000,"updated_at":1656581637000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nWhile running the preprocessing of the natural_questions dataset (see PR #4368), there is an issue for the \"default\" config (train+dev files).\r\n\r\nPreviously we ran the preprocessing for the \"dev\" config (only dev files) with success.\r\n\r\nTrain data files are larger than dev ones and apparently workers run out of memory while processing them.\r\n\r\nAny help\/hint is welcome!\r\n\r\nError message:\r\n```\r\nData channel closed, unable to receive additional data from SDK sdk-0-0\r\n```\r\n\r\nInfo from the Diagnostics tab:\r\n```\r\nOut of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900\r\nThe worker VM had to shut down one or more processes due to lack of memory.\r\n```\r\n\r\n## Additional information\r\n\r\n### Stack trace\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/bin\/datasets-cli\", line 8, in \r\n sys.exit(main())\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/lib\/python3.9\/site-packages\/datasets\/commands\/datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/lib\/python3.9\/site-packages\/datasets\/commands\/run_beam.py\", line 127, in run\r\n builder.download_and_prepare(\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1389, in _download_and_prepare\r\n pipeline_results.wait_until_finish()\r\n File \"\/home\/albert_huggingface_co\/natural_questions\/venv\/lib\/python3.9\/site-packages\/apache_beam\/runners\/dataflow\/dataflow_runner.py\", line 1667, in wait_until_finish\r\n raise 
DataflowRuntimeException(\r\napache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:\r\nData channel closed, unable to receive additional data from SDK sdk-0-0\r\n```\r\n\r\n### Logs\r\n```\r\nError message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0\r\n\r\nWorkflow failed. Causes: S30:train\/ReadAllFromText\/ReadAllFiles\/Reshard\/ReshufflePerKey\/GroupByKey\/Read+train\/ReadAllFromText\/ReadAllFiles\/Reshard\/ReshufflePerKey\/GroupByKey\/GroupByWindow+train\/ReadAllFromText\/ReadAllFiles\/Reshard\/ReshufflePerKey\/FlatMap(restore_timestamps)+train\/ReadAllFromText\/ReadAllFiles\/Reshard\/RemoveRandomKeys+train\/ReadAllFromText\/ReadAllFiles\/ReadRange+train\/Map(_parse_example)+train\/Encode+train\/Count N. Examples+train\/Get values\/Values+train\/Save to parquet\/Write\/WriteImpl\/WindowInto(WindowIntoFn)+train\/Save to parquet\/Write\/WriteImpl\/WriteBundles+train\/Save to parquet\/Write\/WriteImpl\/Pair+train\/Save to parquet\/Write\/WriteImpl\/GroupByKey\/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https:\/\/cloud.google.com\/dataflow\/docs\/guides\/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service.\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4525\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4524","id":1275909186,"node_id":"I_kwDODunzps5MDNRC","number":4524,"title":"Downloading via Apache Pipeline, client cancelled 
(org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)","user":{"login":"dan-the-meme-man","id":45244059,"node_id":"MDQ6VXNlcjQ1MjQ0MDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45244059?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dan-the-meme-man","html_url":"https:\/\/github.com\/dan-the-meme-man","followers_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/followers","following_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/orgs","repos_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/repos","events_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dan-the-meme-man\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue.","As I continued working on this today, I came to suspect that it is in fact an out of memory issue - I have a few more notebooks that I've left running, and if they produce the same error, I will try to get the logs. In the meantime, if there's any chance that there is a repo out there with those three languages already as .arrow files, or if you know about how much memory would be needed to actually download those sets, please let me know!"],"created_at":1655595405000,"updated_at":1655771900000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). 
It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# bash commands\r\n!pip install datasets\r\n!pip install apache-beam[interactive]\r\n!pip install mwparserfromhell\r\n!pip install dill==0.3.5.1\r\n!pip install requests==2.23.0\r\n\r\n# imports\r\nimport os\r\nfrom datasets import load_dataset\r\nimport apache_beam as beam\r\nimport mwparserfromhell\r\nfrom google.colab import drive\r\nimport dill\r\nimport requests\r\n\r\n# mount drive\r\ndrive_dir = os.path.join(os.getcwd(), 'drive')\r\ndrive.mount(drive_dir)\r\n\r\n# confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands\r\nprint(dill.__version__)\r\nprint(requests.__version__)\r\n\r\nlang = 'es' # or 'ru' or 'ceb' - these are the ones causing the issue\r\nlang_dir = os.path.join(drive_dir, 'path\/to\/my\/folder', lang)\r\n\r\nif not os.path.exists(lang_dir):\r\n x = None\r\n x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',\r\n split='train')\r\n x.save_to_disk(lang_dir)\r\n```\r\n\r\n## Expected results\r\nAlthough some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error.\r\n\r\n## Actual results\r\nTraceback below:\r\n```\r\nException in thread run_worker_3-1:\r\nTraceback (most recent call last):\r\n File \"\/usr\/lib\/python3.7\/threading.py\", line 926, in _bootstrap_inner\r\n self.run()\r\n File \"\/usr\/lib\/python3.7\/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 234, in run\r\n for work_request in self._control_stub.Control(get_responses()):\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 426, in __next__\r\n return self._next()\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 826, in _next\r\n raise self\r\ngrpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:\r\n\tstatus = StatusCode.UNAVAILABLE\r\n\tdetails = \"Socket closed\"\r\n\tdebug_error_string = \"{\"created\":\"@1655593643.871830638\",\"description\":\"Error received from peer ipv4:127.0.0.1:44441\",\"file\":\"src\/core\/lib\/surface\/call.cc\",\"file_line\":952,\"grpc_message\":\"Socket closed\",\"grpc_status\":14}\"\r\n>\r\n\r\nTraceback (most recent call last):\r\n File \"apache_beam\/runners\/common.py\", line 1198, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 426, in __getitem__\r\n self._cache[target_window] = self._side_input_data.view_fn(raw_view)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 391, in \r\n lambda iterable: from_runtime_iterable(iterable, view_options))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 512, in _from_runtime_iterable\r\n head = 
list(itertools.islice(it, 2))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1228, in _lazy_iterator\r\n self._underlying.get_raw(state_key, continuation_token))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1019, in get_raw\r\n continuation_token=continuation_token)))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1060, in _blocking_request\r\n raise RuntimeError(response.error)\r\nRuntimeError: Unknown process bundle instruction id '26'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 267, in _execute\r\n response = task()\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 340, in \r\n lambda: self.create_worker().do_instruction(request), request)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 581, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 618, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 996, in process_bundle\r\n element.data)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 221, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 346, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 348, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 707, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 708, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 1200, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 1198, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 426, in __getitem__\r\n self._cache[target_window] = self._side_input_data.view_fn(raw_view)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 391, in \r\n lambda iterable: from_runtime_iterable(iterable, view_options))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 512, in _from_runtime_iterable\r\n head = list(itertools.islice(it, 2))\r\n File 
\"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1228, in _lazy_iterator\r\n self._underlying.get_raw(state_key, continuation_token))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1019, in get_raw\r\n continuation_token=continuation_token)))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1060, in _blocking_request\r\n raise RuntimeError(response.error)\r\nRuntimeError: Unknown process bundle instruction id '26' [while running 'train\/Save to parquet\/Write\/WriteImpl\/WriteBundles']\r\n\r\nERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is\r\nTraceback (most recent call last):\r\n File \"apache_beam\/runners\/common.py\", line 1198, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 426, in __getitem__\r\n self._cache[target_window] = self._side_input_data.view_fn(raw_view)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 391, in \r\n lambda iterable: from_runtime_iterable(iterable, view_options))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 512, in _from_runtime_iterable\r\n head = list(itertools.islice(it, 2))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1228, in _lazy_iterator\r\n self._underlying.get_raw(state_key, continuation_token))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1019, in get_raw\r\n continuation_token=continuation_token)))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1060, in _blocking_request\r\n raise RuntimeError(response.error)\r\nRuntimeError: Unknown process bundle instruction id '26'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 267, in _execute\r\n response = task()\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 340, in \r\n lambda: self.create_worker().do_instruction(request), request)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 581, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 618, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 996, in process_bundle\r\n element.data)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 221, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 346, in 
apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 348, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 707, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 708, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 1200, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 1198, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 426, in __getitem__\r\n self._cache[target_window] = self._side_input_data.view_fn(raw_view)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 391, in \r\n lambda iterable: from_runtime_iterable(iterable, view_options))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/pvalue.py\", line 512, in _from_runtime_iterable\r\n head = list(itertools.islice(it, 2))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1228, in _lazy_iterator\r\n self._underlying.get_raw(state_key, continuation_token))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1019, in get_raw\r\n continuation_token=continuation_token)))\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 1060, in _blocking_request\r\n raise RuntimeError(response.error)\r\nRuntimeError: Unknown process bundle instruction id '26' [while running 'train\/Save to parquet\/Write\/WriteImpl\/WriteBundles']\r\n\r\n\r\nERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled\r\nERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/data_plane.py\", line 634, in _read_inputs\r\n for elements in elements_iterator:\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 426, in __next__\r\n return self._next()\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 826, in _next\r\n raise self\r\ngrpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:\r\n\tstatus = StatusCode.CANCELLED\r\n\tdetails = \"Multiplexer hanging up\"\r\n\tdebug_error_string = \"{\"created\":\"@1655593654.436885887\",\"description\":\"Error received from peer ipv4:127.0.0.1:43263\",\"file\":\"src\/core\/lib\/surface\/call.cc\",\"file_line\":952,\"grpc_message\":\"Multiplexer hanging up\",\"grpc_status\":1}\"\r\n>\r\nException in thread read_grpc_client_inputs:\r\nTraceback (most recent call last):\r\n File 
\"\/usr\/lib\/python3.7\/threading.py\", line 926, in _bootstrap_inner\r\n self.run()\r\n File \"\/usr\/lib\/python3.7\/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/data_plane.py\", line 651, in \r\n target=lambda: self._read_inputs(elements_iterator),\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/worker\/data_plane.py\", line 634, in _read_inputs\r\n for elements in elements_iterator:\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 426, in __next__\r\n return self._next()\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/grpc\/_channel.py\", line 826, in _next\r\n raise self\r\ngrpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:\r\n\tstatus = StatusCode.CANCELLED\r\n\tdetails = \"Multiplexer hanging up\"\r\n\tdebug_error_string = \"{\"created\":\"@1655593654.436885887\",\"description\":\"Error received from peer ipv4:127.0.0.1:43263\",\"file\":\"src\/core\/lib\/surface\/call.cc\",\"file_line\":952,\"grpc_message\":\"Multiplexer hanging up\",\"grpc_status\":1}\"\r\n>\r\n\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n[\/tmp\/ipykernel_219\/3869142325.py](https:\/\/localhost:8080\/#) in \r\n 18 x = None\r\n 19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',\r\n---> 20 split='train')\r\n 21 x.save_to_disk(lang_dir)\r\n\r\n3 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/apache_beam\/runners\/portability\/portable_runner.py](https:\/\/localhost:8080\/#) in wait_until_finish(self, duration)\r\n 604 \r\n 605 if self._runtime_exception:\r\n--> 606 raise self._runtime_exception\r\n 607 \r\n 608 return self._state\r\n\r\nRuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4524\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4523","id":1275002639,"node_id":"PR_kwDODunzps452hgh","number":4523,"title":"Update download url and improve card of `cats_vs_dogs` 
dataset","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655470784000,"updated_at":1655821406000,"closed_at":1655820788000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Improve the download URL (reported here: https:\/\/huggingface.co\/datasets\/cats_vs_dogs\/discussions\/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4523\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4523","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4523","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4523.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4523.patch","merged_at":1655820788000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4522","id":1274929328,"node_id":"I_kwDODunzps5L_eCw","number":4522,"title":"Try to reduce the number of datasets that require manual 
download","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1655466123000,"updated_at":1655466768000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to \u2248 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. 
Then, we will mostly be left with datasets with restricted access, which we can ignore\r\n\r\nfrom https:\/\/github.com\/huggingface\/datasets-server\/issues\/12#issuecomment-1026920432","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4522\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4521","id":1274919437,"node_id":"I_kwDODunzps5L_boN","number":4521,"title":"Datasets method `.map` not hashing","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fix posted: https:\/\/github.com\/huggingface\/datasets\/issues\/4506#issuecomment-1157417219","Didn't realize it's a bug when I asked the question yesterday! Feel free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps:\/\/stackoverflow.com\/questions\/72664827\/can-pickle-dill-foo-but-not-lambda-x-foox","Thanks, @nalzok. 
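As a quick sanity check (a minimal sketch reusing `datasets.fingerprint.Hasher`; the lambda is a stand-in for whatever function is passed to `.map`):\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\ntry:\r\n    # with a working dill, this prints a deterministic hex digest\r\n    print(Hasher.hash(lambda batch: batch))\r\nexcept Exception as e:\r\n    # if hashing raises, .map falls back to a random fingerprint and the\r\n    # cache is effectively disabled (the warning shown in this issue)\r\n    print(\"hashing failed:\", e)\r\n```\r\n\r\n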
That works for me:\r\n\r\n`pip install \"dill<0.3.5\"`"],"created_at":1655465470000,"updated_at":1659614896000,"closed_at":1656422585000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nDatasets method `.map` not hashing, even with an empty no-op function\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# download 9MB dummy dataset\r\nds = load_dataset(\"hf-internal-testing\/librispeech_asr_dummy\", \"clean\")\r\n\r\ndef prepare_dataset(batch):\r\n return(batch)\r\n\r\nds = ds.map(\r\n prepare_dataset,\r\n num_proc=1,\r\n desc=\"preprocess train dataset\",\r\n)\r\n```\r\n\r\n## Expected results\r\nHashed and cached dataset preprocessing\r\n\r\n## Actual results\r\nDoes not hash properly:\r\n```\r\nParameter 'function'= of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.12\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\ncc @lhoestq \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4521\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4520","id":1274879180,"node_id":"I_kwDODunzps5L_RzM","number":4520,"title":"Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in 
`.map`","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine","Thank you!"],"created_at":1655462837000,"updated_at":1656427637000,"closed_at":1656425069000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/speech-recognition\/run_speech_recognition_ctc.py)). Since dataclasses cannot be hashed, one has to define separate variables prior to passing dataclass attributes to the `.map` method:\r\n```python\r\nphoneme_language = data_args.phoneme_language\r\n```\r\nin the example https:\/\/github.com\/huggingface\/transformers\/blob\/3c7e56fbb11f401de2528c1dcf0e282febc031cd\/examples\/pytorch\/speech-recognition\/run_speech_recognition_ctc.py#L603-L630\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom dataclasses import dataclass, field\r\nfrom datasets.fingerprint import Hasher\r\n\r\n@dataclass\r\nclass DataTrainingArguments:\r\n \"\"\"\r\n Arguments pertaining to what data we are going to input our model for training and eval.\r\n \"\"\"\r\n\r\n phoneme_language: str = field(\r\n default=None, metadata={\"help\": \"The name of the phoneme language to use.\"}\r\n )\r\n\r\ndata_args = DataTrainingArguments(phoneme_language =\"foo\")\r\n\r\nHasher.hash(data_args)\r\n\r\nphoneme_language = data_args.phoneme_language\r\n\r\nHasher.hash(phoneme_language)\r\n```\r\n\r\n## Expected results\r\nA hash.\r\n## Actual results\r\n
\r\n Traceback <\/summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\nInput In [1], in ()\r\n 10 phoneme_language: str = field(\r\n 11 default=None, metadata={\"help\": \"The name of the phoneme language to use.\"}\r\n 12 )\r\n 14 data_args = DataTrainingArguments(phoneme_language =\"foo\")\r\n---> 16 Hasher.hash(data_args)\r\n 18 phoneme_language = data_args. phoneme_language\r\n 20 Hasher.hash(phoneme_language)\r\n\r\nFile ~\/datasets\/src\/datasets\/fingerprint.py:237, in Hasher.hash(cls, value)\r\n 235 return cls.dispatch[type(value)](cls, value)\r\n 236 else:\r\n--> 237 return cls.hash_default(value)\r\n\r\nFile ~\/datasets\/src\/datasets\/fingerprint.py:230, in Hasher.hash_default(cls, value)\r\n 228 @classmethod\r\n 229 def hash_default(cls, value: Any) -> str:\r\n--> 230 return cls.hash_bytes(dumps(value))\r\n\r\nFile ~\/datasets\/src\/datasets\/utils\/py_utils.py:564, in dumps(obj)\r\n 562 file = StringIO()\r\n 563 with _no_cache_fields(obj):\r\n--> 564 dump(obj, file)\r\n 565 return file.getvalue()\r\n\r\nFile ~\/datasets\/src\/datasets\/utils\/py_utils.py:539, in dump(obj, file)\r\n 537 def dump(obj, file):\r\n 538 \"\"\"pickle an object to a file\"\"\"\r\n--> 539 Pickler(file, recurse=True).dump(obj)\r\n 540 return\r\n\r\nFile ~\/hf\/lib\/python3.8\/site-packages\/dill\/_dill.py:620, in Pickler.dump(self, obj)\r\n 618 raise PicklingError(msg)\r\n 619 else:\r\n--> 620 StockPickler.dump(self, obj)\r\n 621 return\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:487, in _Pickler.dump(self, obj)\r\n 485 if self.proto >= 4:\r\n 486 self.framer.start_framing()\r\n--> 487 self.save(obj)\r\n 488 self.write(STOP)\r\n 489 self.framer.end_framing()\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id)\r\n 599 raise PicklingError(\"Tuple returned by %s must have \"\r\n 600 \"two to six elements\" % reduce)\r\n 602 # Save the reduce() output and finally memoize the object\r\n--> 603 self.save_reduce(obj=obj, *rv)\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)\r\n 684 raise PicklingError(\r\n 685 \"args[0] from __newobj__ args has the wrong class\")\r\n 686 args = args[1:]\r\n--> 687 save(cls)\r\n 688 save(args)\r\n 689 write(NEWOBJ)\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)\r\n 558 f = self.dispatch.get(t)\r\n 559 if f is not None:\r\n--> 560 f(self, obj) # Call unbound method with explicit self\r\n 561 return\r\n 563 # Check private dispatch table if any, or else\r\n 564 # copyreg.dispatch_table\r\n\r\nFile ~\/hf\/lib\/python3.8\/site-packages\/dill\/_dill.py:1838, in save_type(pickler, obj, postproc_list)\r\n 1836 postproc_list = []\r\n 1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name)))\r\n-> 1838 _save_with_postproc(pickler, (_create_type, (\r\n 1839 type(obj), obj.__name__, obj.__bases__, _dict\r\n 1840 )), obj=obj, postproc_list=postproc_list)\r\n 1841 log.info(\"# %s\" % _t)\r\n 1842 else:\r\n\r\nFile ~\/hf\/lib\/python3.8\/site-packages\/dill\/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list)\r\n 1137 pickler._postproc[id(obj)] = postproc_list\r\n 1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations\r\n-> 1140 pickler.save_reduce(*reduction, obj=obj)\r\n 1142 if is_pickler_dill:\r\n 1143 # pickler.x -= 1\r\n 
1144 # print(pickler.x*' ', 'pop', obj, id(obj))\r\n 1145 postproc = pickler._postproc.pop(id(obj))\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)\r\n 690 else:\r\n 691 save(func)\r\n--> 692 save(args)\r\n 693 write(REDUCE)\r\n 695 if obj is not None:\r\n 696 # If the object is already in the memo, this means it is\r\n 697 # recursive. In this case, throw away everything we put on the\r\n 698 # stack, and fetch the object back from the memo.\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)\r\n 558 f = self.dispatch.get(t)\r\n 559 if f is not None:\r\n--> 560 f(self, obj) # Call unbound method with explicit self\r\n 561 return\r\n 563 # Check private dispatch table if any, or else\r\n 564 # copyreg.dispatch_table\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:901, in _Pickler.save_tuple(self, obj)\r\n 899 write(MARK)\r\n 900 for element in obj:\r\n--> 901 save(element)\r\n 903 if id(obj) in memo:\r\n 904 # Subtle. d was not in memo when we entered save_tuple(), so\r\n 905 # the process of saving the tuple's elements must have saved\r\n (...)\r\n 909 # could have been done in the \"for element\" loop instead, but\r\n 910 # recursive tuples are a rare thing.\r\n 911 get = self.get(memo[id(obj)][0])\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)\r\n 558 f = self.dispatch.get(t)\r\n 559 if f is not None:\r\n--> 560 f(self, obj) # Call unbound method with explicit self\r\n 561 return\r\n 563 # Check private dispatch table if any, or else\r\n 564 # copyreg.dispatch_table\r\n\r\nFile ~\/hf\/lib\/python3.8\/site-packages\/dill\/_dill.py:1251, in save_module_dict(pickler, obj)\r\n 1248 if is_dill(pickler, child=False) and pickler._session:\r\n 1249 # we only care about session the first pass thru\r\n 1250 pickler._first_pass = False\r\n-> 1251 StockPickler.save_dict(pickler, obj)\r\n 1252 log.info(\"# D2\")\r\n 1253 return\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:971, in _Pickler.save_dict(self, obj)\r\n 968 self.write(MARK + DICT)\r\n 970 self.memoize(obj)\r\n--> 971 self._batch_setitems(obj.items())\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:997, in _Pickler._batch_setitems(self, items)\r\n 995 for k, v in tmp:\r\n 996 save(k)\r\n--> 997 save(v)\r\n 998 write(SETITEMS)\r\n 999 elif n:\r\n\r\nFile \/usr\/lib\/python3.8\/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)\r\n 558 f = self.dispatch.get(t)\r\n 559 if f is not None:\r\n--> 560 f(self, obj) # Call unbound method with explicit self\r\n 561 return\r\n 563 # Check private dispatch table if any, or else\r\n 564 # copyreg.dispatch_table\r\n\r\nFile ~\/datasets\/src\/datasets\/utils\/py_utils.py:862, in save_function(pickler, obj)\r\n 859 if state_dict:\r\n 860 state = state, state_dict\r\n--> 862 dill._dill._save_with_postproc(\r\n 863 pickler,\r\n 864 (\r\n 865 dill._dill._create_function,\r\n 866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure),\r\n 867 state,\r\n 868 ),\r\n 869 obj=obj,\r\n 870 postproc_list=postproc_list,\r\n 871 )\r\n 872 else:\r\n 873 closure = obj.func_closure\r\n\r\nFile ~\/hf\/lib\/python3.8\/site-packages\/dill\/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list)\r\n 1151 dest, source = reduction[1]\r\n 1152 if source:\r\n-> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0]))\r\n 1154 pickler._batch_setitems(iter(source.items()))\r\n 1155 else:\r\n 
1156 # Updating with an empty dictionary. Same as doing nothing.\r\n\r\nKeyError: 140434581781568\r\n```\r\n\r\n<\/details>\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.3.dev0\r\n- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\ncc @lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4520\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4519","id":1274110623,"node_id":"PR_kwDODunzps45zhqa","number":4519,"title":"Create new sections for audio and vision in guides","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Ready for review!\r\n\r\nThe `toctree` is a bit longer now with the sections. I think if we keep the audio\/vision\/text\/dataset repository sections collapsed by default, and keep the general usage expanded, it may look a little cleaner and not as overwhelming. Let me know what you think! \ud83d\ude04 "],"created_at":1655415504000,"updated_at":1657208197000,"closed_at":1657207498000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. 
It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - while keeping our docs information architecture. \r\n\r\nSome other changes include:\r\n\r\n- ~Experimented with decorating text with some CSS to highlight guides specific to each modality. Hopefully, it'll be easier for users to find and realize that these different docs exist!~ Will experiment with this in a different PR.\r\n- Added deprecation warning for Metrics and redirect to Evaluate.\r\n- Updated `set_format` section to recommend using the new `to_tf_dataset` function if you need to convert to a TensorFlow dataset.\r\n- Reorganized `toctree` to nest general usage, audio, vision, and text sections under the how-to guides.\r\n- A quick review and edit to the Load and Process docs for clarity.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4519\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4519","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4519","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4519.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4519.patch","merged_at":1657207498000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4518","id":1274010628,"node_id":"PR_kwDODunzps45zMnB","number":4518,"title":"Patch tests for hfh v0.8.0","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655408732000,"updated_at":1655482557000,"closed_at":1655481967000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR patches testing utilities that would otherwise fail with hfh 
v0.8.0.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4518\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4518","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4518","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4518.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4518.patch","merged_at":1655481967000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4517","id":1273960476,"node_id":"PR_kwDODunzps45zBl0","number":4517,"title":"Add tags for task_ids:summarization-* and task_categories:summarization*","user":{"login":"hobson","id":292855,"node_id":"MDQ6VXNlcjI5Mjg1NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/292855?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hobson","html_url":"https:\/\/github.com\/hobson","followers_url":"https:\/\/api.github.com\/users\/hobson\/followers","following_url":"https:\/\/api.github.com\/users\/hobson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hobson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hobson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hobson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hobson\/orgs","repos_url":"https:\/\/api.github.com\/users\/hobson\/repos","events_url":"https:\/\/api.github.com\/users\/hobson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hobson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Associated community discussion is [here](https:\/\/huggingface.co\/datasets\/aeslc\/discussions\/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https:\/\/arxiv.org\/pdf\/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in this pull request. 
The _summarization_ task is mentioned as a related task.","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655405545000,"updated_at":1657293263000,"closed_at":1657292551000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The YAML header at the top of the README.md file was edited to add task tags because I couldn't find the existing tags in the JSON.\r\nA separate pull request will modify dataset_infos.json to add these tags.\r\n\r\nThe Enron dataset (dataset id aeslc) is only tagged with:\r\n\r\n arxiv:1906.03497\r\n languages:en\r\n pretty_name:AESLC\r\n\r\nUsing the email subject_line field as a label or target variable, it is possible to create models for the following task_ids (in order of relevance):\r\n\r\n 'task_ids:summarization'\r\n 'task_ids:summarization-other-conversations-summarization'\r\n \"task_ids:other-other-query-based-multi-document-summarization\"\r\n 'task_ids:summarization-other-aspect-based-summarization'\r\n 'task_ids:summarization--other-headline-generation'\r\n\r\nThe subject might also be used for the task_category \"task_categories:summarization\"\r\n\r\nE-mail chains might be used for the task category \"task_categories:dialogue-system\"","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4517\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4517","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4517","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4517.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4517.patch","merged_at":1657292551000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4516","id":1273825640,"node_id":"PR_kwDODunzps45ykYX","number":4516,"title":"Fix hashing for python
3.9","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","What do you think @albertvillanova ?"],"created_at":1655397751000,"updated_at":1656423226000,"closed_at":1656422586000,"author_association":"MEMBER","active_lock_reason":null,"body":"In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function.\r\n\r\nTherefore the test at `tests\/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9\r\n\r\nTo make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic.\r\n\r\nRight now we don't have a CI to test python 3.9 but we should definitely have one. 
For this PR in particular I ran the tests locally using python 3.9 and they're passing now.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4506","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516\/reactions","total_count":4,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":4,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4516\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4516","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4516","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4516.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4516.patch","merged_at":1656422585000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4515","id":1273626131,"node_id":"PR_kwDODunzps45x5mB","number":4515,"title":"Add uppercased versions of image file extensions for automatic module inference","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655388889000,"updated_at":1655400113000,"closed_at":1655399501000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds the uppercased versions of the image file extensions to the supported extensions. \r\n\r\nAnother approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO unless they are commonly used (as they are in the vision domain)\r\n\r\nNote that there is a slight discrepancy between the image file resolution and `imagefolder` as the latter calls `.lower()` on file extensions leading to some image file extensions being ignored by the resolution but not by the loader (e.g. `pNg`). Such extensions should also be discouraged, so I'm ignoring that case too.\r\n\r\nFix #4514. 
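\r\n\r\nFor reference, a minimal sketch of the idea (the extension list below is a truncated, illustrative subset, not the full list from the source):\r\n\r\n```python\r\n# Extend the supported image extensions with their uppercased variants\r\n# so that e.g. both \".jpeg\" and \".JPEG\" are resolved.\r\nIMAGE_EXTENSIONS = [\".blp\", \".bmp\", \".gif\", \".jpg\", \".jpeg\", \".png\", \".tiff\"]\r\nIMAGE_EXTENSIONS.extend([ext.upper() for ext in IMAGE_EXTENSIONS])\r\n```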
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4515\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4515","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4515","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4515.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4515.patch","merged_at":1655399500000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4514","id":1273505230,"node_id":"I_kwDODunzps5L6CXO","number":4514,"title":"Allow .JPEG as a file extension","user":{"login":"DiGyt","id":34550289,"node_id":"MDQ6VXNlcjM0NTUwMjg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34550289?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DiGyt","html_url":"https:\/\/github.com\/DiGyt","followers_url":"https:\/\/api.github.com\/users\/DiGyt\/followers","following_url":"https:\/\/api.github.com\/users\/DiGyt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DiGyt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DiGyt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DiGyt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DiGyt\/orgs","repos_url":"https:\/\/api.github.com\/users\/DiGyt\/repos","events_url":"https:\/\/api.github.com\/users\/DiGyt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DiGyt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, thanks for reporting! I've opened a PR with the fix.","Wow, that was quick! Thank you very much \ud83d\ude4f "],"created_at":1655382980000,"updated_at":1655713126000,"closed_at":1655399500000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. 
As the naming convention .JPEG is used in important datasets such as ImageNet, it would be great if corresponding extensions like .JPEG or .JPG were allowed.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# use bash to create 2 sham datasets with jpeg and JPEG ext\r\n!mkdir dataset_a\r\n!mkdir dataset_b\r\n!wget https:\/\/upload.wikimedia.org\/wikipedia\/commons\/7\/71\/Dsc_%28179253513%29.jpeg -O example_img.jpeg\r\n!cp example_img.jpeg .\/dataset_a\/\r\n!mv example_img.jpeg .\/dataset_b\/example_img.JPEG\r\n\r\nfrom datasets import load_dataset\r\n\r\n# working\r\ndf1 = load_dataset(\".\/dataset_a\", ignore_verifications=True)\r\n\r\n# not working\r\ndf2 = load_dataset(\".\/dataset_b\", ignore_verifications=True)\r\n\r\n# show\r\nprint(df1, df2)\r\n```\r\n\r\n## Expected results\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 1\r\n })\r\n}) DatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 1\r\n })\r\n})\r\n```\r\n\r\n## Actual results\r\n```\r\nFileNotFoundError: Unable to resolve any data file that matches '['**']' at \/..PATH..\/dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4514\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4513","id":1273450338,"node_id":"PR_kwDODunzps45xTqv","number":4513,"title":"Update Google Cloud Storage documentation and add Azure Blob Storage
example","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/36760800\/174254663-b68d28d2-eae1-40f3-8695-dc4b0c3b479a.png)\r\n","Comments are ignored by doctest, so I think we can remove the `>>>` :)","Cool I'll remove those now \ud83d\udc4d\ud83c\udffb","Sure @lhoestq, I just kept that structure as that was the more similar one to the one that was already there, but we can go with that approach, just let me know whether I should change the headers so as to leave all those providers in the same level (`h2`). Thanks!"],"created_at":1655379969000,"updated_at":1656003911000,"closed_at":1656003299000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"While I was going through the \ud83e\udd17 Datasets documentation of the Cloud storage filesystems at https:\/\/huggingface.co\/docs\/datasets\/filesystems, I realized that the Google Cloud Storage documentation could be improved e.g. 
a bullet point says \"Load your dataset\" when the actual call is to \"Save your dataset\", an in-line code comment mentions \"s3 bucket\" instead of \"gcs bucket\", and some more in-line comments could be included.\r\n\r\nAlso, I think that mixing Google Cloud Storage documentation with AWS S3's one was a little bit confusing, so I moved all those to the end of the document under an h2 tab named \"Other filesystems\", with an h3 for \"Google Cloud Storage\".\r\n\r\nBesides that, I was recently working with Azure Blob Storage and found out that [adlfs](https:\/\/github.com\/fsspec\/adlfs) is common to both Azure Blob Storage and Azure DataLake Storage, so I decided to group those under the same row in the column of supported filesystems; I also updated its URL, even though the redirect was working fine.\r\n\r\nI also took the chance to add a small documentation entry for Azure Blob Storage, like the one for Google Cloud Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers.\r\n\r\nLet me know if you're OK with these changes, or whether you want me to roll back some of those! :hugs:","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4513\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4513","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4513","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4513.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4513.patch","merged_at":1656003299000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4512","id":1273378129,"node_id":"PR_kwDODunzps45xEDN","number":4512,"title":"Add links to vision tasks scripts in ADD_NEW_DATASET
template","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The CI failure is unrelated to the PR's changes. Merging."],"created_at":1655375735000,"updated_at":1657289270000,"closed_at":1657288583000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add links to vision dataset scripts in the ADD_NEW_DATASET template. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4512\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4512","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4512","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4512.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4512.patch","merged_at":1657288583000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4511","id":1273336874,"node_id":"PR_kwDODunzps45w7RN","number":4511,"title":"Support all negative values in 
ClassLabel","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](https:\/\/pytorch.org\/docs\/stable\/generated\/torch.nn.NLLLoss.html?highlight=nllloss#torch.nn.NLLLoss), so switching labels to -1 leads to index errors using Trainer defaults.\r\n\r\nAs a workaround, I'm using master branch directly (`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master` for anyone who needs to do the same) until this gets released.","The new release `2.4` fixes the issue, feel free to update `datasets` :) \r\n```\r\npip install -U datasets\r\n```"],"created_at":1655373579000,"updated_at":1659024207000,"closed_at":1655387647000,"author_association":"MEMBER","active_lock_reason":null,"body":"We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). 
This is a regression from `datasets` 2.3.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4508","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4511\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4511","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4511","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4511.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4511.patch","merged_at":1655387647000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4510","id":1273260396,"node_id":"PR_kwDODunzps45wq6o","number":4510,"title":"Add regression test for `ArrowWriter.write_batch` when batch is empty","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value."],"created_at":1655369631000,"updated_at":1655383082000,"closed_at":1655382499000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch`: the if-statement that should handle empty batches, as described in the function's docstring (\"Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types.\"), does not handle `writer.write_batch({})` properly, and an error is triggered instead.\r\n\r\nAlso, if we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix,
the test will fail when trying to write an empty batch, as follows:\r\n\r\n```\r\n=================================================================================== short test summary info ===================================================================================\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal\r\nFAILED tests\/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal\r\n======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s =========================================================================\r\n```\r\n\r\nSo the batch is not ignored when empty, as `batch_examples={}` won't match the condition `if batch_examples: ...`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4510\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4510","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4510","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4510.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4510.patch","merged_at":1655382499000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4509","id":1273227760,"node_id":"PR_kwDODunzps45wkDl","number":4509,"title":"Support skipping Parquet to Arrow conversion when using
Beam","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4509). All of your documentation changes will be reflected on that endpoint.","When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to arrow"],"created_at":1655367938000,"updated_at":1660556571000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4509\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4509","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4509","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4509.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4509.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4508","id":1272718921,"node_id":"I_kwDODunzps5L3CZJ","number":4508,"title":"cast_storage method from 
datasets.features","user":{"login":"romainremyb","id":67968596,"node_id":"MDQ6VXNlcjY3OTY4NTk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/67968596?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/romainremyb","html_url":"https:\/\/github.com\/romainremyb","followers_url":"https:\/\/api.github.com\/users\/romainremyb\/followers","following_url":"https:\/\/api.github.com\/users\/romainremyb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/romainremyb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/romainremyb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/romainremyb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/romainremyb\/orgs","repos_url":"https:\/\/api.github.com\/users\/romainremyb\/repos","events_url":"https:\/\/api.github.com\/users\/romainremyb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/romainremyb\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). 
The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type is unbounded. ","I'm fine with re-adding support for all negative values for unknown\/missing labels @mariosasko, wdyt ?"],"created_at":1655326042000,"updated_at":1655387647000,"closed_at":1655387647000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when I run it locally on an old version of datasets.\r\n\r\n## Steps to reproduce the bug\r\nSteps are:\r\n- load whatever dataset\r\n- write a preprocessing function such as \"tokenize_and_align_labels\" written in https:\/\/huggingface.co\/docs\/transformers\/tasks\/token_classification\r\n- map the function on the dataset and get \"ValueError: Class label -100 less than -1\" from the cast_storage method in datasets.features\r\n\r\n# Sample code to reproduce the bug\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ndef tokenize_and_align_labels(examples):\r\n    tokenized_inputs = tokenizer(examples[\"tokens\"], truncation=True, is_split_into_words=True, max_length=38, padding=\"max_length\")\r\n\r\n    labels = []\r\n    for i, label in enumerate(examples[\"labels\"]):\r\n        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.\r\n        previous_word_idx = None\r\n        label_ids = []\r\n        for word_idx in word_ids:  # Set the special tokens to -100.\r\n            if word_idx is None:\r\n                label_ids.append(-100)\r\n            elif word_idx != previous_word_idx:  # Only label the first token of a given word.\r\n                label_ids.append(label[word_idx])\r\n            else:\r\n                label_ids.append(-100)\r\n            previous_word_idx = word_idx\r\n        labels.append(label_ids)\r\n\r\n    tokenized_inputs[\"labels\"] = labels\r\n    return tokenized_inputs\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\r\n# `dataset` is a token-classification dataset loaded beforehand\r\ndt = dataset.map(tokenize_and_align_labels, batched=True)\r\n```\r\n\r\n## Expected results\r\nNew dataset objects should load and map as they do on older versions.\r\n\r\n## Actual results\r\n\"ValueError: Class label -100 less than -1\" from the cast_storage method in datasets.features\r\n\r\n## Environment info\r\nEverything works fine on older installations of datasets\/transformers.\r\n\r\nThe issue arises when installing datasets on Google Colab under Python 3.7.\r\nI can't manage to find the exact output you're requiring, but the version printed is datasets-2.3.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4508\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4507","id":1272615932,"node_id":"I_kwDODunzps5L2pP8","number":4507,"title":"How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script","user":{"login":"liyucheng09","id":27999909,"node_id":"MDQ6VXNlcjI3OTk5OTA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27999909?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/liyucheng09","html_url":"https:\/\/github.com\/liyucheng09","followers_url":"https:\/\/api.github.com\/users\/liyucheng09\/followers","following_url":"https:\/\/api.github.com\/users\/liyucheng09\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/liyucheng09\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/liyucheng09\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/liyucheng09\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/liyucheng09\/orgs","repos_url":"https:\/\/api.github.com\/users\/liyucheng09\/repos","events_url":"https:\/\/api.github.com\/users\/liyucheng09\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/liyucheng09\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.","@albertvillanova Thanks! I can't believe I didn't know this feature till now."],"created_at":1655319394000,"updated_at":1655376008000,"closed_at":1655376008000,"author_association":"NONE","active_lock_reason":null,"body":"If the dataset does not need splits, i.e., no training and validation split, more like a table. 
how can I let the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?\r\n\r\nOr, to paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a list `[Dataset]`?\r\n\r\nMany thanks for any help.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4507\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4506","id":1272516895,"node_id":"I_kwDODunzps5L2REf","number":4506,"title":"Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results","user":{"login":"DrMatters","id":22641583,"node_id":"MDQ6VXNlcjIyNjQxNTgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22641583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DrMatters","html_url":"https:\/\/github.com\/DrMatters","followers_url":"https:\/\/api.github.com\/users\/DrMatters\/followers","following_url":"https:\/\/api.github.com\/users\/DrMatters\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DrMatters\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DrMatters\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DrMatters\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DrMatters\/orgs","repos_url":"https:\/\/api.github.com\/users\/DrMatters\/repos","events_url":"https:\/\/api.github.com\/users\/DrMatters\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DrMatters\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`","@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake","Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```","installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment","This has been fixed in https:\/\/github.com\/huggingface\/datasets\/pull\/4516, we will do a new release soon to include the fix :)"],"created_at":1655313091000,"updated_at":1656422629000,"closed_at":1656422585000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nSometimes I get messages about not being able to hash a method:\r\n`Parameter 'function'= of the transform datasets.arrow_dataset.Dataset.\r\n_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. 
If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`\r\nWhilst the function looks like this:\r\n```python\r\n@staticmethod\r\ndef _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):\r\n speaker_id, dialogue = tuple(zip(*(example[\"dialogue\"])))\r\n example[\"speaker_id\"] = speaker_id\r\n example[\"dialogue\"] = dialogue\r\n return example\r\n```\r\nThis is the first step in my preprocessing pipeline, but sometimes the message about failure to hash is not appearing on the first step, but then appears on a later step.\r\nThis error is sometimes causing a failure to use cached data, instead of re-running all steps again.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport copy\r\nimport datasets\r\nfrom datasets import arrow_dataset\r\n\r\ndef main():\r\n dataset = datasets.load_dataset(\"blended_skill_talk\")\r\n res = dataset.map(method)\r\n print(res)\r\n\r\ndef method(example: arrow_dataset.Example):\r\n example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance'])\r\n return example\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\nRun with:\r\n```\r\npython -m reproduce_error\r\n```\r\n\r\n## Expected results\r\nDataset is mapped and cached correctly.\r\n\r\n## Actual results\r\nThe code outputs this at some point:\r\n`Parameter 'function'= of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. 
Subsequent hashing failures won't be showed.`\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Ubuntu 20.04.3\r\n- Python version: 3.9.12\r\n- PyArrow version: 8.0.0\r\n- Datasets version: 2.3.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4506\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4505","id":1272477226,"node_id":"PR_kwDODunzps45uH-o","number":4505,"title":"Fix double dots in data files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"],"created_at":1655310664000,"updated_at":1655313358000,"closed_at":1655312753000,"author_association":"MEMBER","active_lock_reason":null,"body":"As mentioned in https:\/\/github.com\/huggingface\/transformers\/pull\/17715 `data_files` can't find a file if the path contains double dots `\/..\/`. This has been introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/4412, by trying to ignore hidden files and directories (i.e. 
if they start with a dot)\r\n\r\nI fixed this and added a test\r\n\r\ncc @sgugger @ydshieh ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505\/reactions","total_count":3,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":3,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4505\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4505","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4505","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4505.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4505.patch","merged_at":1655312753000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4504","id":1272418480,"node_id":"I_kwDODunzps5L15Cw","number":4504,"title":"Can you please add the Stanford dog dataset?","user":{"login":"dgrnd4","id":69434832,"node_id":"MDQ6VXNlcjY5NDM0ODMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69434832?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dgrnd4","html_url":"https:\/\/github.com\/dgrnd4","followers_url":"https:\/\/api.github.com\/users\/dgrnd4\/followers","following_url":"https:\/\/api.github.com\/users\/dgrnd4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dgrnd4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dgrnd4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dgrnd4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dgrnd4\/orgs","repos_url":"https:\/\/api.github.com\/users\/dgrnd4\/repos","events_url":"https:\/\/api.github.com\/users\/dgrnd4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dgrnd4\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"},{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"open","locked":false,"assignee":{"login":"khushmeeet","id":8711912,"node_id":"MDQ6VXNlcjg3MTE5MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8711912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/khushmeeet","html_url":"https:\/\/github.com\/khushmeeet","followers_url":"https:\/\/api.github.com\/users\/khushmeeet\/followers","following_url":"https:\/\/api.github.com\/users\/khushmeeet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/khushmeeet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/khushmeeet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/khushmeeet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/khushmeeet\/orgs","repos_url":"https:\/\/api.github.com\/users\/khushmeeet\/repos","events_url":"https:\/\/api.github.com\/users\/khushmeeet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/khushmeeet\/received_events","type":"User","site_admin":false},"assignees":[{"login":"khushmeeet","id":8711912,"node_id":"MDQ6VXNlcjg3MTE5MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8711912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/khushmeeet","html_url":"https:\/\/github.com\/khushmeeet","followers_url":"https:\/\/api.github.com\/users\/khushmeeet\/followers","following_url":"https:\/\/api.github.com\/users\/khushmeeet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/khushmeeet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/khushmeeet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/khushmeeet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/khushmeeet\/orgs","repos_url":"https:\/\/api.github.com\/users\/khushmeeet\/repos","events_url":"https:\/\/api.github.com\/users\/khushmeeet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/khushmeeet\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)","@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n","Hi! The [ADD NEW DATASET](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers.","If no one is working on this, I could take this up!","@khushmeeet this is the [link](https:\/\/huggingface.co\/datasets\/dgrnd4\/stanford_dog_dataset) where I added the dataset already. If you can I would ask you to do this:\r\n1) The dataset it's all in TRAINING SET: can you please divide it in Training,Test and Validation Set? If you can for each class, take the 80% for the Training set and the 10% for Test and 10% Validation\r\n2) The images has different size, can you please resize all the images in 224,224,3? Look even at the last dimension \"3\" because some images has dimension 4!\r\n\r\nThank you!!","Hi @khushmeeet! Thanks for the interest. You can self-assign the issue by commenting `#self-assign` on it. \r\n\r\nAlso, I think we can skip @dgrnd4's steps as we try to avoid any custom processing on top of raw data. 
One can later copy the script and override `_post_process` in it to perform such processing on the generated dataset.","Thanks @mariosasko \r\n\r\n@dgrnd4 As the dataset is there on the Hub and preprocessing is not recommended, I am not sure if there is any other task to do. However, I can't seem to find the relevant `.py` files for this dataset in the GitHub repo.","@khushmeeet @mariosasko The point is that the images must be processed and must have the same size in order to be used for things like \"Training\". ","@dgrnd4 Yes, but this can be done after loading (`map` to resize images and `train_test_split` to create extra splits)\r\n\r\n@khushmeeet The linked version is implemented as a no-code dataset and is generated directly from the ZIP archive, but our \"GitHub\" datasets (these are datasets without a user\/org namespace on the Hub) need a generation script, and you can find one [here](https:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/image_classification\/stanford_dogs.py). `datasets` started as a fork of TFDS, so we share a similar script structure, which makes it trivial to adapt it.","@mariosasko The point is that if I use something like this:\r\nx_train, x_test = train_test_split(dataset, test_size=0.1) \r\n\r\nto get Train 90% and Test 10%, and then to get the Validation Set (10% of the whole 100%):\r\n\r\n```\r\ntrain_ratio = 0.80\r\nvalidation_ratio = 0.10\r\ntest_ratio = 0.10\r\n\r\nx_train, x_test, y_train, y_test = train_test_split(dataX, dataY, test_size=1 - train_ratio)\r\nx_val, x_test, y_val, y_test = train_test_split(x_test, y_test, test_size=test_ratio\/(test_ratio + validation_ratio)) \r\n\r\n```\r\n\r\nThe point is that the structure of the data is:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20580\r\n })\r\n})\r\n\r\n```\r\n\r\nSo how do I extract images and labels?\r\n\r\nEDIT --> Split of the dataset in Train-Test-Validation:\r\n```\r\nimport datasets\r\nfrom datasets.dataset_dict import DatasetDict\r\nfrom datasets import Dataset\r\n\r\npercentage_divison_test = int(len(dataset['train'])\/100 *10) # 10% --> 2058 \r\npercentage_divison_validation = int(len(dataset['train'])\/100 *20) # 20% --> 4116\r\n\r\ndataset_ = datasets.DatasetDict({\"train\": Dataset.from_dict({\r\n\r\n 'image': dataset['train'][0 : len(dataset['train']) ]['image'], \r\n 'labels': dataset['train'][0 : len(dataset['train']) ]['label'] }), \r\n \r\n \"test\": Dataset.from_dict({ #20580-4116 (validation) ,20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['label'] }), \r\n \r\n \"validation\": Dataset.from_dict({ # 20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['label'] }), \r\n })\r\n```","@mariosasko In order to resize images I'm trying this method: \r\n```\r\nfor i in range(0,len(dataset['train'])): #len(dataset['train'])\r\n\r\n ex = dataset['train'][i] #i\r\n image = ex['image']\r\n image = image.convert(\"RGB\") # \r\n image_resized = image.resize(size_to_resize) # \r\n\r\n dataset['train'][i]['image'] = image_resized \r\n```\r\n\r\nBecause the DatasetDict is backed by Arrow tables that are 
immutable, the assignment in the last line of code doesn't work!\r\nDo you have any idea how to get a valid result?","#self-assign","I have raised a PR for adding the stanford-dog dataset. I have not added any data preprocessing code; only the dataset generation script is there. Let me know of any changes required, or anything to add to the README."],"created_at":1655307575000,"updated_at":1657342067000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *Stanford dog dataset*\r\n- **Description:** *The dataset has 120 classes for a total of 20,580 images. You can find the dataset here: http:\/\/vision.stanford.edu\/aditya86\/ImageNetDogs\/*\r\n- **Paper:** *http:\/\/vision.stanford.edu\/aditya86\/ImageNetDogs\/*\r\n- **Data:** *[link to the Github repository or current dataset location](http:\/\/vision.stanford.edu\/aditya86\/ImageNetDogs\/)*\r\n- **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained purposes.*\r\n\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4504\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4503","id":1272367055,"node_id":"PR_kwDODunzps45twLR","number":4503,"title":"Refactor and add metadata to fever dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","But this is in a way the fever v3 dataset 
(see this link https:\/\/fever.ai\/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as a v3 config (but named it feverous instead of v3 to align with the original naming by the data owners).","In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever\/feverous\".","> In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever\/feverous\".\r\n\r\nYea makes sense ! thanks :) let's push more datasets on the hub rather than on github from now on","I have added the \"feverous\" dataset to the Hub: https:\/\/huggingface.co\/datasets\/fever\/feverous\r\n\r\nI changed the name of this PR accordingly, as now it only:\r\n- Refactors the code and includes, for both Fever v1.0 and v2.0, specific:\r\n - Descriptions\r\n - Citations\r\n - Homepages\r\n- Updates the documentation card aligned with the above:\r\n - It was missing the v2.0 description and citation.\r\n- Updates the metadata JSON"],"created_at":1655305187000,"updated_at":1657108455000,"closed_at":1657107690000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to: #4452 and #3792.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4503\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4503","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4503","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4503.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4503.patch","merged_at":1657107690000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4502","id":1272353700,"node_id":"I_kwDODunzps5L1pOk","number":4502,"title":"Logic bug in 
arrow_writer?","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.","Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.","> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.","Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.","Hi ! 
We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```","Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.","> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`","Great thanks for the response! So I'll just add that regression test and remove the current if-statement.","Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/88a902d6474fae8d793542d57a4f3b0d187f3c5b\/src\/datasets\/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages\/datasets\/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow\/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow\/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```","> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. 
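As an aside, here is an illustrative sketch of a batched `map` function that respects this convention when a chunk comes out empty, which is the situation that triggered this issue. The column name and chunking rule are assumptions for illustration only.

```python
# returning {"text": []} (all columns present, zero rows) matches the
# convention above; returning {} would instead mean "a batch with no columns"
def chunk_texts(batch, chunk_size=10):
    out = {"text": []}
    for text in batch["text"]:
        if len(text) > chunk_size:
            # a batch containing only short texts yields {"text": []}
            out["text"].extend(
                text[i : i + chunk_size] for i in range(0, len(text), chunk_size)
            )
    return out

# hypothetical usage: ds = ds.map(chunk_texts, batched=True)
```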
For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema"],"created_at":1655304600000,"updated_at":1655565351000,"closed_at":1655565351000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/88a902d6474fae8d793542d57a4f3b0d187f3c5b\/src\/datasets\/arrow_writer.py#L475-L488\r\n\r\nI got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:\r\n```\r\n- if batch_examples and len(next(iter(batch_examples.values()))) == 0:\r\n+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:\r\n return\r\n```\r\n@lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4502\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4501","id":1272300646,"node_id":"PR_kwDODunzps45th2M","number":4501,"title":"Corrected broken links in doc","user":{"login":"clefourrier","id":22726840,"node_id":"MDQ6VXNlcjIyNzI2ODQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22726840?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/clefourrier","html_url":"https:\/\/github.com\/clefourrier","followers_url":"https:\/\/api.github.com\/users\/clefourrier\/followers","following_url":"https:\/\/api.github.com\/users\/clefourrier\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/clefourrier\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/clefourrier\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/clefourrier\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/clefourrier\/orgs","repos_url":"https:\/\/api.github.com\/users\/clefourrier\/repos","events_url":"https:\/\/api.github.com\/users\/clefourrier\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/clefourrier\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1655302337000,"updated_at":1655305865000,"closed_at":1655305256000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4501\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4501","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4501","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4501.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4501.patch","merged_at":1655305256000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4500","id":1272281992,"node_id":"PR_kwDODunzps45tdxk","number":4500,"title":"Add `concatenate_datasets` for iterable datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks ! I addressed your comments :)\r\n\r\n> There is a slight difference in concatenate_datasets between the version for map-style datasets and the one for iterable datasets\r\n\r\nIndeed, here is what I did to fix this:\r\n\r\n- axis 0: fill missing columns with None.\r\n(I first iterate over the input datasets to infer their columns from the first examples, then I set the features of the resulting dataset to be the merged features)\r\nThis is consistent with non-streaming concatenation\r\n\r\n- axis 1: **fill the missing rows with None**, for consistency with axis 0\r\n(but let me know what you think, I can still revert this behavior and raise an error when one of the dataset runs out of examples)\r\nWe might have to align the non-streaming concatenation with this behavior though, for consistency. 
What do you think ?","Added more comments as suggested, and some typing\r\n\r\nWhile factorizing _apply_features_types for both IterableDataset and TypedExamplesIterable, I fixed a missing `token_per_repo_id` that was not passed to TypedExamplesIteable\r\n\r\nLet me know what you think now @mariosasko "],"created_at":1655301530000,"updated_at":1656451539000,"closed_at":1656450904000,"author_association":"MEMBER","active_lock_reason":null,"body":"`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets`\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2564\r\n\r\nI also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals\r\n\r\nAnd I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it with `interleave_datasets` (though it's also copied in arrow_dataset module for backward compatibility for now)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4500\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4500","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4500","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4500.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4500.patch","merged_at":1656450904000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4499","id":1272118162,"node_id":"PR_kwDODunzps45s6Jh","number":4499,"title":"fix ETT m1\/m2 test\/val dataset","user":{"login":"kashif","id":8100,"node_id":"MDQ6VXNlcjgxMDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8100?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kashif","html_url":"https:\/\/github.com\/kashif","followers_url":"https:\/\/api.github.com\/users\/kashif\/followers","following_url":"https:\/\/api.github.com\/users\/kashif\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kashif\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kashif\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kashif\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kashif\/orgs","repos_url":"https:\/\/api.github.com\/users\/kashif\/repos","events_url":"https:\/\/api.github.com\/users\/kashif\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kashif\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thansk for the fix ! Can you regenerate the datasets_infos.json please ? 
This way it will update the expected number of examples in the test and val splits","ah yes!"],"created_at":1655293862000,"updated_at":1655304956000,"closed_at":1655304313000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"https:\/\/huggingface.co\/datasets\/ett\/discussions\/1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4499\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4499","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4499","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4499.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4499.patch","merged_at":1655304312000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4498","id":1272100549,"node_id":"I_kwDODunzps5L0rbF","number":4498,"title":"WER and CER > 1","user":{"login":"sadrasabouri","id":43045767,"node_id":"MDQ6VXNlcjQzMDQ1NzY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43045767?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sadrasabouri","html_url":"https:\/\/github.com\/sadrasabouri","followers_url":"https:\/\/api.github.com\/users\/sadrasabouri\/followers","following_url":"https:\/\/api.github.com\/users\/sadrasabouri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sadrasabouri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sadrasabouri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sadrasabouri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sadrasabouri\/orgs","repos_url":"https:\/\/api.github.com\/users\/sadrasabouri\/repos","events_url":"https:\/\/api.github.com\/users\/sadrasabouri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sadrasabouri\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["WER can have values bigger than 1.0; this is expected when there are too many insertions. In the example below, the one-word reference \"Hello\" is compared against a three-word prediction, giving 1 substitution and 2 insertions over N=1 reference words, hence WER = 3 \/ 1 = 3.0.\r\n\r\nFrom [wikipedia](https:\/\/en.wikipedia.org\/wiki\/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0"],"created_at":1655292912000,"updated_at":1655311085000,"closed_at":1655311085000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIt seems that in some cases, when the `prediction` is longer than the `reference`, we may have a word\/character error rate higher than 1, which is a bit 
odd.\r\n\r\nIf it's a real bug I think I can solve it with a PR changing [this](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/wer\/wer.py#L105) line to\r\n```python\r\nreturn min(incorrect \/ total, 1.0)\r\n```\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_metric\r\nwer = load_metric(\"wer\")\r\nwer_value = wer.compute(predictions=[\"Hi World vka\"], references=[\"Hello\"])\r\nprint(wer_value)\r\n```\r\n\r\n## Expected results\r\n```\r\n1.0\r\n```\r\n\r\n## Actual results\r\n```\r\n3.0\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.3.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4498\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4497","id":1271964338,"node_id":"PR_kwDODunzps45sYns","number":4497,"title":"Re-add download_manager module in utils","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.","It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```","As reported in 
https:\/\/github.com\/huggingface\/evaluate\/pull\/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it","Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later"],"created_at":1655286273000,"updated_at":1655289208000,"closed_at":1655288624000,"author_association":"MEMBER","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/pull\/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`\r\n\r\nThis breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager` \r\n\r\nThis PR re-adds `datasets.utils.download_manager` without circular imports.\r\n\r\nWe could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4497\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4497","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4497","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4497.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4497.patch","merged_at":1655288624000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4496","id":1271945704,"node_id":"PR_kwDODunzps45sUnW","number":4496,"title":"Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","FYI I used the following regex to look for the `assertEqual` 
statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!"],"created_at":1655285356000,"updated_at":1657213611000,"closed_at":1657212948000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4496\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4496","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4496","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4496.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4496.patch","merged_at":1657212948000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4495","id":1271851025,"node_id":"PR_kwDODunzps45sAgO","number":4495,"title":"Fix patching module that doesn't exist","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655281070000,"updated_at":1655311249000,"closed_at":1655283249000,"author_association":"MEMBER","active_lock_reason":null,"body":"Reported in https:\/\/github.com\/huggingface\/huggingface_hub\/runs\/6894703718?check_suite_focus=true\r\n\r\nWhen trying to patch `scipy.io.loadmat`:\r\n\r\n```python\r\nModuleNotFoundError: No module named 'scipy'\r\n```\r\n\r\nInstead it shouldn't raise an error and do nothing\r\n\r\nBug introduced by #4375\r\n\r\nFix 
https:\/\/github.com\/huggingface\/datasets\/issues\/4494","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4495\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4495","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4495","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4495.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4495.patch","merged_at":1655283249000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4494","id":1271850599,"node_id":"I_kwDODunzps5LzuZn","number":4494,"title":"Patching fails for modules that are not installed or don't exist","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1655281049000,"updated_at":1655283249000,"closed_at":1655283249000,"author_association":"MEMBER","active_lock_reason":null,"body":"Reported in https:\/\/github.com\/huggingface\/huggingface_hub\/runs\/6894703718?check_suite_focus=true\r\n\r\nWhen trying to patch `scipy.io.loadmat`:\r\n\r\n```python\r\nModuleNotFoundError: No module named 'scipy'\r\n```\r\n\r\nInstead it shouldn't raise an error and do nothing\r\n\r\nWe use patching to extend such functions to support remote URLs and work in streaming mode","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4494\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
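The fix for this boils down to treating a missing module as a no-op rather than an error when applying a patch. Here is a minimal sketch of that defensive pattern; the function name and structure are illustrative, not the actual `datasets` internals.

```python
import importlib

def patch_attr_if_available(module_path: str, attr: str, replacement) -> None:
    """Patch `module_path.attr` only when the module can be imported."""
    try:
        module = importlib.import_module(module_path)
    except ImportError:
        # the target library isn't installed: silently do nothing
        return
    setattr(module, attr, replacement)

# e.g. extend scipy.io.loadmat for streaming only if scipy is present
# (streaming_loadmat is hypothetical):
# patch_attr_if_available("scipy.io", "loadmat", streaming_loadmat)
```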
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4493","id":1271306385,"node_id":"PR_kwDODunzps45qL7J","number":4493,"title":"Add `@transmit_format` in `flatten`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! \ud83e\udd17 ","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4493). All of your documentation changes will be reflected on that endpoint.","Hi, thanks for working on this! 
Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this."],"created_at":1655237349000,"updated_at":1658485736000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As suggested by @mariosasko in https:\/\/github.com\/huggingface\/datasets\/pull\/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated.\r\n\r\n**Edit**: according to @mariosasko comment below, the decorator `@transmit_format` doesn't handle column renaming, so it's done manually for those instead.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4493\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4493","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4493","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4493.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4493.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4492","id":1271112497,"node_id":"PR_kwDODunzps45pktu","number":4492,"title":"Pin the revision in imagenet download links","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655226917000,"updated_at":1655228113000,"closed_at":1655227545000,"author_association":"MEMBER","active_lock_reason":null,"body":"Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to 
restructure the data files in the future. For example, we may split it into many more shards for better parallelism.\r\n\r\ncc @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4492\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4492","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4492","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4492.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4492.patch","merged_at":1655227545000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4491","id":1270803822,"node_id":"I_kwDODunzps5Lvu1u","number":4491,"title":"Dataset Viewer issue for Pavithree\/test","user":{"login":"Pavithree","id":23344465,"node_id":"MDQ6VXNlcjIzMzQ0NDY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23344465?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Pavithree","html_url":"https:\/\/github.com\/Pavithree","followers_url":"https:\/\/api.github.com\/users\/Pavithree\/followers","following_url":"https:\/\/api.github.com\/users\/Pavithree\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Pavithree\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Pavithree\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Pavithree\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Pavithree\/orgs","repos_url":"https:\/\/api.github.com\/users\/Pavithree\/repos","events_url":"https:\/\/api.github.com\/users\/Pavithree\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Pavithree\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This issue can be resolved according to this post https:\/\/stackoverflow.com\/questions\/70566660\/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset."],"created_at":1655212990000,"updated_at":1655217441000,"closed_at":1655217273000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/Pavithree\/test\r\n\r\n### Description\r\n\r\nI have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missing from my end? 
Kindly help.\r\n\r\n### Owner\r\n\r\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4491\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4490","id":1270719074,"node_id":"I_kwDODunzps5LvaJi","number":4490,"title":"Use `torch.nested_tensor` for arrays of varying length in torch formatter","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1655209180000,"updated_at":1655209180000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`.\r\n\r\nThe PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4490\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} 
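For context on #4490 above: a nested tensor packs rows of different lengths into one batch object without padding, which is exactly the shape of a variable-length `Sequence` feature. A small illustration against the prototype API (exposed as `torch.nested.nested_tensor` in recent PyTorch releases; the entry point may change while the feature is in prototype):

```python
import torch

# A ragged column: two rows of different lengths, as a torch-formatted
# variable-length Sequence feature might produce.
rows = [torch.tensor([0.0, 1.0, 2.0]), torch.tensor([3.0, 4.0])]

nt = torch.nested.nested_tensor(rows)   # one object, no padding
print(nt.is_nested)                     # True
print([t.shape for t in nt.unbind()])   # [torch.Size([3]), torch.Size([2])]
```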
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4489","id":1270706195,"node_id":"PR_kwDODunzps45oONF","number":4489,"title":"Add SV-Ident dataset","user":{"login":"e-tornike","id":20404466,"node_id":"MDQ6VXNlcjIwNDA0NDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20404466?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/e-tornike","html_url":"https:\/\/github.com\/e-tornike","followers_url":"https:\/\/api.github.com\/users\/e-tornike\/followers","following_url":"https:\/\/api.github.com\/users\/e-tornike\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/e-tornike\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/e-tornike\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/e-tornike\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/e-tornike\/orgs","repos_url":"https:\/\/api.github.com\/users\/e-tornike\/repos","events_url":"https:\/\/api.github.com\/users\/e-tornike\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/e-tornike\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis\/sv_ident` or `sdproc\/sv_ident` or `coling\/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https:\/\/huggingface.co\/datasets\/projecte-aina\/catalan_textual_corpus","Additionally, please feel free to ping us if you need assistance\/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)","Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https:\/\/huggingface.co\/datasets\/vadis\/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?","Hi @e-tornike, good job at https:\/\/huggingface.co\/datasets\/vadis\/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path\/to\/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! 
;)","I've opened an Issue: \r\n- #4527 "],"created_at":1655208540000,"updated_at":1655714906000,"closed_at":1655714247000,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4489\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4489","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4489","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4489.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4489.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4488","id":1270613857,"node_id":"PR_kwDODunzps45n6Ja","number":4488,"title":"Update PASS dataset version","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655203634000,"updated_at":1655224915000,"closed_at":1655224348000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Update the PASS dataset to version v3 (the newest one) from the [version history](https:\/\/github.com\/yukimasano\/PASS\/blob\/main\/version_history.txt).\r\n\r\nPS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to 
v2.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4488\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4488","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4488","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4488.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4488.patch","merged_at":1655224348000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4487","id":1270525163,"node_id":"PR_kwDODunzps45nm5J","number":4487,"title":"Support streaming UDHR dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655199213000,"updated_at":1655269762000,"closed_at":1655269189000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR:\r\n- Adds support for streaming UDHR dataset\r\n- Adds the BCP 47 language code as feature","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4487\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4487","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4487","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4487.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4487.patch","merged_at":1655269189000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4486","id":1269518084,"node_id":"PR_kwDODunzps45kP88","number":4486,"title":"Add CCAgT dataset","user":{"login":"johnnv1","id":20444345,"node_id":"MDQ6VXNlcjIwNDQ0MzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20444345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnnv1","html_url":"https:\/\/github.com\/johnnv1","followers_url":"https:\/\/api.github.com\/users\/johnnv1\/followers","following_url":"https:\/\/api.github.com\/users\/johnnv1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnnv1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnnv1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnnv1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnnv1\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnnv1\/repos","events_url":"https:\/\/api.github.com\/users\/johnnv1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnnv1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi! Excellent job @johnnv1! There were typos\/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?","I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?","Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation","I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)","Ok, I can do that in the next few days. I will create a `lapix` organization, and I will add this dataset and also #4565","So, I think we can close this PR? 
I have already moved these files there.\r\n\r\nThe link to the CCAgT dataset is: https:\/\/huggingface.co\/datasets\/lapix\/CCAgT\r\n\r\n\ud83e\udd17 ","Woohoo awesome !\r\n\r\nclosing this PR :)"],"created_at":1655130019000,"updated_at":1656945423000,"closed_at":1656944745000,"author_association":"NONE","active_lock_reason":null,"body":"As described in #4075\r\n\r\nI could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the number of NORs in each image.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4486\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4486","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4486","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4486.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4486.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4485","id":1269463054,"node_id":"PR_kwDODunzps45kD7A","number":4485,"title":"Fix cast to null","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1655127872000,"updated_at":1655214234000,"closed_at":1655213654000,"author_association":"MEMBER","active_lock_reason":null,"body":"It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type.\r\n\r\nBecause of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new 
type).\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4483","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4485\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4485","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4485","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4485.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4485.patch","merged_at":1655213654000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4484","id":1269383811,"node_id":"PR_kwDODunzps45jywZ","number":4484,"title":"Better ImportError message when a dataset script dependency is missing","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Discussed offline with @mariosasko, merging :)","Fwiw, I think this same issue is occurring on the datasets website page, where preview isn't available due to the `bigbench` import error","For the preview of BigBench datasets, we're just waiting for bigbench to have a stable version on PyPI, instead of the one hosted on GCS ;)"],"created_at":1655124277000,"updated_at":1657290644000,"closed_at":1655128247000,"author_association":"MEMBER","active_lock_reason":null,"body":"When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. 
This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable.\r\n\r\nI improved it from\r\n```\r\nImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install \"bigbench @ https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\" bigbench bigbench bigbench' for instance'\r\n```\r\nto\r\n```\r\nImportError: To be able to use bigbench, you need to install the following dependency: bigbench.\r\nPlease install it using 'pip install \"bigbench @ https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\"' for instance'\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4484\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4484","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4484","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4484.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4484.patch","merged_at":1655128247000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4483","id":1269253840,"node_id":"I_kwDODunzps5Lp0bQ","number":4483,"title":"Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists","user":{"login":"sanderland","id":48946947,"node_id":"MDQ6VXNlcjQ4OTQ2OTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48946947?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanderland","html_url":"https:\/\/github.com\/sanderland","followers_url":"https:\/\/api.github.com\/users\/sanderland\/followers","following_url":"https:\/\/api.github.com\/users\/sanderland\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanderland\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanderland\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanderland\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanderland\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanderland\/repos","events_url":"https:\/\/api.github.com\/users\/sanderland\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanderland\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. 
We'll do a new release soon\r\n\r\nIn the meantime, you can fix it by specifying in advance that the \"label\" values are integers:\r\n```python\r\nimport numpy as np\r\n\r\nfrom datasets import Dataset, Sequence, Value\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```"],"created_at":1655117272000,"updated_at":1655213654000,"closed_at":1655213654000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nDataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'.\r\n\r\nThis appears to be due to the interaction of arrow internals and some assumptions made by datasets.\r\n\r\nThe bug appeared when binarizing some labels, and then adding a dataset which had all these labels absent (to force the model to not label such empty strings with anything).\r\nParticularly, the fact that this only happens in batched mode is strange.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\nds_mapped = ds.map(mapper,batched=True)\r\n```\r\n\r\n## Expected results\r\nNot crashing\r\n\r\n## Actual results\r\n```\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2346: in map\r\n return self._map_single(\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:532: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:499: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py:458: in wrapper\r\n out = func(self, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2751: in _map_single\r\n writer.write_batch(batch)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py:503: in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\npyarrow\/array.pxi:230: in pyarrow.lib.array\r\n ???\r\npyarrow\/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol\r\n ???\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py:198: in __arrow_array__\r\n out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1675: in wrapper\r\n return func(array, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1812: in cast_array_to_feature\r\n casted_values = _c(array.values, feature.feature)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1675: in wrapper\r\n return func(array, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1843: in cast_array_to_feature\r\n return array_cast(array, feature(), 
allow_number_to_str=allow_number_to_str)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1675: in wrapper\r\n return func(array, *args, **kwargs)\r\n..\/.venv\/lib\/python3.8\/site-packages\/datasets\/table.py:1752: in array_cast\r\n return array.cast(pa_type)\r\npyarrow\/array.pxi:915: in pyarrow.lib.Array.cast\r\n ???\r\n..\/.venv\/lib\/python3.8\/site-packages\/pyarrow\/compute.py:376: in cast\r\n return call_function(\"cast\", [arr], options)\r\npyarrow\/_compute.pyx:542: in pyarrow._compute.call_function\r\n ???\r\npyarrow\/_compute.pyx:341: in pyarrow._compute.Function.call\r\n ???\r\npyarrow\/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status\r\n ???\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> ???\r\nE pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null\r\npyarrow\/error.pxi:121: ArrowNotImplementedError\r\n```\r\n\r\n## Workarounds\r\n\r\n* Not using batched=True\r\n* Using an np.array([],dtype=float) or similar instead of [] in the input\r\n* Naming the output column differently from the input column\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.2\r\n- Platform: Ubuntu\r\n- Python version: 3.8\r\n- PyArrow version: 8.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4483\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4482","id":1269237447,"node_id":"PR_kwDODunzps45jS_c","number":4482,"title":"Test that TensorFlow is not imported on startup","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4482). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1655116429000,"updated_at":1657120793000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"TF takes some time to be imported, and also uses some GPU memory.\r\n\r\nI just added a test to make sure that in the future it's never imported by default when\r\n```python\r\nimport datasets\r\n```\r\nis called.\r\n\r\nRight now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` branch).\r\n\r\nI'll mark this PR as ready for review once `huggingface_hub` has a new release","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4482\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4482","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4482","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4482.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4482.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4481","id":1269187792,"node_id":"PR_kwDODunzps45jIRi","number":4481,"title":"Fix iwslt2017","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","CI failures are just about missing tags in the dataset card, merging!"],"created_at":1655113881000,"updated_at":1655117397000,"closed_at":1655116818000,"author_association":"MEMBER","active_lock_reason":null,"body":"The files were moved to Google Drive, so I hosted them on the Hub instead (OK according to the license).\r\n\r\nI also updated the 
`datasets_infos.json`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4481\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4481","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4481","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4481.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4481.patch","merged_at":1655116818000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4480","id":1268921567,"node_id":"I_kwDODunzps5LojTf","number":4480,"title":"Bigbench tensorflow GPU dependency","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https:\/\/github.com\/google\/BIG-bench) as well regarding the `AttributeError`","I'm on vacation for the next week, so won't be able to do much debugging at the moment. Sorry for the inconvenience.\r\nBut I did quickly take a look:\r\n\r\n**pypi**:\r\nI managed to reproduce the above error with the pypi version being out of date. \r\nThe version on `https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz` should be up to date, but it was my understanding that there was some issue with the pypi upload, so I don't even understand why there is a version [on pypi from April 1](https:\/\/pypi.org\/project\/bigbench\/0.0.1\/). 
Perhaps @ethansdyer, who's handling the pypi upload, knows the answer to that?\r\n\r\n**OOM error**:\r\nBut I'm unable to reproduce the OOM error in a Google Colab with GPU enabled.\r\nThis is what I ran:\r\n```\r\n!pip install bigbench@https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n``` \r\nThe `swedish_to_german_proverbs` task has only 72 examples, so I don't understand what could be causing the OOM error. Loading the task has no effect on the RAM for me. @cceyda Can you confirm that this does not occur in a [colab](https:\/\/colab.research.google.com\/)?\r\nIf the GPU is somehow causing issues on your system, disabling the GPU from TF might be an option too:\r\n```\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\r\n```\r\n","Solved.\r\nYes, it works on Colab, and somehow magically on my machine too now. Hmm, not sure what was wrong before; I had used a fresh venv both times with just the data-loading code and tried multiple times (maybe just a wrong tensorflow version got mixed up somehow). The tensorflow call seems to come from the bigbench side anyway.\r\n\r\nAbout the bigbench pypi version update, I opened an issue over there: https:\/\/github.com\/google\/BIG-bench\/issues\/846\r\n\r\nAnyway, closing this now. If anyone else has the same problem, it can be re-opened."],"created_at":1655097846000,"updated_at":1655235924000,"closed_at":1655235923000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nLoading bigbench\r\n```py\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n```\r\ntries to use the GPU and fails with OOM, with the following error\r\n\r\n```\r\nDownloading and preparing dataset bigbench\/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to \/home\/ceyda\/.cache\/huggingface\/datasets\/bigbench\/swedish_to_german_proverbs\/1.0.0\/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0...\r\nGenerating default split: 0%| | 0\/72 [00:00<?, ?ex\/s]\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"\/home\/ceyda\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/bigbench\/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0\/bigbench.py\", line 118, in <module>\r\n class Bigbench(datasets.GeneratorBasedBuilder):\r\n File \"\/home\/ceyda\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/bigbench\/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0\/bigbench.py\", line 127, in Bigbench\r\n BigBenchConfig(name=name, version=datasets.Version(\"1.0.0\")) for name in bb_utils.get_all_json_task_names()\r\nAttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names'\r\n```\r\n\r\n## Steps to avoid the bug\r\nNot ideal, but this can be solved with the following (since I don't really use tensorflow elsewhere):\r\n`pip uninstall tensorflow` \r\n`pip install tensorflow-cpu`\r\n\r\n\r\n## Environment info\r\n- datasets @ master\r\n- Python version: 
3.7\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4480\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4479","id":1268558237,"node_id":"PR_kwDODunzps45hHtZ","number":4479,"title":"Include entity positions as feature in ReCoRD","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for the reply @lhoestq!\r\n\r\nI have succeeded with `datasets-cli test .\/datasets\/super_glue --name record --save_infos`,\r\nbut as you can see, the check ran into `FAILED tests\/test_dataset_cards.py::test_changed_dataset_card[super_glue] - V...`.\r\nHow can we solve it?","That would be neat! Let me implement it."],"created_at":1655034988000,"updated_at":1660951382000,"closed_at":1660915428000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"https:\/\/huggingface.co\/datasets\/super_glue\/viewer\/record\/validation\r\n\r\nTLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD.\r\n\r\nCurrently, the loading script ignores the entity positions (\"entity_start\", \"entity_end\") and only records entity text. This might be because the training method of the official baseline is to make n training instances from a data point by replacing \\\"\\@ placeholder\\\" in the query with each entity individually.\r\n\r\nBut it increases the already heavy computation multiple-fold. So DeBERTa uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance from one data point (see the sketch below). 
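For illustration, one way a loading script could keep the spans next to the entity strings is a feature spec along these lines. This is a hypothetical sketch, not the layout of the actual PR; only the "entity_start"/"entity_end" field names come from the source data, and the `entity_spans` name is invented here:

```python
from datasets import Features, Sequence, Value

# Hypothetical feature spec: keep the character offsets ("entity_start" /
# "entity_end" in the source data) instead of dropping them.
features = Features({
    "passage": Value("string"),
    "query": Value("string"),
    "entities": Sequence(Value("string")),
    "entity_spans": Sequence({
        "start": Value("int32"),
        "end": Value("int32"),
    }),
})
```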
It is way more efficient and proved effective for the ReCoRD task.\r\n\r\nCan anybody help me with the dataset card rendering error? Maybe @lhoestq ?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4479\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4479","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4479","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4479.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4479.patch","merged_at":1660915428000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4478","id":1268358213,"node_id":"I_kwDODunzps5LmZxF","number":4478,"title":"Dataset slow during model training","user":{"login":"lehrig","id":9555494,"node_id":"MDQ6VXNlcjk1NTU0OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9555494?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lehrig","html_url":"https:\/\/github.com\/lehrig","followers_url":"https:\/\/api.github.com\/users\/lehrig\/followers","following_url":"https:\/\/api.github.com\/users\/lehrig\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lehrig\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lehrig\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lehrig\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lehrig\/orgs","repos_url":"https:\/\/api.github.com\/users\/lehrig\/repos","events_url":"https:\/\/api.github.com\/users\/lehrig\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lehrig\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM","Hi @lehrig, I suspect what's happening here is that our `to_tf_dataset()` method has some performance issues when streaming samples. This is usually not a problem, but they become apparent when streaming a vision dataset into a very small vision model, which will need a lot of sample throughput to saturate the GPU.\r\n\r\nWhen you save a `tf.data.Dataset` with `tf.data.experimental.save`, all of the samples from the dataset (which are, in this case, batches of images), are saved to disk. 
When you load this saved dataset, you're effectively bypassing `to_tf_dataset()` entirely, which alleviates this performance bottleneck.\r\n\r\n`to_tf_dataset()` is something we're actively working on overhauling right now - particularly for image datasets, we want to make it possible to access the underlying images with `tf.data` without going through the current layer of indirection with `Arrow`, which should massively improve simplicity and performance. \r\n\r\nHowever, if you just want this to work quickly but without needing your save\/load hack, my advice would be to simply load the dataset into memory if it's small enough to fit. Since all your samples have the same dimensions, you can do this simply with:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ndataset = dataset.with_format(\"numpy\")\r\ndata_in_memory = dataset[:]\r\n```\r\n\r\nThen you can simply do something like:\r\n\r\n```\r\nmodel.fit(data_in_memory[\"pixel_values\"], data_in_memory[\"labels\"])\r\n```","Thanks for the information! \r\n\r\nI have now updated the training code like so:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ntrain_dataset = dataset[\"train\"][:]\r\nvalidation_dataset = dataset[\"dev\"][:]\r\n\r\n...\r\n\r\nmodel.fit(\r\n train_dataset[\"pixel_values\"],\r\n train_dataset[\"label\"],\r\n epochs=epochs,\r\n validation_data=(\r\n validation_dataset[\"pixel_values\"],\r\n validation_dataset[\"label\"]\r\n ),\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n- Creating the in-memory dataset is quite quick\r\n- But: There is now a long wait (~4-5 Minutes) before the training starts (why?)\r\n- And: Training times have improved but the very first epoch leaves me wondering why it takes so long (why?)\r\n\r\n**Epoch Breakdown:**\r\n- Epoch 1\/10\r\n78s 12s\/step - loss: 3.1307 - accuracy: 0.0737 - val_loss: 2.2827 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 2\/10\r\n1s 168ms\/step - loss: 2.3616 - accuracy: 0.2350 - val_loss: 2.2679 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 3\/10\r\n1s 189ms\/step - loss: 2.0221 - accuracy: 0.3180 - val_loss: 2.2670 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 4\/10\r\n0s 67ms\/step - loss: 1.8895 - accuracy: 0.3548 - val_loss: 2.2771 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 5\/10\r\n0s 67ms\/step - loss: 1.7846 - accuracy: 0.3963 - val_loss: 2.2860 - val_accuracy: 0.1455 - lr: 0.0010\r\n- Epoch 6\/10\r\n0s 65ms\/step - loss: 1.5946 - accuracy: 0.4516 - val_loss: 2.2938 - val_accuracy: 0.1636 - lr: 0.0010\r\n- Epoch 7\/10\r\n0s 63ms\/step - loss: 1.4217 - accuracy: 0.5115 - val_loss: 2.2968 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 8\/10\r\n0s 67ms\/step - loss: 1.3089 - accuracy: 0.5438 - val_loss: 2.2842 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 9\/10\r\n1s 184ms\/step - loss: 1.2480 - accuracy: 0.5806 - val_loss: 2.2652 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 10\/10\r\n0s 65ms\/step - loss: 1.2699 - accuracy: 0.5622 - val_loss: 2.2670 - val_accuracy: 0.2000 - lr: 0.0010\r\n\r\n","Regarding the new long ~5 min. wait introduced by the in-memory dataset update: this might be causing it? https:\/\/datascience.stackexchange.com\/questions\/33364\/why-model-fit-generator-in-keras-is-taking-so-much-time-even-before-picking-the\r\n\r\nFor now, my save\/load hack is still more performant, even though having more boiler-plate code :\/ ","That 5 minute wait is quite surprising! 
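One way to narrow down where those minutes go is to time the two steps separately. A minimal sketch (the names `dataset` and `model` follow the snippets above; nothing else is measured here):

```python
import time

# Hypothetical timing sketch: measure in-memory materialization and the
# first model.fit call separately to see which one accounts for the wait.
t0 = time.perf_counter()
train_dataset = dataset["train"][:]  # pull the Arrow data into memory
print(f"materialize: {time.perf_counter() - t0:.1f}s")

t0 = time.perf_counter()
model.fit(train_dataset["pixel_values"], train_dataset["label"], epochs=1)
print(f"first epoch: {time.perf_counter() - t0:.1f}s")
```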
I don't have a good explanation for why it's happening, but it can't be an issue with `datasets` or `tf.data` because you're just fitting directly on Numpy arrays at this point. All I can suggest is seeing if you can isolate the issue - for example, does fitting on a smaller dataset containing only 10% of the original data reduce the wait? This might indicate the delay is caused by your data being copied or converted somehow. Alternatively, you could try removing things like callbacks and seeing if you could isolate the issue there."],"created_at":1654976419000,"updated_at":1655208271000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhile migrating towards \ud83e\udd17 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.\r\n\r\nFirst, I have optimized my dataset following https:\/\/discuss.huggingface.co\/t\/solved-image-dataset-seems-slow-for-larger-image-size\/10960\/6, which actually improved the situation from what I had before but did not completely solve it.\r\n\r\nSecond, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with \ud83e\udd17 Datasets.\r\n\r\nAny idea what's the reason for this and how to speed-up training with \ud83e\udd17 Datasets?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\n\r\nfrom datasets import load_dataset\r\nimport os\r\n\r\ndataset_dir = \".\/dataset\"\r\nprep_dataset_dir = \".\/prepdataset\"\r\nmodel_dir = \".\/model\"\r\n\r\n# Load Data\r\ndataset = load_dataset(\"Lehrig\/Monkey-Species-Collection\", \"downsized\")\r\ndef read_image_file(example):\r\n with open(example[\"image\"].filename, \"rb\") as f:\r\n example[\"image\"] = {\"bytes\": f.read()}\r\n return example\r\ndataset = dataset.map(read_image_file)\r\ndataset.save_to_disk(dataset_dir)\r\n\r\n# Preprocess\r\nfrom datasets import (\r\n Array3D,\r\n DatasetDict,\r\n Features,\r\n load_from_disk,\r\n Sequence,\r\n Value\r\n)\r\nimport numpy as np\r\nfrom transformers import ImageFeatureExtractionMixin\r\n\r\ndataset = load_from_disk(dataset_dir)\r\n\r\nnum_classes = dataset[\"train\"].features[\"label\"].num_classes\r\none_hot_matrix = np.eye(num_classes)\r\nfeature_extractor = ImageFeatureExtractionMixin()\r\n\r\ndef to_pixels(image):\r\n image = feature_extractor.resize(image, size=size)\r\n image = feature_extractor.to_numpy_array(image, channel_first=False)\r\n image = image \/ 255.0\r\n return image\r\n\r\ndef process(examples):\r\n examples[\"pixel_values\"] = [\r\n to_pixels(image) for image in examples[\"image\"]\r\n ]\r\n examples[\"label\"] = [\r\n one_hot_matrix[label] for label in examples[\"label\"]\r\n ]\r\n return examples\r\n\r\nfeatures = Features({\r\n \"pixel_values\": Array3D(dtype=\"float32\", shape=(size, size, 3)),\r\n \"label\": Sequence(feature=Value(dtype=\"int32\"), length=num_classes)\r\n})\r\n\r\nprep_dataset = dataset.map(\r\n process,\r\n remove_columns=[\"image\"],\r\n batched=True,\r\n batch_size=batch_size,\r\n num_proc=2,\r\n features=features,\r\n)\r\n\r\nprep_dataset = prep_dataset.with_format(\"numpy\")\r\n\r\n# Split\r\ntrain_dev_dataset = prep_dataset['test'].train_test_split(\r\n test_size=test_size,\r\n shuffle=True,\r\n 
seed=seed\r\n)\r\n\r\ntrain_dev_test_dataset = DatasetDict({\r\n 'train': train_dev_dataset['train'],\r\n 'dev': train_dev_dataset['test'],\r\n 'test': prep_dataset['test'],\r\n})\r\n\r\ntrain_dev_test_dataset.save_to_disk(prep_dataset_dir)\r\n\r\n# Train Model\r\nimport datetime\r\nimport tensorflow as tf\r\nfrom tensorflow.keras import Sequential\r\nfrom tensorflow.keras.applications import InceptionV3\r\nfrom tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization\r\nfrom tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping\r\nfrom transformers import DefaultDataCollator\r\n\r\ndataset = load_from_disk(prep_data_dir)\r\n\r\ndata_collator = DefaultDataCollator(return_tensors=\"tf\")\r\n\r\ntrain_dataset = dataset[\"train\"].to_tf_dataset(\r\n columns=['pixel_values'],\r\n label_cols=['label'],\r\n shuffle=True,\r\n batch_size=batch_size,\r\n collate_fn=data_collator\r\n)\r\n\r\nvalidation_dataset = dataset[\"dev\"].to_tf_dataset(\r\n columns=['pixel_values'],\r\n label_cols=['label'],\r\n shuffle=False,\r\n batch_size=batch_size,\r\n collate_fn=data_collator\r\n)\r\n\r\nprint(f'{datetime.datetime.now()} - Saving Data')\r\ntf.data.experimental.save(train_dataset, model_dir+\"\/train\")\r\ntf.data.experimental.save(validation_dataset, model_dir+\"\/val\")\r\n\r\nprint(f'{datetime.datetime.now()} - Loading Data')\r\ntrain_dataset = tf.data.experimental.load(model_dir+\"\/train\")\r\nvalidation_dataset = tf.data.experimental.load(model_dir+\"\/val\")\r\n\r\nshape = np.shape(dataset[\"train\"][0][\"pixel_values\"])\r\nbackbone = InceptionV3(\r\n include_top=False,\r\n weights='imagenet',\r\n input_shape=shape\r\n)\r\n\r\nfor layer in backbone.layers:\r\n layer.trainable = False\r\n\r\nmodel = Sequential()\r\nmodel.add(backbone)\r\nmodel.add(GlobalAveragePooling2D())\r\nmodel.add(Dense(128, activation='relu'))\r\nmodel.add(BatchNormalization())\r\nmodel.add(Dropout(0.3))\r\nmodel.add(Dense(64, activation='relu'))\r\nmodel.add(BatchNormalization())\r\nmodel.add(Dropout(0.3))\r\nmodel.add(Dense(10, activation='softmax'))\r\n\r\nmodel.compile(\r\n optimizer='adam',\r\n loss='categorical_crossentropy',\r\n metrics=['accuracy']\r\n)\r\n\r\nprint(model.summary())\r\n\r\nearlyStopping = EarlyStopping(\r\n monitor='val_loss',\r\n patience=10,\r\n verbose=0,\r\n mode='min'\r\n)\r\n\r\nmcp_save = ModelCheckpoint(\r\n f'{model_dir}\/best_model.hdf5',\r\n save_best_only=True,\r\n monitor='val_loss',\r\n mode='min'\r\n)\r\n\r\nreduce_lr_loss = ReduceLROnPlateau(\r\n monitor='val_loss',\r\n factor=0.1,\r\n patience=7,\r\n verbose=1,\r\n min_delta=0.0001,\r\n mode='min'\r\n)\r\n\r\nhist = model.fit(\r\n train_dataset,\r\n epochs=epochs,\r\n validation_data=validation_dataset,\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n## Expected results\r\nSame performance when training without my \"save\/load hack\" or a good explanation\/recommendation about the issue.\r\n\r\n## Actual results\r\nPerformance slower without my \"save\/load hack\".\r\n\r\n**Epoch Breakdown (without my \"save\/load hack\"):**\r\n- Epoch 1\/10\r\n41s 2s\/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010\r\n- Epoch 2\/10\r\n32s 2s\/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010\r\n- Epoch 3\/10\r\n36s 3s\/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010\r\n- Epoch 4\/10\r\n36s 3s\/step - loss: 0.2721 - accuracy: 
0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010\r\n- Epoch 5\/10\r\n32s 2s\/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010\r\n- Epoch 6\/10\r\n42s 3s\/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010\r\n- Epoch 7\/10\r\n32s 2s\/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010\r\n- Epoch 8\/10\r\n32s 2s\/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010\r\n- Epoch 9\/10\r\nloss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010\r\n- Epoch 10\/10\r\n32s 2s\/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010\r\n\r\n\r\n\r\n**Epoch Breakdown (with my \"save\/load hack\"):**\r\n- Epoch 1\/10\r\n13s 625ms\/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010\r\n- Epoch 2\/10\r\n0s 80ms\/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 3\/10\r\n0s 77ms\/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 4\/10\r\n1s 98ms\/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 5\/10\r\n1s 204ms\/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 6\/10\r\n1s 210ms\/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 7\/10\r\n1s 205ms\/step - loss: 1.4018 - accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 8\/10\r\n1s 207ms\/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 9\/10\r\n1s 214ms\/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010\r\n- Epoch 10\/10\r\n1s 207ms\/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010\r\n\r\n## Environment info\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2\r\n- TensorFlow: 2.8.0\r\n- GPU (used during training): Tesla V100-SXM2-32GB\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4478\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4477","id":1268308986,"node_id":"I_kwDODunzps5LmNv6","number":4477,"title":"Dataset Viewer issue for 
fgrezes\/WIESP2022-NER","user":{"login":"AshTayade","id":42551754,"node_id":"MDQ6VXNlcjQyNTUxNzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42551754?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AshTayade","html_url":"https:\/\/github.com\/AshTayade","followers_url":"https:\/\/api.github.com\/users\/AshTayade\/followers","following_url":"https:\/\/api.github.com\/users\/AshTayade\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AshTayade\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AshTayade\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AshTayade\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AshTayade\/orgs","repos_url":"https:\/\/api.github.com\/users\/AshTayade\/repos","events_url":"https:\/\/api.github.com\/users\/AshTayade\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AshTayade\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["https:\/\/huggingface.co\/datasets\/fgrezes\/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at \/src\/services\/worker\/fgrezes\/WIESP2022-NER\/WIESP2022-NER.py or any data file in the same directory. 
Couldn't find 'fgrezes\/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes\/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ","Apparently it finds `scoring-scripts\/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc."],"created_at":1654962557000,"updated_at":1658149653000,"closed_at":1658149653000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4477\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4476","id":1267987499,"node_id":"I_kwDODunzps5Lk_Qr","number":4476,"title":"`to_pandas` doesn't take into account 
format.","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`","Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.","Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing\/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```","Ahhhh Thank you!\r\n\r\nclosing then :)"],"created_at":1654892731000,"updated_at":1655314901000,"closed_at":1655314901000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nI have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`.\r\n\r\n**Describe the solution you'd like**\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(columns=['a', 'b']).to_pandas()\r\n\r\n# I would expect `pandas_df` to only include a,b as column.\r\n```\r\n\r\n\r\n**Describe alternatives you've considered**\r\nI could remove all columns that I don't want? But I don't know all of them in advance. 
\r\n\r\n**Additional context**\r\nI can probably make a PR with some pointers.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4476\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4475","id":1267798451,"node_id":"PR_kwDODunzps45eufw","number":4475,"title":"Improve error message for missing pacakges from inside dataset script","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I opened a PR before I noticed yours ^^' You can find it here: https:\/\/github.com\/huggingface\/datasets\/pull\/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer","Closing in favor of #4484. 
"],"created_at":1654880376000,"updated_at":1655126787000,"closed_at":1655126203000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Improve the error message for missing packages from inside a dataset script:\r\n\r\nWith this change, the error message for missing packages for `bigbench` looks as follows:\r\n```\r\nImportError: To be able to use bigbench, you need to install the following dependencies:\r\n - 'bigbench' using 'pip install \"bigbench @ https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\"'\r\n```\r\n\r\nAnd this is how it looked before:\r\n```\r\nImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install \"bigbench @ https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\" bigbench bigbench bigbench' for instance'\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4475\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4475","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4475","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4475.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4475.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4474","id":1267767541,"node_id":"PR_kwDODunzps45en98","number":4474,"title":"[Docs] How to use with PyTorch page","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to 
documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654878349000,"updated_at":1655217632000,"closed_at":1655215473000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :)\r\n\r\ncc @Rocketknight1 we can try to align both documentations contents now I think\r\n\r\ncc @stevhliu let me know what you think !","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":1,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4474\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4474","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4474","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4474.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4474.patch","merged_at":1655215472000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4473","id":1267555994,"node_id":"PR_kwDODunzps45d5-R","number":4473,"title":"Add SST-2 dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","on the hub this dataset is referenced as `sst-2` not `sst2` \u2013 is there a canonical orthography? 
If not, could we name it `sst-2`?","@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with an underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why.)\r\n\r\nI think the reason is that there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find it confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever decision we make, I would prefer it to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/glue\/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https:\/\/pytorch.org\/text\/stable\/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https:\/\/github.com\/openai\/CLIP\/blob\/main\/data\/rendered-sst2.md\r\n- Kaggle: `SST2` https:\/\/www.kaggle.com\/datasets\/atulanandjha\/stanford-sentiment-treebank-v2-sst2\/version\/22\r\n- TensorFlow Datasets: `glue\/sst2` https:\/\/www.tensorflow.org\/datasets\/catalog\/glue#gluesst2","Ok, another option is to open PRs against the models in https:\/\/huggingface.co\/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already \u2013 but they're less popular: https:\/\/huggingface.co\/models?datasets=sst2)","OK, I'm taking care of the subsequent PRs on models to align with this dataset name."],"created_at":1654868246000,"updated_at":1655129494000,"closed_at":1655128869000,"author_association":"MEMBER","active_lock_reason":null,"body":"Add SST-2 dataset.\r\n\r\nCurrently it is part of the GLUE benchmark.\r\n\r\nThis PR adds it as a standalone dataset.\r\n\r\nCC: @julien-c ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4473\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4473","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4473","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4473.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4473.patch","merged_at":1655128869000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4472","id":1267488523,"node_id":"PR_kwDODunzps45drcb","number":4472,"title":"Fix 401 error for unauthenticated requests to non-existing 
repos","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654864691000,"updated_at":1654866311000,"closed_at":1654865757000,"author_association":"MEMBER","active_lock_reason":null,"body":"The hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos.\r\nThis PR add support for the 401 error and fixes the CI fails on `master`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4472\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4472","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4472","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4472.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4472.patch","merged_at":1654865756000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4471","id":1267475268,"node_id":"I_kwDODunzps5LjCNE","number":4471,"title":"CI error with repo 
lhoestq\/_dummy","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/4472"],"created_at":1654863966000,"updated_at":1654867493000,"closed_at":1654867493000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nCI is failing because of repo \"lhoestq\/_dummy\". See: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/12461\/workflows\/1b040b45-9578-4ab9-8c44-c643c4eb8691\/jobs\/74269\r\n```\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https:\/\/huggingface.co\/api\/datasets\/lhoestq\/_dummy?full=true\r\n```\r\n\r\nThe repo seems to no longer exist: https:\/\/huggingface.co\/api\/datasets\/lhoestq\/_dummy\r\n```\r\nerror: \"Repository not found\"\r\n```\r\n\r\nCC: @lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4471\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4470","id":1267470051,"node_id":"PR_kwDODunzps45dnYw","number":4470,"title":"Reorder returned validation\/test splits in script 
template","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654863673000,"updated_at":1654884250000,"closed_at":1654883690000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4470\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4470","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4470","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4470.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4470.patch","merged_at":1654883690000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4469","id":1267213849,"node_id":"PR_kwDODunzps45cweQ","number":4469,"title":"Replace data URLs in wider_face dataset once hosted on the 
Hub","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654848805000,"updated_at":1654879328000,"closed_at":1654878766000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved to host their data on the Hub.\r\n\r\nThey also informed us that their dataset is licensed under CC BY-NC-ND.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4469\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4469","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4469","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4469.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4469.patch","merged_at":1654878766000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4468","id":1266715742,"node_id":"PR_kwDODunzps45bERK","number":4468,"title":"Generalize tutorials for audio and 
vision","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654812044000,"updated_at":1655223722000,"closed_at":1655223120000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic\/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset.\r\n\r\nOther changes include:\r\n\r\n- Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder.\r\n- Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library.\r\n- Renamed some tutorials in the TOC to be more clear and specific.\r\n- Added more text to nudge users towards joining the community and asking questions on the forums.\r\n- If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4468\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4468","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4468","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4468.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4468.patch","merged_at":1655223120000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4467","id":1266218358,"node_id":"I_kwDODunzps5LePV2","number":4467,"title":"Transcript string 'null' converted to [None] by load_dataset()","user":{"login":"mbarnig","id":1360633,"node_id":"MDQ6VXNlcjEzNjA2MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1360633?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mbarnig","html_url":"https:\/\/github.com\/mbarnig","followers_url":"https:\/\/api.github.com\/users\/mbarnig\/followers","following_url":"https:\/\/api.github.com\/users\/mbarnig\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mbarnig\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mbarnig\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mbarnig\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mbarnig\/orgs","repos_url":"https:\/\/api.github.com\/users\/mbarnig\/repos","events_url":"https:\/\/api.github.com\/users\/mbarnig\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mbarnig\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\n\u2018\u2019, \u2018#N\/A\u2019, \u2018#N\/A N\/A\u2019, \u2018#NA\u2019, \u2018-1.#IND\u2019, \u2018-1.#QNAN\u2019, \u2018-NaN\u2019, \u2018-nan\u2019, \u20181.#IND\u2019, \u20181.#QNAN\u2019, \u2018\u2019, \u2018N\/A\u2019, \u2018NA\u2019, \u2018NULL\u2019, \u2018NaN\u2019, \u2018n\/a\u2019, \u2018nan\u2019, \u2018null\u2019.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```","Thanks for the quick answer."],"created_at":1654784760000,"updated_at":1654797337000,"closed_at":1654792142000,"author_association":"NONE","active_lock_reason":null,"body":"## Issue\r\nI am training a luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of 
Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script \r\n\r\n`ds_train1 = mydataset.map(prepare_dataset)` \r\n\r\nthe following error was raised:\r\n\r\n``` \r\nValueError Traceback (most recent call last)\r\n in ()\r\n----> 1 ds_train = mydataset_train.map(prepare_dataset)\r\n\r\n11 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/transformers\/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2450 if not _is_valid_text_input(text):\r\n 2451 raise ValueError(\r\n-> 2452 \"text input must of type str (single example), List[str] (batch or single pretokenized example) \"\r\n 2453 \"or List[List[str]] (batch of pretokenized examples).\"\r\n 2454 )\r\n\r\nValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).\r\n```\r\nDebugging this problem was not easy; all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. After deleting this row from the dataset, the training worked fine.\r\n\r\n## Expected result: \r\ntranscription 'null' interpreted as 'str' instead of 'None'.\r\n\r\n## Reproduction\r\nHere is the code to reproduce the error with a one-row dataset.\r\n\r\n``` \r\nimport csv\r\n\r\nwith open(\"null-test.csv\") as f:\r\n    reader = csv.reader(f)\r\n    for row in reader:\r\n        print(row)\r\n``` \r\n\r\n['wav_filename', 'wav_filesize', 'transcript']\r\n['wavs\/female\/NULL1.wav', '17530', 'null']\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) \r\n``` \r\n\r\nUsing custom data configuration default-81ac0c0e27af3514\r\nDownloading and preparing dataset csv\/default to \/root\/.cache\/huggingface\/datasets\/csv\/default-81ac0c0e27af3514\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...\r\nDownloading data files: 100%\r\n1\/1 [00:00<00:00, 29.55it\/s]\r\nExtracting data files: 100%\r\n1\/1 [00:00<00:00, 23.66it\/s]\r\nDataset csv downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/csv\/default-81ac0c0e27af3514\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. 
Subsequent calls will reuse this data.\r\n100%\r\n1\/1 [00:00<00:00, 25.84it\/s]\r\n\r\n``` \r\nprint(dataset['train']['transcript'])\r\n``` \r\n\r\n[None]\r\n\r\n## Environment info\r\n```\r\n!pip install datasets==2.2.2\r\n!pip install transformers==4.19.2\r\n``` ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4467\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4466","id":1266159920,"node_id":"PR_kwDODunzps45ZLsd","number":4466,"title":"Optimize contiguous shard and select","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import Dataset, load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n    size = len(ds) \/\/ num_shards\r\n    shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n    shard.to_json(f\"tmp\/data_{index}.jsonl\")\r\n```\r\n\r\nIt runs in 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n    shard.to_json(f\"tmp\/data_{index}.jsonl\")\r\n    # upload_to_gcs(f\"tmp\/data_{index}.jsonl\")\r\n```\r\n\r\nIt took 2min31s. 
\r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n[screenshot]\r\n","I addressed your comments @albertvillanova , let me know what you think :)"],"created_at":1654782339000,"updated_at":1655222670000,"closed_at":1655222085000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more efficient to simply slice the Arrow table instead of building an indices mapping. In particular:\r\n- the shard\/select operation will be much faster\r\n- reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping\r\n\r\nSince `.shard()` is also used for `.map()` with `num_proc>1`, it will significantly improve the reading speed of multiprocessed `.map()` operations as well\r\n\r\nHere is an example of speed-up:\r\n```python\r\n>>> import io\r\n>>> import numpy as np\r\n>>> from datasets import Dataset\r\n>>> ds = Dataset.from_dict({\"a\": np.random.rand(10_000_000)})\r\n>>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))`\r\n>>> buf = io.BytesIO()\r\n>>> %time shard.to_json(buf)\r\nCreating json from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100\/100 [00:00<00:00, 376.17ba\/s]\r\nCPU times: user 258 ms, sys: 9.06 ms, total: 267 ms\r\nWall time: 266 ms\r\n```\r\nwhile previously it was\r\n```python\r\nCreating json from Arrow format: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100\/100 [00:03<00:00, 29.41ba\/s]\r\nCPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s\r\nWall time: 3.4 s\r\n```\r\n\r\nIn this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON.\r\n\r\n## Implementation details\r\n\r\nI mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous, it checks two possibilities:\r\n- if the indices are of type `range`, it checks that start >= 0 and step = 1\r\n- otherwise in the general case, it iterates over the indices. 
If all the indices are contiguous then we're good, otherwise we have to build an indices mapping.\r\n\r\nHaving to iterate over the indices doesn't cause performance issues IMO because:\r\n- either they are contiguous and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping\r\n- or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous (see the contiguity-check sketch below).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4466\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4466","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4466","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4466.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4466.patch","merged_at":1655222085000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4465","id":1265754479,"node_id":"PR_kwDODunzps45X0XY","number":4465,"title":"Fix bigbench config names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654761979000,"updated_at":1654785516000,"closed_at":1654784959000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix https:\/\/github.com\/huggingface\/datasets\/issues\/4462 in the case of 
bigbench","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4465\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4465","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4465","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4465.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4465.patch","merged_at":1654784958000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4464","id":1265682931,"node_id":"PR_kwDODunzps45XlWW","number":4464,"title":"Extend support for streaming datasets that use xml.dom.minidom.parse","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654757905000,"updated_at":1654764204000,"closed_at":1654763656000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.\r\n\r\nThis PR adds support for streaming datasets like \"Yaxin\/SemEval2015\".\r\n\r\nFix 
#4453.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4464\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4464","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4464","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4464.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4464.patch","merged_at":1654763655000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4463","id":1265093211,"node_id":"PR_kwDODunzps45Vnzu","number":4463,"title":"Use config_id to check split sizes instead of config name","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","closing in favor of https:\/\/github.com\/huggingface\/datasets\/pull\/4465"],"created_at":1654710324000,"updated_at":1654762543000,"closed_at":1654761997000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix https:\/\/github.com\/huggingface\/datasets\/issues\/4462","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4463\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4463","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4463","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4463.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4463.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4462","id":1265079347,"node_id":"I_kwDODunzps5LZ5Qz","number":4462,"title":"BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Why not adding `max_examples` as part of the config name?","Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https:\/\/github.com\/huggingface\/datasets\/pull\/4463","Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortunately, after trying the fix in #4465 I still see the same issue.\r\n\r\nI think there is some subtlety where the config name gets overwritten somewhere when `BUILDER_CONFIGS`[(link)](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/bigbench\/bigbench.py#L126) is defined. \r\n\r\nIf I print out the `self.config.name` in the current version (with the fix in #4465), I see just the task name, but if I comment out `BUILDER_CONFIGS`, the `num_shots` and `max_examples` gets appended as was meant by #4465.\r\n\r\nI haven't managed to track down where this happens, but I thought you might know? \r\n\r\n(Another comment on your fix: the `name` variable is used to fetch the task from the bigbench API, so modifying it causes an error if it's actually called. This can easily be fixed by having `config_name` variable in addition to the `task_name`)\r\n\r\n\r\n"],"created_at":1654709484000,"updated_at":1657006795000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"As noticed in https:\/\/github.com\/huggingface\/datasets\/pull\/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. 
named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`.\r\n\r\nThis is because it checks the expected number of examples for the config with the same name, without taking the `max_examples` parameter into account. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name plus an optional suffix that depends on the config parameters.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4462\/timeline","performed_via_github_app":null,"state_reason":"reopened","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4461","id":1264800451,"node_id":"I_kwDODunzps5LY1LD","number":4461,"title":"AttributeError: module 'datasets' has no attribute 'load_dataset'","user":{"login":"AlexNLP","id":59248970,"node_id":"MDQ6VXNlcjU5MjQ4OTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59248970?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AlexNLP","html_url":"https:\/\/github.com\/AlexNLP","followers_url":"https:\/\/api.github.com\/users\/AlexNLP\/followers","following_url":"https:\/\/api.github.com\/users\/AlexNLP\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AlexNLP\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AlexNLP\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AlexNLP\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AlexNLP\/orgs","repos_url":"https:\/\/api.github.com\/users\/AlexNLP\/repos","events_url":"https:\/\/api.github.com\/users\/AlexNLP\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AlexNLP\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1654696760000,"updated_at":1654699260000,"closed_at":1654699260000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI have pip-installed datasets, but this package doesn't have these attributes: load_dataset, load_metric.\r\n\r\n## Environment info\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.6.13\r\n- PyArrow version: 
6.0.1\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4461\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4460","id":1264644205,"node_id":"PR_kwDODunzps45UHIs","number":4460,"title":"Drop Python 3.6 support","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I've disabled the `test_dummy_dataset_serialize_s3` tests in the Linux CI to avoid the failures (these tests only fail on Windows in 3.6). These failures are unrelated to this PR's changes, and I would like to address this in a new PR.","[This comment](https:\/\/github.com\/pytorch\/audio\/issues\/2363#issuecomment-1179089175) explains the issue with MP3 decoding in `torchaudio` in the latest release (supports Python 3.7+). I fixed CI by pinning `torchaudio` to `<0.12.0`. Another way to fix this issue is by installing `ffmpeg` with conda or using the unofficial GH action. But I don't think it's worth making CI more complex, considering we can wait for the soundfile release, which should bring MP3 decoding, and drop the `torchaudio` dependency then.","Yay for dropping Python 3.6!","I think we can merge in this state. Also, if an env has Python version < 3.7 installed, we raise a warning, so I don't think we even need to create (and pin) an issue to notify the contributors of this change."],"created_at":1654690218000,"updated_at":1658862999000,"closed_at":1658862261000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Remove the fallback imports\/checks in the code needed for Python 3.6 and update the requirements\/CI files. 
Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4460\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4460","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4460","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4460.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4460.patch","merged_at":1658862261000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4459","id":1264636481,"node_id":"PR_kwDODunzps45UFc8","number":4459,"title":"Add and fix language tags for udhr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654689822000,"updated_at":1654691784000,"closed_at":1654691233000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to 
#4362.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4459\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4459","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4459","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4459.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4459.patch","merged_at":1654691233000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4457","id":1263531911,"node_id":"PR_kwDODunzps45QZCU","number":4457,"title":"First draft of the docs for TF + Datasets","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Some links are still missing I think :)","This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.","Thanks ! 
We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release"],"created_at":1654618008000,"updated_at":1655222921000,"closed_at":1655222348000,"author_association":"MEMBER","active_lock_reason":null,"body":"I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":1,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4457\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4457","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4457","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4457.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4457.patch","merged_at":1655222348000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4456","id":1263241449,"node_id":"I_kwDODunzps5LS4jp","number":4456,"title":"Workflow for Tabular data","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn\/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes 
three. Maybe somehow avoiding this for tabular datasets would be good later on (it's just a UX improvement). "],"created_at":1654606102000,"updated_at":1656669467000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Tabular data are treated very differently from data for NLP, audio, vision, etc. and therefore the workflow for tabular data in `datasets` is not ideal.\r\n\r\nFor example for tabular data, it is common to use pandas\/spark\/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model.\r\n\r\nIn `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y.\r\n\r\nRight now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data:\r\n- be able to load the data into X and y\r\n- be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3\/GCS\/HDFS etc.)\r\n- support \"unsplit\" datasets explicitly, instead of putting everything in \"train\" by default\r\n\r\ncc @adrinjalali @merveenoyan feel free to complete\/correct this :)\r\n\r\nFeel free to also share ideas of APIs that would be super intuitive in your opinion !","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":1,"confused":0,"heart":0,"rocket":0,"eyes":1},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4456\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4455","id":1263089067,"node_id":"PR_kwDODunzps45O5F9","number":4455,"title":"Update data URLs in fever 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654598454000,"updated_at":1654673094000,"closed_at":1654672577000,"author_association":"MEMBER","active_lock_reason":null,"body":"As stated in their website, data owners updated their URLs on 28\/04\/2022.\r\n\r\nThis PR updates the data URLs.\r\n\r\nFix #4452.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4455\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4455","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4455","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4455.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4455.patch","merged_at":1654672576000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4454","id":1262674973,"node_id":"I_kwDODunzps5LQuQd","number":4454,"title":"Dataset Viewer issue for 
Yaxin\/SemEval2015","user":{"login":"WithYouTo","id":18160852,"node_id":"MDQ6VXNlcjE4MTYwODUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18160852?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/WithYouTo","html_url":"https:\/\/github.com\/WithYouTo","followers_url":"https:\/\/api.github.com\/users\/WithYouTo\/followers","following_url":"https:\/\/api.github.com\/users\/WithYouTo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/WithYouTo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/WithYouTo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/WithYouTo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/WithYouTo\/orgs","repos_url":"https:\/\/api.github.com\/users\/WithYouTo\/repos","events_url":"https:\/\/api.github.com\/users\/WithYouTo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/WithYouTo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"},{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Closing since it's a duplicate of 
https:\/\/github.com\/huggingface\/datasets\/issues\/4453"],"created_at":1654572706000,"updated_at":1654602791000,"closed_at":1654602791000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\nthe link could not be visited\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4454\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4453","id":1262674105,"node_id":"I_kwDODunzps5LQuC5","number":4453,"title":"Dataset Viewer issue for Yaxin\/SemEval2015","user":{"login":"WithYouTo","id":18160852,"node_id":"MDQ6VXNlcjE4MTYwODUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18160852?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/WithYouTo","html_url":"https:\/\/github.com\/WithYouTo","followers_url":"https:\/\/api.github.com\/users\/WithYouTo\/followers","following_url":"https:\/\/api.github.com\/users\/WithYouTo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/WithYouTo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/WithYouTo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/WithYouTo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/WithYouTo\/orgs","repos_url":"https:\/\/api.github.com\/users\/WithYouTo\/repos","events_url":"https:\/\/api.github.com\/users\/WithYouTo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/WithYouTo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.gi
thubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https:\/\/raw.githubusercontent.com\/YaxinCui\/ABSADataset\/main\/SemEval2015Task12Corrected\/train\/restaurants_train.xml'\r\n```","`xml.dom.minidom.parse` is not supported in streaming mode. 
I opened a PR here to fix it:\r\nhttps:\/\/huggingface.co\/datasets\/Yaxin\/SemEval2015\/discussions\/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works!","Additionally, I'm patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."],"created_at":1654572608000,"updated_at":1654763656000,"closed_at":1654763656000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4453\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4452","id":1262529654,"node_id":"I_kwDODunzps5LQKx2","number":4452,"title":"Trying to load FEVER dataset results in NonMatchingChecksumError","user":{"login":"santhnm2","id":5347982,"node_id":"MDQ6VXNlcjUzNDc5ODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5347982?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/santhnm2","html_url":"https:\/\/github.com\/santhnm2","followers_url":"https:\/\/api.github.com\/users\/santhnm2\/followers","following_url":"https:\/\/api.github.com\/users\/santhnm2\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/santhnm2\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/santhnm2\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/santhnm2\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/santhnm2\/orgs","repos_url":"https:\/\/api.github.com\/users\/santhnm2\/repos","events_url":"https:\/\/api.github.com\/users\/santhnm2\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/santhnm2\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work)."],"created_at":1654557195000,"updated_at":1654672576000,"closed_at":1654672576000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nTrying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`.\r\n\r\nI tried with `download_mode=\"force_redownload\"` but that did not fix the error. 
I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError\r\ndataset = load_dataset('fever', 'v1.0', download_mode=\"force_redownload\") # Fails with NonMatchingChecksumError\r\ndataset = load_dataset('fever', 'v1.0', ignore_verification=True) # Fails with JSONDecodeError\r\n```\r\n\r\n## Expected results\r\nI expect this call to return with no error raised.\r\n\r\n## Actual results\r\nWith `ignore_verification=False`:\r\n```\r\n*** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/train.jsonl', 'https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/shared_task_dev.jsonl', 'https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/shared_task_dev_public.jsonl', 'https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/shared_task_test.jsonl', 'https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/paper_dev.jsonl', 'https:\/\/s3-eu-west-1.amazonaws.com\/fever.public\/paper_test.jsonl']\r\n```\r\nWith `ignore_verification=True`:\r\n```\r\n*** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.3.dev0\r\n- Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4452\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4451","id":1262103323,"node_id":"PR_kwDODunzps45LkGc","number":4451,"title":"Use newer version of multi-news with 
fixes","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Awesome thanks @mariosasko!"],"created_at":1654534628000,"updated_at":1654623601000,"closed_at":1654622084000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Closes #4430.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4451\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4451","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4451","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4451.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4451.patch","merged_at":1654622084000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4450","id":1261878324,"node_id":"PR_kwDODunzps45Kzwh","number":4450,"title":"Update README.md of 
fquad","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654523561000,"updated_at":1654527109000,"closed_at":1654526583000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4450\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4450","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4450","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4450.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4450.patch","merged_at":1654526583000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4449","id":1261262326,"node_id":"I_kwDODunzps5LLVX2","number":4449,"title":"Rj","user":{"login":"Aeckard45","id":87345839,"node_id":"MDQ6VXNlcjg3MzQ1ODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/87345839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aeckard45","html_url":"https:\/\/github.com\/Aeckard45","followers_url":"https:\/\/api.github.com\/users\/Aeckard45\/followers","following_url":"https:\/\/api.github.com\/users\/Aeckard45\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aeckard45\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aeckard45\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aeckard45\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aeckard45\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aeckard45\/repos","events_url":"https:\/\/api.github.com\/users\/Aeckard45\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aeckard45\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1654482272000,"updated_at":1654530290000,"closed_at":1654530290000,"author_association":"NONE","active_lock_reason":null,"body":"import android.content.DialogInterface;\nimport android.database.Cursor;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.ArrayAdapter;\nimport android.widget.Button;\nimport android.widget.EditText;\nimport android.widget.Toast;\n\nimport androidx.appcompat.app.AlertDialog;\nimport androidx.appcompat.app.AppCompatActivity;\n\npublic class MainActivity extends AppCompatActivity {\n\n\n private EditText editTextID;\n private EditText editTextName;\n private EditText editTextNum;\n\n private String name;\n private int number;\n private String ID;\n\n private dbHelper db;\n\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n\n db = new dbHelper(this);\n\n editTextID = findViewById(R.id.editText1);\n editTextName = findViewById(R.id.editText2);\n editTextNum = findViewById(R.id.editText3);\n\n Button buttonSave = findViewById(R.id.button);\n Button buttonRead = findViewById(R.id.button2);\n Button buttonUpdate = findViewById(R.id.button3);\n Button buttonDelete = findViewById(R.id.button4);\n Button buttonSearch = findViewById(R.id.button5);\n Button buttonDeleteAll = findViewById(R.id.button6);\n\n buttonSave.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n name = editTextName.getText().toString();\n\n\n String num = editTextNum.getText().toString();\n\n if (name.isEmpty() || num.isEmpty()) {\n\n Toast.makeText(MainActivity.this, \"Cannot Submit Empty Fields\", Toast.LENGTH_SHORT).show();\n } else {\n number = Integer.parseInt(num);\n\n\n try {\n \/\/ Insert Data\n db.insertData(name, number);\n\n \/\/ Clear the fields\n 
editTextID.getText().clear();\n editTextName.getText().clear();\n editTextNum.getText().clear();\n\n } catch (Exception e) {\n e.printStackTrace();\n }\n }\n\n }\n });\n\n buttonRead.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n final ArrayAdapter adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1);\n String name;\n String num;\n String id;\n\n try {\n\n Cursor cursor = db.readData();\n if (cursor != null && cursor.getCount() > 0) {\n\n while (cursor.moveToNext()) {\n\n id = cursor.getString(0); \/\/ get data in column index 0\n name = cursor.getString(1); \/\/ get data in column index 1\n num = cursor.getString(2); \/\/ get data in column index 2\n\n \/\/ Add SQLite data to listView\n adapter.add(\"ID :- \" + id + \"\\n\" +\n \"Name :- \" + name + \"\\n\" +\n \"Number :- \" + num + \"\\n\\n\");\n\n\n }\n\n\n } else {\n\n adapter.add(\"No Data\");\n }\n cursor.close();\n\n\n } catch (Exception e) {\n e.printStackTrace();\n }\n\n\n \/\/ show the saved data in alertDialog\n AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);\n builder.setTitle(\"SQLite saved data\");\n builder.setIcon(R.mipmap.app_icon_foreground);\n builder.setAdapter(adapter, new DialogInterface.OnClickListener() {\n @Override\n public void onClick(DialogInterface dialog, int which) {\n\n }\n });\n\n builder.setPositiveButton(\"OK\", new DialogInterface.OnClickListener() {\n @Override\n public void onClick(DialogInterface dialog, int which) {\n\n dialog.cancel();\n }\n });\n\n AlertDialog dialog = builder.create();\n dialog.show();\n\n\n }\n });\n\n buttonUpdate.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n name = editTextName.getText().toString();\n\n String num = editTextNum.getText().toString();\n ID = editTextID.getText().toString();\n\n if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) {\n\n Toast.makeText(MainActivity.this, \"Cannot Submit Empty Fields\", Toast.LENGTH_SHORT).show();\n } else {\n number = Integer.parseInt(num);\n\n\n try {\n \/\/ Update Data\n db.updateData(ID, name, number);\n\n \/\/ Clear the fields\n editTextID.getText().clear();\n editTextName.getText().clear();\n editTextNum.getText().clear();\n\n } catch (Exception e) {\n e.printStackTrace();\n }\n }\n\n\n }\n });\n\n buttonDelete.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n ID = editTextID.getText().toString();\n\n if (ID.isEmpty()) {\n\n Toast.makeText(MainActivity.this, \"Please enter the ID\", Toast.LENGTH_SHORT).show();\n } else {\n\n\n try {\n \/\/ Delete Data\n db.deleteData(ID);\n\n \/\/ Clear the fields\n editTextID.getText().clear();\n editTextName.getText().clear();\n editTextNum.getText().clear();\n\n } catch (Exception e) {\n e.printStackTrace();\n }\n }\n\n\n }\n });\n\n buttonDeleteAll.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n \/\/ Delete all data\n \/\/ You can simply delete all the data by calling this method --> db.deleteAllData();\n \/\/ You can try this also\n AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);\n builder.setIcon(R.mipmap.app_icon_foreground);\n builder.setTitle(\"Delete All Data\");\n builder.setCancelable(false);\n builder.setMessage(\"Do you really need to delete your all data ?\");\n builder.setPositiveButton(\"Yes\", new DialogInterface.OnClickListener() {\n @Override\n public void onClick(DialogInterface dialog, int which) 
{\n\n \/\/ User confirmed , now you can delete the data\n db.deleteAllData();\n\n \/\/ Clear the fields\n editTextID.getText().clear();\n editTextName.getText().clear();\n editTextNum.getText().clear();\n }\n });\n builder.setNegativeButton(\"No\", new DialogInterface.OnClickListener() {\n @Override\n public void onClick(DialogInterface dialog, int which) {\n\n \/\/ user not confirmed\n dialog.cancel();\n }\n });\n\n AlertDialog dialog = builder.create();\n dialog.show();\n\n }\n });\n\n buttonSearch.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n\n ID = editTextID.getText().toString();\n\n if (ID.isEmpty()) {\n\n Toast.makeText(MainActivity.this, \"Please enter the ID\", Toast.LENGTH_SHORT).show();\n } else {\n\n\n try {\n \/\/ Search data\n Cursor cursor = db.searchData(ID);\n if (cursor.moveToFirst()) {\n\n editTextName.setText(cursor.getString(1));\n editTextNum.setText(cursor.getString(2));\n Toast.makeText(MainActivity.this, \"Data successfully searched\", Toast.LENGTH_SHORT).show();\n\n } else {\n Toast.makeText(MainActivity.this, \"ID not found\", Toast.LENGTH_SHORT).show();\n\n editTextNum.setText(\"ID Not found\");\n editTextName.setText(\"ID not found\");\n\n }\n\n\n cursor.close();\n\n\n } catch (Exception e) {\n e.printStackTrace();\n\n }\n }\n\n }\n });\n }\n}","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4449\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4448","id":1260966129,"node_id":"I_kwDODunzps5LKNDx","number":4448,"title":"New Preprocessing Feature - Deduplication 
[Request]","user":{"login":"yuvalkirstain","id":57996478,"node_id":"MDQ6VXNlcjU3OTk2NDc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57996478?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yuvalkirstain","html_url":"https:\/\/github.com\/yuvalkirstain","followers_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/followers","following_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/orgs","repos_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/repos","events_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yuvalkirstain\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"},{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! The [datasets_sql](https:\/\/github.com\/mariosasko\/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https:\/\/github.com\/huggingface\/datasets\/issues\/2514)"],"created_at":1654407176000,"updated_at":1654442067000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nMany large datasets are full of duplicates, and it has been shown that deduplicating datasets can lead to better performance during training and more reliable evaluation at test time.\r\n\r\nA feature that allows one to easily deduplicate a dataset would be very useful!\r\n\r\n**Describe the solution you'd like**\r\nWe could define a key function and keep only the first\/last data point for each value this function yields.\r\n\r\n**Describe alternatives you've considered**\r\nThe obvious alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4448\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4447","id":1260041805,"node_id":"PR_kwDODunzps45E4A-","number":4447,"title":"Minor fixes\/improvements in `scene_parse_150` card","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654269754000,"updated_at":1654530625000,"closed_at":1654530097000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add `paperswithcode_id` and fix some links in the `scene_parse_150` 
card.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4447\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4447","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4447","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4447.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4447.patch","merged_at":1654530097000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4446","id":1260028995,"node_id":"PR_kwDODunzps45E1Qb","number":4446,"title":"Add missing kwargs to docstrings","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654269027000,"updated_at":1654272609000,"closed_at":1654272089000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4446\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4446","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4446","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4446.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4446.patch","merged_at":1654272089000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4445","id":1259947568,"node_id":"PR_kwDODunzps45EjtA","number":4445,"title":"Fix missing args in docstring of load_dataset_builder","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654264550000,"updated_at":1654266932000,"closed_at":1654266429000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other):\r\n- https:\/\/huggingface.co\/docs\/datasets\/v2.2.1\/en\/package_reference\/loading_methods#datasets.load_dataset_builder.path","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4445\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4445","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4445","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4445.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4445.patch","merged_at":1654266429000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4444","id":1259738209,"node_id":"PR_kwDODunzps45D2XX","number":4444,"title":"Fix kwargs in docstrings","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654252142000,"updated_at":1654254088000,"closed_at":1654253566000,"author_association":"MEMBER","active_lock_reason":null,"body":"To fix the rendering of `**kwargs` in docstrings, a parentheses must be added afterwards.\r\n\r\nSee:\r\n- huggingface\/doc-builder\/issues\/235","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4444\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4444","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4444","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4444.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4444.patch","merged_at":1654253566000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4443","id":1259606334,"node_id":"I_kwDODunzps5LFBE-","number":4443,"title":"Dataset Viewer issue for 
openclimatefix\/nimrod-uk-1km","user":{"login":"ZYMXIXI","id":32382826,"node_id":"MDQ6VXNlcjMyMzgyODI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32382826?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZYMXIXI","html_url":"https:\/\/github.com\/ZYMXIXI","followers_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/followers","following_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/repos","events_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZYMXIXI\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If I understand correctly, this is due to the key `split` missing in the line https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/blob\/main\/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.","I'm having a look.","Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/blob\/main\/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/blob\/main\/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_URL` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/blob\/main\/nimrod-uk-1km.py#L260\r\n - \"test\" key: https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/blob\/main\/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```","Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km","Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff a while ago, which is what broke it. 
","I've opened a Discussion page, so that we can ask\/answer and propose fixes until the script works properly: https:\/\/huggingface.co\/datasets\/openclimatefix\/nimrod-uk-1km\/discussions\/1\r\n\r\nCC: @julien-c @jacobbieker "],"created_at":1654244236000,"updated_at":1654590232000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\n_No response_\n\n### Description\n\n_No response_\n\n### Owner\n\n_No response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4443\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4442","id":1258589276,"node_id":"I_kwDODunzps5LBIxc","number":4442,"title":"Dataset Viewer issue for amazon_polarity","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks, looking at it","Not sure what happened \ud83d\ude2c, but it's fixed"],"created_at":1654197518000,"updated_at":1654627837000,"closed_at":1654627837000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/amazon_polarity\/viewer\/amazon_polarity\/test\n\n### Description\n\nFor some reason the train split is OK but the test split is not for this dataset:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: FileNotFoundError\r\nMessage: [Errno 2] No such file or directory: '\/cache\/modules\/datasets_modules\/datasets\/amazon_polarity\/__init__.py'\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4442\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4441","id":1258568656,"node_id":"I_kwDODunzps5LBDvQ","number":4441,"title":"Dataset Viewer issue for aeslc","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Not sure what happened \ud83d\ude2c, but it's fixed"],"created_at":1654196232000,"updated_at":1654627855000,"closed_at":1654627855000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/aeslc\n\n### Description\n\nThe dataset viewer can't find `dataset_infos.json` in it's cache:\r\n\r\n```\r\nServer error\r\nStatus code: 400\r\nException: FileNotFoundError\r\nMessage: [Errno 2] No such file or directory: '\/cache\/modules\/datasets_modules\/datasets\/aeslc\/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8\/dataset_infos.json'\r\n```\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4441\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4440","id":1258494469,"node_id":"PR_kwDODunzps44_io_","number":4440,"title":"Update docs around audio and vision","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> Let me know what you think, especially if we should include some code samples for training a model in the audio\/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?\r\n\r\nWe plan to address this with end-to-end examples (for each modality) more focused on preprocessing than the ones in the Transformers docs."],"created_at":1654191723000,"updated_at":1656001999000,"closed_at":1656001382000,"author_association":"MEMBER","active_lock_reason":null,"body":"As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.\r\n\r\nOther changes include:\r\n\r\n- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.\r\n- Updated the native TF code at creating a `tf.data.Dataset` because it was throwing an error. 
The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!).\r\n- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.\r\n- Reverted to the code tabs for content that don't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.\r\n\r\nLet me know what you think, especially if we should include some code samples for training a model in the audio\/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":1,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4440\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4440","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4440","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4440.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4440.patch","merged_at":1656001382000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4439","id":1258434111,"node_id":"I_kwDODunzps5LAi4_","number":4439,"title":"TIMIT won't load after manual download: Errors about files that don't exist","user":{"login":"drscotthawley","id":13925685,"node_id":"MDQ6VXNlcjEzOTI1Njg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13925685?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/drscotthawley","html_url":"https:\/\/github.com\/drscotthawley","followers_url":"https:\/\/api.github.com\/users\/drscotthawley\/followers","following_url":"https:\/\/api.github.com\/users\/drscotthawley\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/drscotthawley\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/drscotthawley\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/drscotthawley\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/drscotthawley\/orgs","repos_url":"https:\/\/api.github.com\/users\/drscotthawley\/repos","events_url":"https:\/\/api.github.com\/users\/drscotthawley\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/drscotthawley\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436","Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and\/or just wait for the next release.\r\n","I'm closing this issue then. Please, feel free to reopen it again if the problem persists."],"created_at":1654187756000,"updated_at":1654245857000,"closed_at":1654245856000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nI get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like \"**test*\" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT:\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndata = load_dataset('timit_asr', 'clean')['train']\r\n```\r\n\r\n## Expected results\r\nThe dataset should load with no errors. \r\n\r\n## Actual results\r\nThis error message:\r\n```\r\n File \"\/home\/ubuntu\/envs\/data2vec\/lib\/python3.9\/site-packages\/datasets\/data_files.py\", line 201, in resolve_patterns_locally_or_by_urls\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at \/home\/ubuntu\/datasets\/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nBut this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT dataset with \"TEST\" in them have \".DOC\" extensions? ...I wonder, how was anyone able to get this to work in the first place?\r\n\r\nThe files in the dataset look like the following: \r\n```\r\n\u2502 PHONCODE.DOC\r\n\u2502 PROMPTS.TXT\r\n\u2502 SPKRINFO.TXT\r\n\u2502 SPKRSENT.TXT\r\n\u2502 TESTSET.DOC\r\n```\r\n...so why are these being excluded by the dataset loader? 
\r\n\r\n\r\n## Environment info\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27\r\n- Python version: 3.9.9\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4439\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4438","id":1258255394,"node_id":"PR_kwDODunzps44-vhC","number":4438,"title":"Fix docstring of inspect_dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654179670000,"updated_at":1654188055000,"closed_at":1654187547000,"author_association":"MEMBER","active_lock_reason":null,"body":"As pointed out by @sgugger:\r\n- huggingface\/doc-builder\/issues\/235","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4438\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4438","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4438","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4438.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4438.patch","merged_at":1654187547000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4437","id":1258249582,"node_id":"PR_kwDODunzps44-uRW","number":4437,"title":"Add missing columns to `blended_skill_talk`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654179386000,"updated_at":1654530596000,"closed_at":1654530085000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https:\/\/github.com\/facebookresearch\/ParlAI\/blob\/main\/parlai\/tasks\/blended_skill_talk\/build.py).\r\n\r\nFix #4426 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4437\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4437","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4437","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4437.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4437.patch","merged_at":1654530085000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4436","id":1257758834,"node_id":"PR_kwDODunzps449FsU","number":4436,"title":"Fix directory names for LDC data in timit_asr 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654152304000,"updated_at":1654162376000,"closed_at":1654161867000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to:\r\n- #4422","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4436\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4436","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4436","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4436.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4436.patch","merged_at":1654161867000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4435","id":1257496552,"node_id":"I_kwDODunzps5K89_o","number":4435,"title":"Load a local cached dataset that has been 
modified","user":{"login":"mihail911","id":2789441,"node_id":"MDQ6VXNlcjI3ODk0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2789441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mihail911","html_url":"https:\/\/github.com\/mihail911","followers_url":"https:\/\/api.github.com\/users\/mihail911\/followers","following_url":"https:\/\/api.github.com\/users\/mihail911\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mihail911\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mihail911\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mihail911\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mihail911\/orgs","repos_url":"https:\/\/api.github.com\/users\/mihail911\/repos","events_url":"https:\/\/api.github.com\/users\/mihail911\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mihail911\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! `datasets` caches every modification\/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.","Awesome, thanks Mario! This works. "],"created_at":1654134709000,"updated_at":1654214366000,"closed_at":1654214358000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI have loaded a dataset as follows:\r\n```\r\nd = load_dataset(\"emotion\", split=\"validation\")\r\n```\r\nAfterwards I make some modifications to the dataset via a `map` call:\r\n```\r\nd.map(some_update_func, cache_file_name=modified_dataset)\r\n```\r\nThis generates a cached version of the dataset on my local system in the same directory as the original download of the data (\/path\/to\/cache). Running an `ls` returns:\r\n```\r\nmodified_dataset\r\ndataset_info.json \r\nemotion-test.arrow \r\nemotion-train.arrow \r\nemotion-validation.arrow\r\n```\r\nas expected. 
However, when I try to load up the modified cached dataset via a call to \r\n```\r\nmodified = load_dataset(\"emotion\", split=\"validation\", data_files=\"\/path\/to\/cache\/modified_dataset\") \r\n```\r\nit simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset:\r\n```\r\nUsing custom data configuration validation-cdbf51685638421b\r\nDownloading and preparing dataset emotion\/validation to ...\r\n```\r\n\r\nHow am I supposed to load the original modified local cache copy of the dataset?\r\n\r\n## Environment info\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4435\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4434","id":1256207321,"node_id":"PR_kwDODunzps443mAr","number":4434,"title":"Fix dummy dataset generation script for handling nested types of _URLs","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1654095195000,"updated_at":1654603708000,"closed_at":1654593849000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"It seems that when a user specifies a nested _URLs structure in their dataset script, 
an error is raised when generating the dummy dataset.\r\n\r\nI think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.\r\n\r\nLinked to issue #4428 \r\n\r\nPS: I am not sure whether my code fixes this issue in a proper way.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4434\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4434","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4434","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4434.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4434.patch","merged_at":1654593849000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4433","id":1255830758,"node_id":"PR_kwDODunzps442P5L","number":4433,"title":"Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Added back the `[:]` and a comment to explain why this is needed. 
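A minimal sketch of the workaround suggested in #4435 above, with a hypothetical cache path and an identity function standing in for the user's `some_update_func`; `map` materializes its result as an Arrow file, and `Dataset.from_file` memory-maps that file directly instead of re-downloading and re-preparing the data through `load_dataset`:

```python
from datasets import Dataset, load_dataset

d = load_dataset("emotion", split="validation")
# `cache_file_name` controls where `map` writes its Arrow cache file.
d = d.map(lambda example: example, cache_file_name="/path/to/cache/modified_dataset")

# Later (even in a fresh session), reload the modified copy straight from disk.
modified = Dataset.from_file("/path/to/cache/modified_dataset")
```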
"],"created_at":1654085396000,"updated_at":1654770894000,"closed_at":1654770367000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4348","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":1,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4433\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4433","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4433","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4433.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4433.patch","merged_at":1654770366000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4432","id":1255523720,"node_id":"PR_kwDODunzps441JmK","number":4432,"title":"Fix builder docstring","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654076730000,"updated_at":1654191827000,"closed_at":1654191315000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, the args of `DatasetBuilder` do not appear in the docs: 
https:\/\/huggingface.co\/docs\/datasets\/v2.1.0\/en\/package_reference\/builder_classes#datasets.DatasetBuilder","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4432\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4432","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4432","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4432.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4432.patch","merged_at":1654191315000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4431","id":1254618948,"node_id":"PR_kwDODunzps44x5aG","number":4431,"title":"Add personaldialog datasets","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["These test errors are related to issue #4428 \r\n","_The documentation is not available anymore as the PR was closed or merged._","I only made a trivial modification in my commit https:\/\/github.com\/huggingface\/datasets\/pull\/4431\/commits\/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the above issue.","> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https:\/\/huggingface.co\/datasets\/silver\/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?","Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https:\/\/huggingface.co\/datasets\/silver\/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on 
https:\/\/huggingface.co\/datasets\/silver\/personal_dialog as well"],"created_at":1654046440000,"updated_at":1654951223000,"closed_at":1654950676000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"It seems that all tests pass","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4431\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4431","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4431","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4431.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4431.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4430","id":1254412591,"node_id":"I_kwDODunzps5KxNEv","number":4430,"title":"Add ability to load newer, cleaner version of Multi-News ","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.","@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train\/dev\/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?","Yes! Let me help you with more detailed instructions.\r\n\r\nIn the first step, we need to update the URLs. 
One of the possible dictionary structures is as follows:\r\n```python\r\n_URLs = {\r\n \"train\": {\"src\": \"https:\/\/drive.google.com\/uc?export=download&id=1wHAWDOwOoQWSj7HYpyJ3Aeud8WhhaJ7P\", \"tgt\": \"https:\/\/drive.google.com\/uc?export=download&id=1QVgswwhVTkd3VLCzajK6eVkcrSWEK6kq\"},\r\n \"val\": ...,\r\n \"test\": ...\r\n}\r\n```\r\n\r\n(You can use this page to generate direct download links: https:\/\/sites.google.com\/site\/gdocs2direct\/)\r\n\r\nThen we move to the `split_generators` method:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n files = dl_manager.download(_URLs)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\"src_file\": files[\"train\"][\"src\"], \"tgt_file\": files[\"train\"][\"tgt\"]},\r\n ),\r\n ... # same for val and test\r\n ]\r\n```\r\nFinally, we adjust the signature of `_generate_examples`:\r\n```python\r\ndef _generate_examples(self, src_file, tgt_file):\r\n \"\"\"Yields examples.\"\"\"\r\n with open(src_file, encoding=\"utf-8\") as src_f, open(\r\n tgt_file, encoding=\"utf-8\"\r\n ) as tgt_f:\r\n ... # the rest is the same\r\n```\r\n\r\nAnd that's it!\r\n\r\nPS: Let me know if you need help updating the dummy data and regenerating the metadata file.","Awesome! Thanks for the detailed help, that was straightforward with your instructions. However, I think I am being blocked by this issue: https:\/\/github.com\/huggingface\/datasets\/issues\/4428","Feel free to open a PR, and I can fix this manually.","Awesome, done in #4451!"],"created_at":1654030844000,"updated_at":1654622084000,"closed_at":1654622084000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nThe [Multi-News dataloader points to the original version of the Multi-News dataset](https:\/\/github.com\/huggingface\/datasets\/blob\/12540dd75015678ec6019f258d811ee107439a73\/datasets\/multi_news\/multi_news.py#L47), but this has [known errors in it](https:\/\/github.com\/Alex-Fabbri\/Multi-News\/issues\/11). 
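A minimal sketch of the revision-based versioning mentioned at the start of #4430 above; the revision value is hypothetical. Pinning `revision` in `load_dataset` keeps old results reproducible even after a script's URLs are updated:

```python
from datasets import load_dataset

# Load the loading script as of a specific Git revision (tag, branch, or sha).
multi_news = load_dataset("multi_news", revision="master")
```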
There exists a [newer version which fixes some of these issues](https:\/\/drive.google.com\/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).\r\n\r\nUnfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility.\r\n\r\n**Describe the solution you'd like**\r\n\r\nAdd a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nReplace the current URL to the original version to the dataset with the URL to the version with fixes.\r\n\r\n**Additional context**\r\n\r\nWould be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4430\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4429","id":1254184358,"node_id":"PR_kwDODunzps44whxN","number":4429,"title":"Update builder docstring for deprecated\/added arguments","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@mishig25 is investigating why deprecated\/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4429\/en\/package_reference\/builder_classes#datasets.DatasetBuilder","@albertvillanova please check now \ud83d\udc4d 
\r\nhttps:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4429\/en\/package_reference\/builder_classes#datasets.DatasetBuilder\r\n\r\n(screenshot)\r\n","Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?","> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected \ud83d\ude0a because the deprecated box is being bounded by its parent box (the box for the `name` argument in the screenshot above)"],"created_at":1654018645000,"updated_at":1654688418000,"closed_at":1654687905000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR updates the builder docstring with deprecated\/added directives for arguments name\/config_name.\r\n\r\nFollow up of:\r\n- #4414 \r\n- huggingface\/doc-builder#233\r\n\r\nFirst merge:\r\n- #4432","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4429\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4429","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4429","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4429.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4429.patch","merged_at":1654687905000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4428","id":1254092818,"node_id":"I_kwDODunzps5Kv_AS","number":4428,"title":"Errors when building dummy data if you use nested _URLS","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1654013457000,"updated_at":1654593849000,"closed_at":1654593849000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nWhen making dummy data with the `datasets-cli dummy_data` tool,\r\nan error will be raised if you use a nested _URLS in your dataset script.\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/commands\/datasets_cli.py\", line 43, in \r\n main()\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/commands\/datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 311, in run\r\n self._autogenerate_dummy_data(\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 337, in _autogenerate_dummy_data\r\n dataset_builder._split_generators(dl_manager)\r\n File \"\/home\/name\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/personal_dialog\/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77\/personal_dialog.py\", line 108, in _split_generators\r\n data_dir = dl_manager.download_and_extract(urls)\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 56, in download_and_extract\r\n dummy_output = self.mock_download_manager.download(url_or_urls)\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/download\/mock_download_manager.py\", line 130, in download\r\n return self.download_and_extract(data_url)\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/download\/mock_download_manager.py\", line 122, in download_and_extract\r\n return self.create_dummy_data_dict(dummy_file, data_url)\r\n File \"\/home\/name\/LCCC\/datasets\/src\/datasets\/download\/mock_download_manager.py\", line 165, in create_dummy_data_dict\r\n if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):\r\nTypeError: unhashable type: 'list'\r\n\r\n\r\n## Steps to reproduce the bug\r\n\r\nYou can use my dataset script implemented here:\r\nhttps:\/\/github.com\/silverriver\/datasets\/blob\/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd\/datasets\/personal_dialog\/personal_dialog.py\r\n\r\n```bash\r\ndatasets-cli dummy_data datasets\/personal_dialog --auto_generate\r\n```\r\n\r\nYou can change https:\/\/github.com\/silverriver\/datasets\/blob\/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd\/datasets\/personal_dialog\/personal_dialog.py#L54\r\nto \r\n\r\n```\r\n\"train\": \"https:\/\/huggingface.co\/datasets\/silver\/personal_dialog\/resolve\/main\/dev_random.jsonl.gz\"\r\n```\r\n\r\nbefore running the above script to avoid downloading the large training data.\r\n\r\n\r\n## Expected results\r\nThe dummy data should be generated\r\n\r\n## Actual results\r\nAn error is raised.\r\n\r\nIt seems that in https:\/\/github.com\/huggingface\/datasets\/blob\/12540dd75015678ec6019f258d811ee107439a73\/src\/datasets\/download\/mock_download_manager.py#L165\r\nwe only check whether the first item of dummy_data_dict.values() is a str.\r\nHowever, dummy_data_dict.values() may have mixed types, e.g. [str, list, list].\r\nA simple fix would be changing https:\/\/github.com\/huggingface\/datasets\/blob\/12540dd75015678ec6019f258d811ee107439a73\/src\/datasets\/download\/mock_download_manager.py#L165 to\r\n\r\n```python\r\nif all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < 
len(dummy_data_dict.values()):\r\n```\r\n\r\nBut I don't know if this kind of change may bring any side effects, since I am not sure about the detailed logic here.\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Linux\r\n- Python version: Python 3.9.10\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4428\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4427","id":1253959313,"node_id":"PR_kwDODunzps44vyGg","number":4427,"title":"Add HF.co for PRs\/Issues for specific datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1654007481000,"updated_at":1654087062000,"closed_at":1654086542000,"author_association":"MEMBER","active_lock_reason":null,"body":"As in https:\/\/github.com\/huggingface\/transformers\/pull\/17485, issues and PRs for datasets under a namespace have to be on the HF Hub","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4427\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4427","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4427","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4427.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4427.patch","merged_at":1654086542000},"is_pull_request":true} 
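A minimal sketch, with toy values, of the failure mode and the guard proposed in #4428 above; `mock_download_manager.py` builds a dict like this from the script's `_URLS`:

```python
# Nested _URLS can leave a mix of str and list values in the dummy data dict.
dummy_data_dict = {
    "train": "dummy/train.jsonl.gz",
    "test": ["dummy/test_a.jsonl.gz", "dummy/test_b.jsonl.gz"],
}

# The original check calls set() over the values; a list among them raises
# TypeError: unhashable type: 'list', as in the traceback above. The proposed
# guard only deduplicates when every value is a string:
if all(isinstance(value, str) for value in dummy_data_dict.values()) and len(
    set(dummy_data_dict.values())
) < len(dummy_data_dict.values()):
    print("colliding dummy file names would be disambiguated here")
```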
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4426","id":1253887311,"node_id":"I_kwDODunzps5KvM1P","number":4426,"title":"Add loading variable number of columns for different splits","user":{"login":"DrMatters","id":22641583,"node_id":"MDQ6VXNlcjIyNjQxNTgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22641583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DrMatters","html_url":"https:\/\/github.com\/DrMatters","followers_url":"https:\/\/api.github.com\/users\/DrMatters\/followers","following_url":"https:\/\/api.github.com\/users\/DrMatters\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DrMatters\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DrMatters\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DrMatters\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DrMatters\/orgs","repos_url":"https:\/\/api.github.com\/users\/DrMatters\/repos","events_url":"https:\/\/api.github.com\/users\/DrMatters\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DrMatters\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "],"created_at":1654004416000,"updated_at":1654273525000,"closed_at":1654273525000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nThe original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test\/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have.\r\nWhen loading such data, an exception occurs at table.py:cast_table_to_schema, because of mismatched columns.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4426\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4425","id":1253641604,"node_id":"PR_kwDODunzps44uuDq","number":4425,"title":"Make extensions case-insensitive in timit_asr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653991804000,"updated_at":1654092930000,"closed_at":1654092411000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to #4422.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4425\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4425","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4425","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4425.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4425.patch","merged_at":1654092411000},"is_pull_request":true} 
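A minimal sketch (with hypothetical field names) of the fix pattern applied in #4437 for #4426 above: yield the split-dependent columns in every split, with an empty default where the raw files lack them, so all splits share one schema and `cast_table_to_schema` no longer fails:

```python
import json

def _generate_examples(filepath):
    """Yields examples with the same schema for every split."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            row = json.loads(line)
            yield idx, {
                "dialog": row["dialog"],
                # Only the valid/test files carry this column; default it for train.
                "label_candidates": row.get("label_candidates", []),
            }
```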
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4424","id":1253542488,"node_id":"PR_kwDODunzps44uZBD","number":4424,"title":"Fix DuplicatedKeysError in timit_asr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653986865000,"updated_at":1654005050000,"closed_at":1654004551000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #4422.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4424\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4424","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4424","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4424.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4424.patch","merged_at":1654004551000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4423","id":1253326023,"node_id":"PR_kwDODunzps44trdP","number":4423,"title":"Add new dataset 
MMChat","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks ! As for https:\/\/github.com\/huggingface\/datasets\/pull\/4431 please also update the licensing section in https:\/\/huggingface.co\/datasets\/silver\/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR"],"created_at":1653972307000,"updated_at":1654951252000,"closed_at":1654950702000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi, I am adding a new dataset MMChat. \r\nIt seems that all tests pass","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4423\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4423","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4423","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4423.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4423.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4422","id":1253146511,"node_id":"I_kwDODunzps5KsX-P","number":4422,"title":"Cannot load timit_asr data 
set","user":{"login":"bhaddow","id":992795,"node_id":"MDQ6VXNlcjk5Mjc5NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/992795?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhaddow","html_url":"https:\/\/github.com\/bhaddow","followers_url":"https:\/\/api.github.com\/users\/bhaddow\/followers","following_url":"https:\/\/api.github.com\/users\/bhaddow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhaddow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhaddow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhaddow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhaddow\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhaddow\/repos","events_url":"https:\/\/api.github.com\/users\/bhaddow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhaddow\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.","Thanks for the quick fix!","@bhaddow we have also made a fix so that you don't have to convert to uppercase the file 
extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ","Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.","Ah, if I change the train\/ and test\/ directories to TRAIN\/ and TEST\/ then it works!","Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN\/train and TEST\/test directory names."],"created_at":1653948022000,"updated_at":1654151645000,"closed_at":1654004551000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a \"duplicate key\" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ntimit = datasets.load_dataset(\"timit_asr\", data_dir = \"\/path\/to\/dataset\")\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\n\r\nThe data set should load without error. It worked for me before the LDC url change.\r\n\r\n## Actual results\r\n```\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: SA1\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4422\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4421","id":1253059467,"node_id":"PR_kwDODunzps44szxR","number":4421,"title":"Add extractor for bzip2-compressed 
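The record above (#4422) hinges on the uniqueness constraint that `GeneratorBasedBuilder._generate_examples` places on keys. A minimal sketch of the idea, assuming path-based keying (illustrative only, not the actual timit_asr patch merged in #4424):

```python
import os

# DuplicatedKeysError means two examples were yielded under the same key.
# Keying on the bare utterance ID ("SA1") collides across speakers; keying
# on the relative file path keeps keys unique and deterministic.
def _generate_examples(audio_paths):
    for audio_path in audio_paths:
        key = os.path.splitext(audio_path)[0]  # e.g. "TRAIN/DR1/FCJF0/SA1"
        yield key, {"file": audio_path, "id": os.path.basename(key)}
```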
files","user":{"login":"asivokon","id":2910707,"node_id":"MDQ6VXNlcjI5MTA3MDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2910707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/asivokon","html_url":"https:\/\/github.com\/asivokon","followers_url":"https:\/\/api.github.com\/users\/asivokon\/followers","following_url":"https:\/\/api.github.com\/users\/asivokon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/asivokon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/asivokon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/asivokon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/asivokon\/orgs","repos_url":"https:\/\/api.github.com\/users\/asivokon\/repos","events_url":"https:\/\/api.github.com\/users\/asivokon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/asivokon\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653938380000,"updated_at":1654528970000,"closed_at":1654528970000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This change enables loading bzipped datasets, just like any other compressed dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4421\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4421","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4421","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4421.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4421.patch","merged_at":1654528969000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4420","id":1252739239,"node_id":"I_kwDODunzps5Kq0in","number":4420,"title":"Metric evaluation problems in multi-node, shared file 
system","user":{"login":"gullabi","id":40303490,"node_id":"MDQ6VXNlcjQwMzAzNDkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40303490?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gullabi","html_url":"https:\/\/github.com\/gullabi","followers_url":"https:\/\/api.github.com\/users\/gullabi\/followers","following_url":"https:\/\/api.github.com\/users\/gullabi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gullabi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gullabi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gullabi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gullabi\/orgs","repos_url":"https:\/\/api.github.com\/users\/gullabi\/repos","events_url":"https:\/\/api.github.com\/users\/gullabi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gullabi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions\/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n","We've finally been able to isolate the problem, it wasn't a timing problem, but rather a file locking one. \r\nThe locks produced by calling `flock` where not visible between nodes (so the master node couldn't check other node's locks nor the other way around). \r\n\r\nWe are now having issues with the pre-processing in our runner script, but are not related with the rendezvous process during the evaluation phase. We will let you know about it once we address it. \r\n\r\nOur solution to the rendezvous is as follows:\r\n- We solved the problem by calling `lockf` instead of `flock`.\r\n- We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one). \r\n\r\nWe use a shared file system between nodes using GPFS in our cluster setup. Maybe the difference between the behavior we see with respect to your usage in multi-node executions comes from that fact. Which file system scheme do you use for the multi-node executions? \r\n\r\n`lockf` seems to work in more settings than `flock`, so maybe we could write a PR so you could test it in your environment. 
","Cool, I'm glad you managed to make evaluation work :)\r\n\r\nI'm not completely aware of the differences between lockf and flock, but I've read somewhere that flock is preferable over lockf in multithreading and multiprocessing situations. Here we definitely are in such a situation so unless it is super important I don't think we will switch to lockf"],"created_at":1653917045000,"updated_at":1654693385000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nMetric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https:\/\/github.com\/huggingface\/transformers\/issues\/17412)\r\n\r\n## Steps to reproduce the bug\r\n\r\n1. clone [this huggingface model](https:\/\/huggingface.co\/PereLluis13\/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https:\/\/gist.github.com\/gullabi\/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).\r\n2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`\r\n3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https:\/\/gist.github.com\/gullabi\/3f66094caa8db1c1e615dd35bd67ec71)\r\n\r\nSpecifically for the datasets, for the distributed setup the `load_metric` is called as:\r\n```\r\n process_id=int(os.environ[\"RANK\"])\r\n num_process=int(os.environ[\"WORLD_SIZE\"])\r\n eval_metrics = {metric: load_metric(metric,\r\n process_id=process_id,\r\n num_process=num_process,\r\n experiment_id=\"slurm\")\r\n for metric in data_args.eval_metrics}\r\n```\r\n\r\n## Expected results\r\nThe training should not fail, due to the failure of the `Metric.compute()` step.\r\n\r\n## Actual results\r\nFor the test I am executing the world size is 4, with 2 GPUs in 2 nodes. 
However the process is not finding the necessary lock files \r\n```\r\n File \"\/gpfs\/projects\/bsc88\/speech\/asr\/wav2vec2-xls-r-300m-ca-lm\/run_speech_recognition_ctc.py\", line 841, in \r\n main()\r\n File \"\/gpfs\/projects\/bsc88\/speech\/asr\/wav2vec2-xls-r-300m-ca-lm\/run_speech_recognition_ctc.py\", line 792, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/transformers\/trainer.py\", line 1497, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/transformers\/trainer.py\", line 1624, in _maybe_log_save_evaluate\r\n metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/transformers\/trainer.py\", line 2291, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/transformers\/trainer.py\", line 2535, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n File \"\/gpfs\/projects\/bsc88\/speech\/asr\/wav2vec2-xls-r-300m-ca-lm\/run_speech_recognition_ctc.py\", line 742, in compute_metrics\r\n metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}\r\n File \"\/gpfs\/projects\/bsc88\/speech\/asr\/wav2vec2-xls-r-300m-ca-lm\/run_speech_recognition_ctc.py\", line 742, in \r\n metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 419, in compute\r\n self.add_batch(**inputs)\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 465, in add_batch\r\n self._init_writer()\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 552, in _init_writer\r\n self._check_rendez_vous() # wait for master to be ready and to let everyone go\r\n File \"\/gpfs\/projects\/bsc88\/projects\/speech-tech-resources\/venv_amd_speech\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 342, in _check_rendez_vous\r\n ) from None\r\nValueError: Expected to find locked file \/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-0.arrow.lock from process 3 but it doesn't exist.\r\n```\r\n\r\nWhen I look at the cache directory, I can see all the lock files in 
principle:\r\n```\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-0.arrow\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-0.arrow.lock\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-1.arrow\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-1.arrow.lock\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-2.arrow\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-2.arrow.lock\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-3.arrow\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-3.arrow.lock\r\n\/home\/bsc88\/bsc88474\/.cache\/huggingface\/metrics\/wer\/default\/slurm-4-rdv.lock\r\n```\r\n\r\nI see that there was another related issue here https:\/\/github.com\/huggingface\/datasets\/issues\/1942, but it seems to have resolved via https:\/\/github.com\/huggingface\/datasets\/pull\/1966. Let me know if there is problem with how I am calling the `load_metric` or whether I need to make changes to the `.compute()` steps.\r\n\r\n## Environment info\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core\r\n- Python version: 3.7.4\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.0\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4420\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4419","id":1252652896,"node_id":"I_kwDODunzps5Kqfdg","number":4419,"title":"Update `unittest` assertions over tuples from `assertEqual` to 
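The flock/lockf distinction discussed in #4420 above comes down to BSD `flock(2)` locks often not propagating across nodes on network file systems (NFS, GPFS), while POSIX record locks (`fcntl(2)`, exposed in Python as `fcntl.lockf`) generally do. A minimal sketch of the reporter's workaround, assuming a POSIX system:

```python
import fcntl
import os

# POSIX record lock: visible to other nodes on most cluster file systems,
# unlike fcntl.flock on the same file.
fd = os.open("shared.lock", os.O_RDWR | os.O_CREAT)
try:
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking exclusive lock
    pass  # critical section: write predictions/references
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```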
`assertTupleEqual`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.","Hi @mariosasko, right! I'll update the issue title\/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps:\/\/docs.python.org\/3\/library\/unittest.html#unittest.TestCase.assertTupleEqual","I thought we were supposed to move gradually from `unittest` to `pytest`..."],"created_at":1653912798000,"updated_at":1655286304000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nSo this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https:\/\/docs.python.org\/3\/library\/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.\r\n\r\nFind an example of an `assertEqual` over a tuple in \ud83e\udd17 `datasets` unit tests over an `ArrowDataset` at https:\/\/github.com\/huggingface\/datasets\/blob\/0bb47271910c8a0b628dba157988372307fca1d2\/tests\/test_arrow_dataset.py#L570\r\n\r\n**Describe the solution you'd like**\r\n\r\nStart slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`.\r\n\r\n**Additional context**\r\n\r\nIf so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! 
Thanks \ud83e\udd17 \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4419\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4418","id":1252506268,"node_id":"PR_kwDODunzps44q9pG","number":4418,"title":"Add dataset MMChat","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653905440000,"updated_at":1653922698000,"closed_at":1653922698000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4418\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4418","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4418","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4418.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4418.patch","merged_at":null},"is_pull_request":true} 
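For the `assertTupleEqual` proposal in #4419 above, the behavioral point is that `assertEqual` already dispatches to the type-specific assertion for tuples, so the change is purely about explicitness:

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (2, 3)
        # Equivalent at runtime: unittest registers assertTupleEqual as the
        # equality function for tuples; the explicit call documents intent.
        self.assertEqual(shape, (2, 3))
        self.assertTupleEqual(shape, (2, 3))

if __name__ == "__main__":
    unittest.main()
```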
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4417","id":1251933091,"node_id":"I_kwDODunzps5Knvuj","number":4417,"title":"how to convert a dict generator into a huggingface dataset. ","user":{"login":"StephennFernandes","id":32235549,"node_id":"MDQ6VXNlcjMyMjM1NTQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32235549?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/StephennFernandes","html_url":"https:\/\/github.com\/StephennFernandes","followers_url":"https:\/\/api.github.com\/users\/StephennFernandes\/followers","following_url":"https:\/\/api.github.com\/users\/StephennFernandes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/StephennFernandes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/StephennFernandes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/StephennFernandes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/StephennFernandes\/orgs","repos_url":"https:\/\/api.github.com\/users\/StephennFernandes\/repos","events_url":"https:\/\/api.github.com\/users\/StephennFernandes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/StephennFernandes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is 
requested"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@albertvillanova @lhoestq , could you please help me on this issue. ","Hi ! As mentioned on the [forum](https:\/\/discuss.huggingface.co\/t\/how-to-wrap-a-generator-with-hf-dataset\/18464), the simplest for now would be to define a [dataset script](https:\/\/huggingface.co\/docs\/datasets\/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`","@lhoestq , hey i did as you instructed, but sadly i cannot get pass through the download_manager, as i dont have anything to download. i was skipping the ` def _split_generators(self, dl_manager):` function. but i cannot get around it. 
I get a `NotImplementedError: `\r\n\r\nthe following is my code for the same: \r\n\r\n\r\n\r\n```\r\nimport datasets \r\nimport functools\r\nimport glob \r\nfrom datasets import load_from_disk\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_dataset\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\ndata_path = glob.glob(\"\/home\/stephen\/Desktop\/MEGA_CORPUS\/COMBINED_CORPUS\/*\", recursive=False)\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n@utils.map_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\n\r\n_CITATION = \"Not ready yet\"\r\n_DESCRIPTION = \"a custom seqio based mixed samples on a given temperature value, that again returns a dataset in HF dataset format well samples on the Mixture temperature\"\r\n_HOMEPAGE = \"ldcil.org\"\r\n\r\nclass CustomSeqio(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n homepage=\"https:\/\/ldcil.org\",\r\n citation=_CITATION,)\r\n\r\ndef generate_examples(self):\r\n seqio_train_list = []\r\n for lang in data_path:\r\n dataset_name = lang.split(\"\/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n \r\n lang_name_list = []\r\n for lang in data_path:\r\n lang_name = lang.split(\"\/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\n seqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7)\r\n \r\n seqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\n for id, ex in enumerate(seqio_mixture_dataset):\r\n yield 
id, {\"text\": ex[\"targets\"].numpy().decode()}\r\n```\r\n\r\nand i load it by:\r\n\r\n`seqio_mixture = load_dataset(\"seqio_loader\")`","@lhoestq , just to make things clear ... \r\n\r\nthe following is my original code, thats not in the HF dataset loading script: \r\n\r\n```\r\nimport functools\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_from_disk\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\nimport glob \r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n\r\n@utils.map_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\ndata_path = glob.glob(\"\/home\/stephen\/Desktop\/MEGA_CORPUS\/COMBINED_CORPUS\/*\", recursive=False)\r\n\r\nseqio_train_list = []\r\n\r\nfor lang in data_path:\r\n dataset_name = lang.split(\"\/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n\r\nlang_name_list = []\r\nfor lang in data_path:\r\n lang_name = lang.split(\"\/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\nseqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7\r\n)\r\n\r\nseqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\nfor _, ex in zip(range(15), seqio_mixture_dataset):\r\n print(ex[\"targets\"].numpy().decode())\r\n```\r\n\r\nwhere the seqio_mixture_dataset is the generator that i wanted to be wrapped in HF dataset. 
\r\n\r\nalso additionally, could you please tell me how do i set the `default_rate=0.7` args where `seqio_mixture` is defined to be made as a custom option in the HF load_dataset() method,\r\n\r\nmaybe like this: \r\n`seqio_mixture_dataset = datasets.load_dataset(\"seqio_loader\",temperature=0.5)`","I like the idea of having `Dataset.from_iterable(iterable)` in the API. The only problem is that we also want to make this part cachable, which is tricky if `iterable` is a generator. \r\n\r\nSome resources on this issue:\r\n* https:\/\/github.com\/uqfoundation\/dill\/issues\/311\r\n* https:\/\/stackoverflow.com\/questions\/7180212\/why-cant-generators-be-pickled\r\n* https:\/\/github.com\/tonyroberts\/generator_tools - python package for pickling generators; pickles bytecode, so it creates version-specific dumps","For the caching maybe we can have `Dataset.from_generator` as TF and pickle+hash the generator function (not the generator object itself) ?\r\n\r\nAnd then keep `Dataset.from_iterable` fo pickable objects like lists","@lhoestq, @mariosasko do you too have any examples where the dataset is a generator and needs to be wrapped into hf dataset ? ","@lhoestq, following to my previous question ... what possibly could be done in this [link1](https:\/\/github.com\/huggingface\/datasets\/issues\/4417#issuecomment-1146627404) [link2](https:\/\/github.com\/huggingface\/datasets\/issues\/4417#issuecomment-1146627593) case? do you have any ideas? ","@lhoestq +1 for the `Dataset.from_generator` idea.\r\n\r\nHaving thought about it, let's avoid adding `Dataset.from_iterable` to the API since dictionaries are technically iteralbles (\"iterable\" is a broad term in Python), and we already provide `Dataset.from_dict`. And for lists maybe we can add `Dataset.from_list` similar to `pa.Table.from_pylist`. WDYT?\r\n","Hi @StephennFernandes!\r\n\r\nTo fix the issues in the copied code, rename `generate_examples` to` _generate_examples` and add one level of indentation as this is a method of `GeneratorBasedBuilder` and define `_split_generators` as follows (again as a method of `GeneratorBasedBuilder):\r\n```python\r\n def _split_generators(self, dl_manager):\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={},\r\n ),\r\n ]\r\n```\r\n\r\nAnd if you are feeling extra adventurous, you can try to use ArrowWriter to directly create a cache file:\r\n```python\r\nfrom datasets import Dataset\r\nfrom datasets.arrow_writer import ArrowWriter\r\n\r\nwriter = ArrowWriter(path=\"path\/to\/cache_file.arrow\", writer_batch_size=1000)\r\n\r\nwith writer:\r\n for ex in generator:\r\n writer.write(ex) \r\n writer.finalize()\r\n\r\ndset = Dataset.from_file(\"path\/to\/cache_file.arrow\")\r\n```\r\n\r\n","I have a problem which I think is very similar: I would like to \"stream\" data to a HF Array (memory-mapped) Dataset, where the final size of the dataset is unknown, but could be much larger than what fits into memory.\r\nWhat I want to end up with is an Array Dataset which I can open using `Dataset.load_from_disk(dataset_path=\"somename\")` and use e.g. as the training set. \r\n\r\nFor this I would have thought there should be an API which allows me to open\/create the dataset (and define the features etc), then write examples to the dataset, but I could not find a way to do this. 
\r\n\r\nI tried doing this and it looks like it works, but it feels very hacky and I am not sure if this might fail to update some of the fields in the json files which may turn out to be important:\r\n```\r\nfrom datasets import Dataset, Features, ClassLabel, Sequence, Value\r\nfrom datasets.arrow_writer import ArrowWriter \r\n# 1) define the features\r\nfeatures = Features(dict(\r\n id=Value(dtype=\"string\"),\r\n tokens=Sequence(feature=Value(dtype=\"string\")),\r\n ner_tags=Sequence(feature=ClassLabel(names=['O', 'B-corporation', 'I-corporation', 'B-creative-work', 'I-creative-work', 'B-group', 'I-group', 'B-location', 'I-location', 'B-person', 'I-person', 'B-product', 'I-product'])),\r\n))\r\n# 2) create empty dataset for examples with these features and store to disk\r\nempty = dict(\r\n id = [],\r\n tokens = [],\r\n ner_tags = [],\r\n)\r\nds = Dataset.from_dict(empty, features=features)\r\nds.save_to_disk(dataset_path=\"debug_ds1\")\r\n\r\n# 3) directly write all the examples to the arrow dataset \r\nwith ArrowWriter(path=\"debug_ds1\/dataset.arrow\") as writer: \r\n writer.write(dict(id=0, tokens=[\"a\", \"b\"], ner_tags=[0, 0])) \r\n writer.write(dict(id=1, tokens=[\"x\", \"y\"], ner_tags=[1, 0])) \r\n writer.finalize() \r\n \r\nds2 = Dataset.load_from_disk(dataset_path=\"debug_ds1\")\r\nlen(ds2)\r\n```\r\nIs there a cleaner\/proper way to do this?\r\n\r\nI like the sound of `Dataset.from_iterable` or `Dataset.from_generator` (should not from iterable be able to handle from generator too as all generators are iterables?) but how would I define the features for me examples there? ","Hi @johann-petrak! You can pass the features directly to ArrowWriter's initializer like so `ArrowWriter(..., features=features)`.\r\n\r\nAnd the reason why I prefer `Dataset.from_generator` over `Dataset.from_iterable` is mentioned in one of my previous comments.","@mariosasko so at the moment we still have to create a fake `Dataset` first and then use `ArrowWriter` to write an actual dataset? I'm using the latest version of `datasets` on pypi but my final file is always empty. Is there anything wrong with the code below?\r\n\r\n```python\r\n total = 0\r\n with ArrowWriter(path=str(final_data_path), features=features) as writer:\r\n for batch in loader:\r\n for traj in batch:\r\n for generator in question_generators:\r\n for xi in generator(traj):\r\n # print(f\"Question: {xi.question}, answer: {xi.answer}\")\r\n total += 1\r\n writer.write(\r\n {\r\n \"id\": f\"qa_{total}\",\r\n \"question\": xi.question,\r\n \"answer\": xi.answer,\r\n }\r\n )\r\n writer.finalize()\r\n print(f\"Total #questions = {total}\") # this prints 402\r\n```","This works for me if I then (actually I also close the writer: `writer.close()`) open the Arrow file as a dataset using `ds=Dataset.from_file(final_data_path)` then `ds.save_to_disk(somedir)`. The Dataset created that way contains the expected examples.","Oh thanks. That did the trick I believe. Shouldn't ArrowWriter have a context manager that does these operations?","You can just use `Dataset.from_file` to get your dataset, no need to do an extra `save_to_disk` somewhere else ;)","I was thinking that `save_to_disk` is necessary when one wants to re-use that dataset as a proper HF dataset later, no?\r\nAt least what I wanted to achieve is create a dataset that can be opened like any other local or remote dataset. ","`save_to_disk`\/`load_from_disk` is indeed more general, e.g. 
it supports datasets that consist in several files, and saves some extra info in a dataset_info.json file (description, citation, split sizes, etc.)\r\n\r\nIf you have one single file it's fine to simply do `.from_file()`"],"created_at":1653841707000,"updated_at":1663339459000,"closed_at":1663339459000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\n_No response_\r\n\r\n### Description\r\n\r\nHey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset.\r\n\r\nThe generator contains all the samples needed for training the model but I cannot convert it into a huggingface dataset.\r\n\r\nThe code looks like this:\r\n\r\n\r\n```\r\nfor ex in seqio_data:\r\nprint(ex[\u201ctext\u201d])\r\n```\r\n\r\nI need to convert the seqio_data (generator) into huggingface dataset.\r\n\r\n\r\nthe complete seqio code goes here:\r\n```\r\nimport functools\r\n\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_dataset\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_params=None):\r\n dataset = load_dataset(**dataset_params)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_params=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)\r\n )\r\n\r\n\r\n@utils.map_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\n\r\n\r\ndataset_name = 'oscar-corpus\/OSCAR-2109'\r\nsubset= 'mr'\r\ndataset_params = {\"path\": dataset_name, \"language\":subset, \"use_auth_token\":True}\r\ndataset_shapes = None\r\n\r\nTaskRegistry.add(\r\n \"oscar_marathi_corpus\",\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),\r\n splits=(\"train\", \"validation\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\npreprocessors=[\r\nfunctools.partial(\r\ntarget_to_key, key_map={\r\n\"targets\": None,\r\n}, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n)\r\n\r\ndataset = seqio.get_mixture_or_task(\"oscar_marathi_corpus\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42\r\n)\r\nfor _, ex in zip(range(5), dataset):\r\n print(ex['targets'].numpy().decode())\r\n```\r\n\r\n\r\n### Owner\r\n\r\n_No 
response_","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4417\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4416","id":1251875763,"node_id":"PR_kwDODunzps44o7sF","number":4416,"title":"Add LCCC dataset","user":{"login":"silverriver","id":2529049,"node_id":"MDQ6VXNlcjI1MjkwNDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2529049?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/silverriver","html_url":"https:\/\/github.com\/silverriver","followers_url":"https:\/\/api.github.com\/users\/silverriver\/followers","following_url":"https:\/\/api.github.com\/users\/silverriver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/silverriver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/silverriver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/silverriver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/silverriver\/orgs","repos_url":"https:\/\/api.github.com\/users\/silverriver\/repos","events_url":"https:\/\/api.github.com\/users\/silverriver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/silverriver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)","@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes","Hi ! The dataset seems to be a duplicate of https:\/\/huggingface.co\/datasets\/silver\/lccc - next time no need to add it on github if it's already available on huggingface.co ;)","> Hi ! The dataset seems to be a duplicate of https:\/\/huggingface.co\/datasets\/silver\/lccc - next time no need to add it on github if it's already available on huggingface.co ;)\r\n\r\nOK, sorry for the inconvenience. 
I have closed another two PRs since these datasets are already available on huggingface.co","It's fine, thanks @silverriver for adding these datasets !"],"created_at":1653827239000,"updated_at":1655288939000,"closed_at":1654161226000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Hi, I am trying to add a new dataset lccc.\r\nAll tests are passed.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4416\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4416","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4416","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4416.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4416.patch","merged_at":1654161226000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4415","id":1251002981,"node_id":"PR_kwDODunzps44mIJk","number":4415,"title":"Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653671022000,"updated_at":1654605745000,"closed_at":1654605232000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error. 
\r\n\r\nTODO:\r\n~~- [ ] handle token + `{Audio, Image}.embed_storage`~~\r\n- [x] tests","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4415\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4415","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4415","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4415.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4415.patch","merged_at":1654605232000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4414","id":1250546888,"node_id":"PR_kwDODunzps44klhY","number":4414,"title":"Rename DatasetBuilder config_name","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653643682000,"updated_at":1654009641000,"closed_at":1654009131000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:\r\n- it avoids confusion with the attribute `DatasetBuilder.name`, which is different\r\n- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`\r\n\r\nOther simpler possibility could be to rename it to just `config` instead.\r\n\r\nPlease note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public facing parameter, but private or related to the inners of our library.\r\n\r\nIt would have a major impact to rename it also in:\r\n- load_dataset\r\n- load_dataset_builder: although this could also be assumed as 
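The split-by-split upload pattern that PR #4415 above fixes verification for, sketched with a hypothetical repo name:

```python
from datasets import load_dataset

ds = load_dataset("imdb")
# Each push updates the stored split metadata on the Hub, so a later
# load_dataset("username/my-imdb") call passes split verification.
ds["train"].push_to_hub("username/my-imdb", split="train")
ds["test"].push_to_hub("username/my-imdb", split="test")
```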
inners...\r\n- in our CLI commands\r\n\r\nBesides the naming of `name`, I also find really confusing the naming of `path` in `load_dataset`. IMHO, they should have a more simpler and precise meaning (currently, they are too vague). I would propose (maybe for next major release):\r\n```\r\nload_dataset(dataset, config,...\r\n```\r\ninstead of\r\n```\r\nload_dataset(path, name,...\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4414\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4414","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4414","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4414.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4414.patch","merged_at":1654009131000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4413","id":1250259822,"node_id":"I_kwDODunzps5KhXNu","number":4413,"title":"Dataset Viewer issue for ett","user":{"login":"dgcnz","id":24966039,"node_id":"MDQ6VXNlcjI0OTY2MDM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24966039?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dgcnz","html_url":"https:\/\/github.com\/dgcnz","followers_url":"https:\/\/api.github.com\/users\/dgcnz\/followers","following_url":"https:\/\/api.github.com\/users\/dgcnz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dgcnz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dgcnz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dgcnz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dgcnz\/orgs","repos_url":"https:\/\/api.github.com\/users\/dgcnz\/repos","events_url":"https:\/\/api.github.com\/users\/dgcnz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dgcnz\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https:\/\/huggingface.co\/datasets\/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ","I've just resent the refresh of the preview to the new endpoint, without success.\r\n\r\nCC: @severo ","Fixed!\r\n\r\nhttps:\/\/huggingface.co\/datasets\/ett\/viewer\/h1\/test\r\n\r\n\"Capture\r\n"],"created_at":1653617555000,"updated_at":1655278246000,"closed_at":1655278246000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\r\n\r\nhttps:\/\/huggingface.co\/datasets\/ett\r\n\r\n### Description\r\n\r\nTimestamp is not JSON serializable. 
\r\n\r\n```\r\nStatus code: 500\r\nException: Status500Error\r\nMessage: Type is not JSON serializable: Timestamp\r\n```\r\n\r\n### Owner\r\n\r\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4413\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4412","id":1249490179,"node_id":"PR_kwDODunzps44hFvq","number":4412,"title":"Skip hidden files\/directories in data files resolution and `iter_files`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This PR (via new release) broke many transformers tests.\r\n\r\nI will try to post a summary shortly.\r\n\r\ncc: @ydshieh ","So now it can't handle a local path via: `--train_file tests\/deepspeed\/..\/fixtures\/tests_samples\/wmt_en_ro\/train.json` even though it's there. 
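A plausible mechanism for the regression described here, sketched for illustration (an assumption about the failure mode, not the library's actual code): if hidden-file filtering flags any path segment that starts with a dot, the `..` parent-directory segment gets misclassified as hidden, and normalizing the path first avoids it.

```python
import os

def is_hidden_naive(path):
    # Naive check: flags any segment starting with "." - including "..".
    return any(part.startswith(".") for part in path.split("/"))

path = "tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json"
print(is_hidden_naive(path))                    # True: ".." is wrongly treated as hidden
print(is_hidden_naive(os.path.normpath(path)))  # False: "tests/fixtures/..." resolves fine
```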
it works just fine if I change the path to not have `..`\r\n\r\nYou can reproduce the original problem with:\r\n\r\n```\r\n$ cd transformers \r\n$ python examples\/pytorch\/translation\/run_translation.py --model_name_or_path t5-small --train_file tests\/fixtures\/tests_samples\/wmt_en_ro\/train.json --validation_file tests\/deepspeed\/..\/fixtures\/tests_samples\/wmt_en_ro\/val.json --output_dir \/tmp\/tmp5o5to4k0 --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 1 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix \"translate English to Romanian: \" --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3\r\n[...]\r\nTraceback (most recent call last):\r\n File \"examples\/pytorch\/translation\/run_translation.py\", line 656, in \r\n main()\r\n File \"examples\/pytorch\/translation\/run_translation.py\", line 346, in main\r\n raw_datasets = load_dataset(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1097, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 743, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 588, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 556, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 194, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"\/home\/stas\/anaconda3\/envs\/py38-pt111\/lib\/python3.8\/site-packages\/datasets\/data_files.py\", line 144, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '\/mnt\/nvme0\/code\/huggingface\/transformers-master\/tests\/deepspeed\/..\/fixtures\/tests_samples\/wmt_en_ro\/val.json' at \/mnt\/nvme0\/code\/huggingface\/transformers-master\r\n```","will apply a workaround to `transformers` tests here https:\/\/github.com\/huggingface\/transformers\/pull\/17721\r\n","This has been fixed with https:\/\/github.com\/huggingface\/datasets\/pull\/4505, will do a patch release tomorrow for `datasets` ;)","Thank you for the quick fix, @lhoestq "],"created_at":1653567028000,"updated_at":1655313085000,"closed_at":1654088656000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4115 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4412\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4412","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4412","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4412.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4412.patch","merged_at":1654088656000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4411","id":1249462390,"node_id":"PR_kwDODunzps44g_yL","number":4411,"title":"Update `_format_columns` in `remove_columns`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\ud83e\udd17 This PR closes https:\/\/github.com\/huggingface\/datasets\/issues\/4398","_The documentation is not available anymore as the PR was closed or merged._","Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.","Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code \ud83d\ude29 ","Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:","Hi again @albertvillanova! Let me know if those tests are fine \ud83e\udd17 ","Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. 
\r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push rights to your PR branch, so next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https:\/\/docs.github.com\/en\/pull-requests\/collaborating-with-pull-requests\/working-with-forks\/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests","Hi @albertvillanova, let me check those again! And regarding that checkbox, I thought it was already checked, so my bad there \ud83d\ude29 ","@albertvillanova again it seems that the tests were not automatically triggered, but I tested them locally and now they pass. Previously they were failing because I used `self.assertEqual` to compare an empty list against `None` while the actual value was `[]`, so I updated it to `self.assertListEqual` and changed the expected value to `[]`.","@lhoestq any idea why the CI is not triggered?","@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basic error: `new_dset._format_columns` is always `None` in those cases.\r\n","You're right @albertvillanova, I was indeed running the tests with `datasets==2.2.0` rather than with the branch version. I'll check it again! Sorry for the inconvenience...","> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basic error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regression tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.","Hi again @albertvillanova, sorry, I was on leave! I'll do that ASAP :hugs:","@albertvillanova, does it make sense to add regression tests for `DatasetDict`? As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split, which is an `ArrowDataset`.\r\n\r\nSo, we can either:\r\n- First update the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above); let me know if there's a better way to do that \ud83d\udc4d\ud83c\udffb ","I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`. Have you tried adding this decorator to `remove_columns`?","> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`. Have you tried adding this decorator to `remove_columns`?\r\n\r\nHi @lhoestq, I can check now!","It worked indeed @lhoestq, thanks for the proposal and the review! \ud83e\udd17 ","Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).","Sure @mariosasko, I can prepare another PR to add those too, thanks!
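To make the decorator approach just discussed concrete, here is a rough sketch of what a `transmit_format`-style wrapper does (a simplified assumption of the real decorator's behavior, not its actual code): re-apply the caller's format to the returned dataset, keeping only the columns that still exist.

```python
import functools

def transmit_format_sketch(method):
    # Simplified sketch: copy the input dataset's formatted columns to the
    # output dataset, dropping any column the wrapped method removed.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        out = method(self, *args, **kwargs)
        if self._format_columns is not None:
            kept = [col for col in self._format_columns if col in out.column_names]
            out.set_format(self._format_type, columns=kept or None)
        return out
    return wrapper
```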
"],"created_at":1653565206000,"updated_at":1655233537000,"closed_at":1655222516000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"As explained at #4398, when calling `dataset.add_faiss_index` under certain conditions when calling a sequence of operations `cast_column`, `map`, and `remove_columns`, it fails as it's trying to look for already removed columns.\r\n\r\nSo on, after testing some possible fixes, it seems that setting the dataset format right after removing the columns seems to be working fine, so I had to add a call to `.set_format` in the `remove_columns` function.\r\n\r\nHope this helps!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4411\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4411","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4411","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4411.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4411.patch","merged_at":1655222515000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4410","id":1249148457,"node_id":"PR_kwDODunzps44f_Td","number":4410,"title":"Remove Google Drive URL in spider dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653545855000,"updated_at":1653547722000,"closed_at":1653547212000,"author_association":"MEMBER","active_lock_reason":null,"body":"The `spider` dataset is distributed under the [CC BY-SA 4.0](https:\/\/creativecommons.org\/licenses\/by-sa\/4.0\/legalcode) license.\r\n\r\nFix 
#4401.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4410\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4410","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4410","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4410.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4410.patch","merged_at":1653547212000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4409","id":1249083179,"node_id":"PR_kwDODunzps44fxiH","number":4409,"title":"Update: add using pcm bytes (#4323)","user":{"login":"YooSungHyun","id":34292279,"node_id":"MDQ6VXNlcjM0MjkyMjc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34292279?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YooSungHyun","html_url":"https:\/\/github.com\/YooSungHyun","followers_url":"https:\/\/api.github.com\/users\/YooSungHyun\/followers","following_url":"https:\/\/api.github.com\/users\/YooSungHyun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YooSungHyun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YooSungHyun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YooSungHyun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YooSungHyun\/orgs","repos_url":"https:\/\/api.github.com\/users\/YooSungHyun\/repos","events_url":"https:\/\/api.github.com\/users\/YooSungHyun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YooSungHyun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files\/bytes decoding.","Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)","But [`scipy.io.wavfile.read`](https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https:\/\/stackoverflow.com\/questions\/33682490\/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.","How does it get the sampling rate of a PCM file then ? 
According to [SO](https:\/\/stackoverflow.com\/a\/57027667\/17517845) it's not possible to infer it from the file alone","> Awesome thanks ! Could you also add tests in `tests\/features\/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests\/features\/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can I test test_audio.py? Where is the \"__main__\" func?\r\nDo you have an example or guideline?","> But [`scipy.io.wavfile.read`](https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https:\/\/stackoverflow.com\/questions\/33682490\/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nThanks for the comments!\r\n\r\nFirst of all, a \"PCM file\" cannot be read on its own by any audio library.\r\nA \"PCM file\" has no audio META information header (it is just raw audio byte data, so there is no header to encode or decode).\r\nBut \".pcm\" is an audio extension, so we can use `datasets.Audio` for it.\r\n\r\nIf you want to read a \"PCM file\" like a regular audio file, it needs additional parameters (channels, sampling_rate, etc.).\r\nBut in many situations, we only know the sampling_rate for PCM.\r\n\r\nAnd if we want to use `datasets.Audio` for a \"PCM file\", we must go through encode_example.\r\nTherefore, I have to use the sampling_rate during encoding to make WAV-style bytes (we only know the sampling_rate).\r\n\r\nIn my source code, I don't compare sampling rates (`datasets.Audio`'s `self.sampling_rate` vs. the read PCM's `value[\"sampling_rate\"]`) or check for mono.\r\n@mariosasko, do you want me to handle resampling and conversion to mono? Then I can modify my source.\r\n","There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests\/features\/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests\/features\/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example","@lhoestq\r\nIf I write test code, should I commit test_audio.py and send a PR?\r\nBecause we need to keep the `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` methods after my PR is merged?","You can add your tests in this PR with the other changes you did","@lhoestq \r\nTests complete, and I committed my test_audio.py.\r\n\r\nAlso, some changes in my code:\r\n\r\naudio.py\r\nI think \"sampling_rate\" is already an init variable of the Audio object, so we don't have to take it as an input parameter.\r\n\r\ntest_audio.py\r\nWe can detect a \"PCM\" file from its path (more exactly, its extension), so the test case has to know the `path`. If we only have `bytes`, we don't know whether it is \"PCM\" or not.","@lhoestq\r\nAlso, why did CircleCI raise an exception?\r\nMaybe the [repo](https:\/\/huggingface.co\/api\/datasets\/lhoestq\/_dummy?full=true) URL is not found!\r\nPlease check!","@lhoestq\r\nHello?","@lhoestq \r\ntest_audio.py\r\nIf we don't use the path for PCM, the test case needs to be changed, so we just check that the path is None.","I've already merged the branch, and `multiprocess` is in `setup.py`, but CircleCI errors out only on the Windows version:\r\n![image](https:\/\/user-images.githubusercontent.com\/34292279\/175461714-c7d2e741-3b7b-40a3-bba9-13ce2af0055c.png)\r\nHow can I fix it?","@lhoestq thanks for the comment!\r\nThe test_audio.py tests are complete; they run successfully.\r\nAlso, self.get(\"sampling_rate\") -> value.get(\"sampling_rate\") is changed.\r\n\r\nAnd I don't agree with some of the review comments; please check my replies!","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653539196000,"updated_at":1657200449000,"closed_at":1657199769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"First of all, please look at #4323.\r\n\r\nWhy can't I use {\"path\",\"array\",\"sampling_rate\"}? Because sf.write(format=\"wav\") and sf.read(BytesIO) change my PCM data values; I think this is because WAV has a header but PCM does not.\r\nAs for variable naming, the PCM data is of \"byte\" type, so the name \"array\" does not fit, I think.\r\n\r\nSo I use the scipy lib and numpy (which are huggingface dependencies) and, following what @lhoestq answered:\r\n1. encode -> using sampling_rate and PCM bytes -> WAV-style bytes (scipy.wavfile.write to bytes)\r\n2. byte conversion using fairseq-style PCM audio reading, as in [FileAudioDataset](https:\/\/github.com\/facebookresearch\/fairseq\/blob\/main\/fairseq\/data\/audio\/raw_audio_dataset.py)\r\n3. decode -> read with wavfile.read\r\n\r\nThat way my PCM bytes are not screwed up into float data, and other audio types (WAV) remain safe.\r\n\r\nPlease check!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4409\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4409","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4409","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4409.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4409.patch","merged_at":1657199768000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4408","id":1248687574,"node_id":"PR_kwDODunzps44ecNI","number":4408,"title":"Update imagenet
gate","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653510739000,"updated_at":1653511511000,"closed_at":1653511007000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4408\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4408","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4408","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4408.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4408.patch","merged_at":1653511007000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4407","id":1248671778,"node_id":"I_kwDODunzps5KbTgi","number":4407,"title":"Dataset Viewer issue for 
conll2012_ontonotesv5","user":{"login":"jiangwy99","id":39762734,"node_id":"MDQ6VXNlcjM5NzYyNzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39762734?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jiangwy99","html_url":"https:\/\/github.com\/jiangwy99","followers_url":"https:\/\/api.github.com\/users\/jiangwy99\/followers","following_url":"https:\/\/api.github.com\/users\/jiangwy99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jiangwy99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jiangwy99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jiangwy99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jiangwy99\/orgs","repos_url":"https:\/\/api.github.com\/users\/jiangwy99\/repos","events_url":"https:\/\/api.github.com\/users\/jiangwy99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jiangwy99\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ","I've just sent the forcing of the refresh of the preview to the new endpoint.","Fixed, thanks for the patience. 
The issue was that the amount of RAM allowed for extracting the first rows of the dataset was not sufficient."],"created_at":1653509913000,"updated_at":1654627156000,"closed_at":1654627156000,"author_association":"NONE","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/conll2012_ontonotesv5\n\n### Description\n\nDataset viewer outage.\n\n### Owner\n\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4407\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4406","id":1248626622,"node_id":"PR_kwDODunzps44ePLU","number":4406,"title":"Improve language tag for PIAF dataset","user":{"login":"lbourdois","id":58078086,"node_id":"MDQ6VXNlcjU4MDc4MDg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58078086?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lbourdois","html_url":"https:\/\/github.com\/lbourdois","followers_url":"https:\/\/api.github.com\/users\/lbourdois\/followers","following_url":"https:\/\/api.github.com\/users\/lbourdois\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lbourdois\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lbourdois\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lbourdois\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lbourdois\/orgs","repos_url":"https:\/\/api.github.com\/users\/lbourdois\/repos","events_url":"https:\/\/api.github.com\/users\/lbourdois\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lbourdois\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653507715000,"updated_at":1653663083000,"closed_at":1653663083000,"author_association":"NONE","active_lock_reason":null,"body":"Hi, \r\n\r\nAs pointed out by @lhoestq in this discussion (https:\/\/huggingface.co\/datasets\/asi\/wikitext_fr\/discussions\/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub.\r\n\r\nThis modification should allow better referencing since only the xx language tags are currently taken into account and not the
xx-xx.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4406\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4406","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4406","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4406.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4406.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4405","id":1248574087,"node_id":"I_kwDODunzps5Ka7qH","number":4405,"title":"[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2","user":{"login":"jiangwy99","id":39762734,"node_id":"MDQ6VXNlcjM5NzYyNzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39762734?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jiangwy99","html_url":"https:\/\/github.com\/jiangwy99","followers_url":"https:\/\/api.github.com\/users\/jiangwy99\/followers","following_url":"https:\/\/api.github.com\/users\/jiangwy99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jiangwy99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jiangwy99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jiangwy99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jiangwy99\/orgs","repos_url":"https:\/\/api.github.com\/users\/jiangwy99\/repos","events_url":"https:\/\/api.github.com\/users\/jiangwy99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jiangwy99\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? 
Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"],"created_at":1653505003000,"updated_at":1654612040000,"closed_at":1654612040000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to process the [conll2012_ontonotesv5](https:\/\/huggingface.co\/datasets\/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport os\r\nfrom typing import (\r\n List,\r\n Dict,\r\n)\r\nfrom collections import (\r\n defaultdict,\r\n)\r\nfrom dataclasses import (\r\n dataclass,\r\n)\r\nfrom datasets import (\r\n load_dataset,\r\n)\r\n\r\n\r\n@dataclass\r\nclass ConllConverter:\r\n\r\n path: str\r\n name: str\r\n cache_dir: str\r\n\r\n def __post_init__(\r\n self,\r\n ):\r\n self.dataset = load_dataset(\r\n path=self.path,\r\n name=self.name,\r\n cache_dir=self.cache_dir,\r\n )\r\n\r\n def convert(\r\n self,\r\n ):\r\n\r\n class_label = self.dataset[\"train\"].features[\"sentences\"][0][\"named_entities\"].feature\r\n # label_set = list(set([\r\n # label.split(\"-\")[1] if label != \"O\" else label for label in class_label.names\r\n # ]))\r\n\r\n def prepare_chunk(token, entity):\r\n assert len(token) == len(entity)\r\n # Sequence length\r\n length = len(token)\r\n # Variable used\r\n entity_chunk = defaultdict(list)\r\n idx = flag = 0\r\n # While loop\r\n while idx < length:\r\n if entity[idx] == \"O\":\r\n flag += 1\r\n idx += 1\r\n else:\r\n iob_tp, lab_tp = entity[idx].split(\"-\")\r\n assert iob_tp == \"B\"\r\n idx += 1\r\n while idx < length and entity[idx].startswith(\"I-\"):\r\n idx += 1\r\n entity_chunk[lab_tp].append(token[flag: idx])\r\n flag = idx\r\n entity_chunk = dict(entity_chunk)\r\n # for label in label_set:\r\n # if label != \"O\" and label not in entity_chunk.keys():\r\n # entity_chunk[label] = None\r\n return entity_chunk\r\n\r\n def prepare_features(\r\n batch: Dict[str, List],\r\n ) -> Dict[str, List]:\r\n sentence = [\r\n sent for doc_sent in batch[\"sentences\"] for sent in doc_sent\r\n ]\r\n feature = {\r\n \"sentence\": list(),\r\n }\r\n for sent in sentence:\r\n token = sent[\"words\"]\r\n entity = class_label.int2str(sent[\"named_entities\"])\r\n entity_chunk = prepare_chunk(token, entity)\r\n sent_feat = {\r\n \"token\": token,\r\n \"entity\": entity,\r\n \"entity_chunk\": entity_chunk,\r\n }\r\n feature[\"sentence\"].append(sent_feat)\r\n\r\n return feature\r\n\r\n column_names = self.dataset.column_names[\"train\"]\r\n dataset = self.dataset.map(\r\n function=prepare_features,\r\n with_indices=False,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=column_names,\r\n num_proc=1,\r\n )\r\n dataset.save_to_disk(\r\n dataset_dict_path=os.path.join(\"data\", self.path, self.name)\r\n )\r\n\r\n\r\nif __name__ == \"__main__\":\r\n converter = ConllConverter(\r\n path=\"conll2012_ontonotesv5\",\r\n name=\"english_v4\",\r\n cache_dir=\"cache\",\r\n )\r\n converter.convert()\r\n\r\n```\r\n\r\n## Expected results\r\nI want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format.\r\n\r\n## Actual results\r\n
\r\nTraceback<\/summary>\r\n\r\n```python\r\nTraceback (most recent call last): | 0\/81 [00:00\r\n arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]\r\n File \"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1675, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/datasets\/table.py\", line 1844, in cast_array_to_feature\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nstruct>, DATE: list>, EVENT: list>, FAC: list>, GPE: list>, LANGUAGE: list>, LAW: list>, LOC: list>, MONEY: list>, NORP: list>, ORDINAL: list>, ORG: list>, PERCENT: list>, PERSON: list>, QUANTITY: list>, TIME: list>, WORK_OF_ART: list>>\r\nto\r\n{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home2\/jiangwangyi\/workspace\/work\/Entity\/dataconverter.py\", line 110, in \r\n converter.convert()\r\n File \"\/home2\/jiangwangyi\/workspace\/work\/Entity\/dataconverter.py\", line 91, in convert\r\n dataset = self.dataset.map(\r\n File \"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/datasets\/dataset_dict.py\", line 770, in map\r\n {\r\n File \"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/datasets\/dataset_dict.py\", line 771, in \r\n k: dataset.map(\r\n File 
\"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 2459, in map\r\n transformed_shards[index] = async_result.get()\r\n File \"\/home2\/jiangwangyi\/miniconda3\/lib\/python3.9\/site-packages\/multiprocess\/pool.py\", line 771, in get\r\n raise self._value\r\nTypeError: Couldn't cast array of type\r\nstruct>, DATE: list>, EVENT: list>, FAC: list>, GPE: list>, LANGUAGE: list>, LAW: list>, LOC: list>, MONEY: list>, NORP: list>, ORDINAL: list>, ORG: list>, PERCENT: list>, PERSON: list>, QUANTITY: list>, TIME: list>, WORK_OF_ART: list>>\r\nto\r\n{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}\r\n```\r\n\r\n<\/details>\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.2\r\n- Platform: Ubuntu 18.04\r\n- Python version: 3.9.7\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4405\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4404","id":1248572899,"node_id":"I_kwDODunzps5Ka7Xj","number":4404,"title":"Dataset should have a `.name` field","user":{"login":"f4hy","id":36440,"node_id":"MDQ6VXNlcjM2NDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/f4hy","html_url":"https:\/\/github.com\/f4hy","followers_url":"https:\/\/api.github.com\/users\/f4hy\/followers","following_url":"https:\/\/api.github.com\/users\/f4hy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/f4hy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/f4hy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/f4hy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/f4hy\/orgs","repos_url":"https:\/\/api.github.com\/users\/f4hy\/repos","events_url":"https:\/\/api.github.com\/users\/f4hy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/f4hy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op.","@mariosasko Can we make ._fingerprint not private? seems a critical component for tracking how a model was generated to ensure reproducibility."],"created_at":1653504968000,"updated_at":1663081770000,"closed_at":1655376473000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nIf building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`\r\n\r\nWithout some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name\/id of the dataset being used. \r\n\r\n**Describe the solution you'd like**\r\nThe DatasetInfo class should have a `name` field which is the name of a dataset. then for a given dataset if it evolves in time the `version` can be updated but its different versions of the same dataset with a unique `name`. The name could then all be accessed by `dataset.name`\r\n\r\n**Describe alternatives you've considered**\r\nFor my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field. 
\r\n\r\n**Additional context**\r\nMy guess is that most usecases are not working with more than one dataset in a given pipeline so a name is not really needed. This has surprised me though as one of the advantages of a standard dataset interface is to be able to build pipelines which can be passed in a dataset and separate responsibilities of the dataset loading from the train or eval pipeline.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4404\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4403","id":1248390134,"node_id":"PR_kwDODunzps44dcpl","number":4403,"title":"Uncomment logging deactivation for ArrowBasedBuilder","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653497175000,"updated_at":1653986016000,"closed_at":1653985502000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4403\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4403","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4403","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4403.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4403.patch","merged_at":1653985502000},"is_pull_request":true} 
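Pending a public identifier, the attributes suggested in the `.name` discussion above can already label results in a multi-dataset pipeline; a minimal sketch (the log format is illustrative):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# builder_name/config_name identify what was loaded; the private _fingerprint
# is a deterministic hash that changes with every mutating op on the dataset.
print(f"Evaluating on {ds.builder_name}/{ds.config_name} "
      f"(fingerprint {ds._fingerprint})")
```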
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4402","id":1248078067,"node_id":"PR_kwDODunzps44cdR5","number":4402,"title":"Skip identical files in `push_to_hub` instead of overwriting","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653484371000,"updated_at":1653491796000,"closed_at":1653491283000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Skip identical files instead of overwriting them to save bandwidth and circumvent (user-side\/server-side) errors, which can arise when working with large datasets due to long-lasting HTTP connections, by repeating calls to `push_to_hub` to resume an upload.\r\n\r\nTo be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:\r\n```\r\ndata\/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet\r\n```\r\nto:\r\n```\r\ndata\/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-.parquet\r\n```\r\ncc @LysandreJik ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4402\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4402","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4402","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4402.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4402.patch","merged_at":1653491283000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4401","id":1247695921,"node_id":"I_kwDODunzps5KXlQx","number":4401,"title":"\"NonMatchingChecksumError\" when importing 'spider' dataset","user":{"login":"OmarAlaaeldein","id":81417777,"node_id":"MDQ6VXNlcjgxNDE3Nzc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/81417777?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/OmarAlaaeldein","html_url":"https:\/\/github.com\/OmarAlaaeldein","followers_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/followers","following_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/orgs","repos_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/repos","events_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/OmarAlaaeldein\/received_events","type":"User","site_admin":false},"labels":[{"id":4069435429,"node_id":"LA_kwDODunzps7yjqgl","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/hosted-on-google-drive","name":"hosted-on-google-drive","color":"8B51EF","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}"
,"starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.","We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https:\/\/github.com\/huggingface\/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"],"created_at":1653464707000,"updated_at":1653547212000,"closed_at":1653547212000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen importing 'spider' dataset [https:\/\/huggingface.co\/datasets\/spider] an error occurs\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('spider')\r\n```\r\n\r\n## Expected results\r\nDataset object\r\n\r\n## Actual results\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']\r\n\r\n## Environment info\r\n- `datasets` version: 2.2.2\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4401\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4400","id":1247404237,"node_id":"I_kwDODunzps5KWeDN","number":4400,"title":"load dataset wikitext-2-raw-v1 failed. 
Could not reach wikitext-2-raw-v1.py.","user":{"login":"cailun01","id":20658907,"node_id":"MDQ6VXNlcjIwNjU4OTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20658907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cailun01","html_url":"https:\/\/github.com\/cailun01","followers_url":"https:\/\/api.github.com\/users\/cailun01\/followers","following_url":"https:\/\/api.github.com\/users\/cailun01\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cailun01\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cailun01\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cailun01\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cailun01\/orgs","repos_url":"https:\/\/api.github.com\/users\/cailun01\/repos","events_url":"https:\/\/api.github.com\/users\/cailun01\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cailun01\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653448244000,"updated_at":1653449196000,"closed_at":1653449196000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nCould not reach wikitext-2-raw-v1.py\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikitext-2-raw-v1\")\r\n```\r\n\r\n## Expected results\r\nDownload `wikitext-2-raw-v1` dataset successfully.\r\n\r\n## Actual results\r\n```\r\n File \"load_datasets.py\", line 13, in \r\n load_dataset(\"wikitext-2-raw-v1\")\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 1715, in load_dataset\r\n **config_kwargs,\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 1536, in load_dataset_builder\r\n data_files=data_files,\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 1282, in dataset_module_factory\r\n raise e1 from None\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 1224, in dataset_module_factory\r\n dynamic_modules_path=dynamic_modules_path,\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 559, in get_module\r\n local_path = self.download_loading_script(revision)\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 539, in download_loading_script\r\n return cached_path(file_path, download_config=download_config)\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 246, in cached_path\r\n download_desc=download_config.download_desc,\r\n File \"\/root\/miniconda3\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 582, in get_from_cache\r\n raise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/2.2.2\/datasets\/wikitext-2-raw-v1\/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError(\"HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. 
(read timeout=100)\",),))\r\n```\r\nI tried to download wikitext-2-raw-v1.py by chrome and got:\r\n![image](https:\/\/user-images.githubusercontent.com\/20658907\/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.2\r\n- Platform: CentOS 7\r\n- Python version: 3.6\r\n- PyArrow version: 3.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4400\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4399","id":1246948299,"node_id":"I_kwDODunzps5KUuvL","number":4399,"title":"LocalDatasetModuleFactoryWithoutScript extracts invalid builder name","user":{"login":"apohllo","id":40543,"node_id":"MDQ6VXNlcjQwNTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apohllo","html_url":"https:\/\/github.com\/apohllo","followers_url":"https:\/\/api.github.com\/users\/apohllo\/followers","following_url":"https:\/\/api.github.com\/users\/apohllo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apohllo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apohllo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apohllo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apohllo\/orgs","repos_url":"https:\/\/api.github.com\/users\/apohllo\/repos","events_url":"https:\/\/api.github.com\/users\/apohllo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apohllo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok, so\r\n```\r\nos.path.basename(\"\/home\/user\/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"\/home\/user\")\r\n```\r\ngives `user`. 
\r\nThe code should check if the last char is a slash.\r\n","The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"\/\" else self.path)\r\n```","I came through the same issue , just removing the last slash in the dataset path fixed it for me, may be this repo moderators could accept this as an accepted answer atleast if this could not be integrated\r\n\r\n> The fix is:\r\n> \r\n> ```\r\n> \"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"\/\" else self.path)\r\n> ```\r\n\r\n@apohllo consider making a pull request on this \r\n\r\nThanks for the amazing contributions from huggingface people !!\r\n","@apohllo Would you be interested in submitting a PR with the fix?","@mariosasko here we go:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/4967\r\n\r\nTBH I haven't tested it yet, but should work, since this is a basic change."],"created_at":1653415381000,"updated_at":1662996643000,"closed_at":1662996643000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nTrying to load a local dataset raises an error indicating that the config builder has to have a name.\r\nNo error should be reported, since the call is completly valid.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nload_dataset(\".\/data\/some-dataset\/\", name=\"some-name\")\r\n```\r\n\r\n## Expected results\r\nThe dataset should be loaded.\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"train_lquad.py\", line 19, in \r\n load(tokenize_target_function, tokenize_target_function, {}, tokenizer)\r\n File \"train_lquad.py\", line 14, in load\r\n dataset = load_dataset(\".\/data\/lquad\/\", name=\"lquad\")\r\n File \"\/net\/pr2\/scratch\/people\/plgapohl\/python-3.8.6\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1708, in load_dataset \r\n builder_instance = load_dataset_builder(\r\n File \"\/net\/pr2\/scratch\/people\/plgapohl\/python-3.8.6\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1560, in load_dataset_builder \r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"\/net\/pr2\/scratch\/people\/plgapohl\/python-3.8.6\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 269, in __init__ \r\n self.config, self.config_id = self._create_builder_config(\r\n File \"\/net\/pr2\/scratch\/people\/plgapohl\/python-3.8.6\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 403, in _create_builder_config \r\n raise ValueError(f\"BuilderConfig must have a name, got {builder_config.name}\")\r\nValueError: BuilderConfig must have a name, got\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5\r\n- Python version: 3.8.6\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n\r\nThe error is probably in line 795 in load.py:\r\n\r\n```\r\n builder_kwargs = { \r\n \"hash\": hash,\r\n \"data_files\": data_files,\r\n \"name\": os.path.basename(self.path),\r\n \"base_path\": self.path,\r\n **builder_kwargs,\r\n }\r\n```\r\n\r\n`os.path.basename` for a directory returns an empty string, rather than the name of the 
directory.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4399\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4398","id":1246666749,"node_id":"I_kwDODunzps5KTp_9","number":4398,"title":"Calling `cast_column`\/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError`","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the issue, so the root cause of it may be around that. 
Sorry I cannot provide you more insights, in case I get to fix it I'll submit a PR, in the meanwhile the code that I'm using as a workaround is the following:\r\n\r\n```python\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\"))\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\nds.add_faiss_index(column=\"embeddings\")\r\n```","FYI the main reason I want to use `dataset.remove_columns` rather than the function inside `dataset.map` is because according to the \ud83e\udd17 Datasets documentation, it's faster.\r\n\r\n\"\ud83e\udd17 Datasets also has a [Dataset.remove_columns()](https:\/\/huggingface.co\/docs\/datasets\/v2.2.1\/en\/package_reference\/main_classes#datasets.Dataset.remove_columns) method that is functionally identical, but faster, because it doesn\u2019t copy the data of the remaining columns.\"\r\n\r\nMore information at https:\/\/huggingface.co\/docs\/datasets\/process#map","Here I'm presenting all the scenarios so that you can further investigate the issue:\r\n\r\n- \u2705 `cast_column` -> `map` with `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- \u274c `cast_column` -> `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", 
\"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- \u274c `cast_column` -> `map` with `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- \u2705 `cast_column` -> `map` -> `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- \u2705 `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```","So on, I've 
created #4411 so as to fix the bug with `remove_columns` under certain conditions before `add_faiss_index`, which means that the scenarios not working above are already working fine."],"created_at":1653403294000,"updated_at":1655222516000,"closed_at":1655222516000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue.\r\n\r\n## Describe the bug\r\n\r\nCalling a certain combination of operations over a \ud83e\udd17 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back of a previously removed column. But this just happens over certain conditions... I'll present some scenarios below!\r\n\r\n## Steps to reproduce the bug\r\n\r\nAssuming the following dataset named `sample.csv` with some IMDb data:\r\n\r\n```csv\r\nid,title,summary\r\n1877830,\"The Batman\",\"When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement.\"\r\n9419884,\"Doctor Strange in the Multiverse of Madness\",\"Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others.\"\r\n11138512,\"The Northman\",\"From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder.\" \r\n1745960,\"Top Gun: Maverick\",\"After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him.\"\r\n```\r\n\r\nWe'll be able to reproduce the bug using the following piece of code:\r\n\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\")) # from `int64` to `int32`\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\nds = ds.remove_columns([\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings)\r\nds = ds.remove_columns(\"inputs\")\r\nds.add_faiss_index(column=\"embeddings\") # It fails here!\r\n```\r\n\r\nThe code above is an adaptation of https:\/\/huggingface.co\/docs\/datasets\/faiss_es, for the sake of presenting the bug with a simple example.\r\n\r\n## Expected results\r\n\r\nIdeally, the `faiss` index should be calculated over the \ud83e\udd17 `Dataset` and no exception should be triggered.\r\n\r\n## Actual results\r\n\r\nBut what happens instead is that a `ValueError: Columns 
['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']`, which makes no sense as that column has been previously dropped.\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 2.2.2\r\n- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31\r\n- Python version: 3.9.5\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4398\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4397","id":1246597632,"node_id":"PR_kwDODunzps44XcG3","number":4397,"title":"Fix dependency on dill version","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653400463000,"updated_at":1653487372000,"closed_at":1653486848000,"author_association":"MEMBER","active_lock_reason":null,"body":"We had to make a hotfix by pinning dill:\r\n- #4380\r\n\r\nbecause from version 0.3.5, our custom `save_function` pickling function was raising an exception:\r\n- #4379\r\n\r\nThis PR fixes this by implementing our custom `save_function` depending on the version of dill.\r\n\r\nCC: @anivegesana \r\n\r\nThis PR needs first being merged:\r\n- [x] #4384\r\n - so that a circular import is fixed\r\n\r\nIt is also convenient to merge first:\r\n- [x] 
#4385","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4397\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4397","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4397","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4397.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4397.patch","merged_at":1653486848000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4396","id":1245479399,"node_id":"PR_kwDODunzps44T0Di","number":4396,"title":"Fix URL in gem dataset for totto config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653326172000,"updated_at":1653371351000,"closed_at":1653370860000,"author_association":"MEMBER","active_lock_reason":null,"body":"As commented in:\r\n- https:\/\/github.com\/huggingface\/datasets\/issues\/4386#issuecomment-1134902372\r\n\r\nCC: 
@StevenTang1998","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4396\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4396","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4396","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4396.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4396.patch","merged_at":1653370859000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4395","id":1245436486,"node_id":"PR_kwDODunzps44TrBA","number":4395,"title":"Add Pascal VOC dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4395). All of your documentation changes will be reflected on that endpoint.","Some CI fails are unrelated to your PR and fixed on master, feel free to merge master into your branch :)"],"created_at":1653323645000,"updated_at":1657120793000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds the Pascal VOC dataset in the same way TFDS has it added. 
I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4395\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4395","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4395","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4395.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4395.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4394","id":1245221657,"node_id":"I_kwDODunzps5KOJMZ","number":4394,"title":"trainer became extremely slow after reload dataset by `load_from_disk`","user":{"login":"conan1024hao","id":50416856,"node_id":"MDQ6VXNlcjUwNDE2ODU2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50416856?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/conan1024hao","html_url":"https:\/\/github.com\/conan1024hao","followers_url":"https:\/\/api.github.com\/users\/conan1024hao\/followers","following_url":"https:\/\/api.github.com\/users\/conan1024hao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/conan1024hao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/conan1024hao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/conan1024hao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/conan1024hao\/orgs","repos_url":"https:\/\/api.github.com\/users\/conan1024hao\/repos","events_url":"https:\/\/api.github.com\/users\/conan1024hao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/conan1024hao\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it\/s` from`8.62s\/it`. It's nearly 200 times... Do you have any idea? Thank you!","Similar issue: https:\/\/github.com\/huggingface\/transformers\/issues\/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.","I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"\/pathto\/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"\/pathto\/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. 
It seems that the problem is the OS's I\/O speed limit.","Hi ! Currently `save_to_disk` saves one big Arrow file, which causes some slowdowns. This has been discussed in #3735 and we'll implement sharding pretty soon to solve this.\r\n\r\nFor now you can try splitting and saving your dataset in several Arrow files. Then you can load them one by one and use `concatenate_datasets` to have your big dataset again, hopefully with better speed"],"created_at":1653314677000,"updated_at":1654531681000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nDue to a memory problem, I need to save my tokenized datasets locally with CPU and reload them with multiple GPUs to run the training script. However, after I reload the dataset by `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card.\r\nSince I am trying to pre-train a BERT, **my dataset is very large (29058165 rows)**\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ntokenized_datasets.save_to_disk(\r\n \"\/pathto\/dataset\"\r\n )\r\ntokenized_datasets = load_from_disk(\r\n \"\/pathto\/dataset\"\r\n )\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_datasets[\"train\"] if training_args.do_train else None,\r\n eval_dataset=tokenized_datasets[\"validation\"]\r\n if training_args.do_eval\r\n else None,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n )\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n```\r\n\r\n## Expected results\r\nWithout the save and reload process, I only need about one day to run the whole script with one A100 card.\r\n\r\n## Actual results\r\n```\r\n[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****\r\n[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165\r\n[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5\r\n[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16\r\n[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. 
parallel, distributed & accumulation) = 256\r\n[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2\r\n[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540\r\n 0%| | 1\/567540 [00:09<1544:49:04, 9.80s\/it]\r\n 0%| | 2\/567540 [00:17<1320:00:17, 8.37s\/it]\r\n 0%| | 3\/567540 [00:26<1393:10:17, 8.84s\/it]\r\n 0%| | 4\/567540 [00:34<1344:56:33, 8.53s\/it]\r\n 0%| | 5\/567540 [00:43<1359:36:12, 8.62s\/it]\r\n```\r\n\r\n## Environment info\r\n```\r\ntorch 1.11.0+cu113\r\ntorchaudio 0.11.0+cu113\r\ntorchvision 0.12.0+cu113\r\ntransformers 4.18.0\r\ndatasets 2.2.2\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4394\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4393","id":1244876662,"node_id":"PR_kwDODunzps44RxWN","number":4393,"title":"Update CI deprecated legacy image","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653298542000,"updated_at":1653300508000,"closed_at":1653299995000,"author_association":"MEMBER","active_lock_reason":null,"body":"Now our CI still uses a deprecated legacy image:\r\n> You\u2019re using a [deprecated Docker convenience image.](https:\/\/discuss.circleci.com\/t\/legacy-convenience-image-deprecation\/41034) Upgrade to a next-gen Docker convenience image.\r\n\r\nThis PR updates to next-generation convenience image.\r\n\r\nRelated to:\r\n- 
#2955","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4393\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4393","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4393","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4393.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4393.patch","merged_at":1653299995000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4392","id":1244859971,"node_id":"PR_kwDODunzps44RtsX","number":4392,"title":"remove int documentation from logging docs","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653297895000,"updated_at":1653319015000,"closed_at":1653318512000,"author_association":"MEMBER","active_lock_reason":null,"body":"Removes the `int` documentation from the [logging section](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/logging_methods#levels) of the docs.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4392\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4392","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4392","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4392.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4392.patch","merged_at":1653318512000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4391","id":1244839185,"node_id":"PR_kwDODunzps44RpGv","number":4391,"title":"Refactor column mappings for question answering datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)","Does that mean that we need to change the metadata?","> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)","Oh good! thanks for the heads up!"],"created_at":1653297194000,"updated_at":1653397020000,"closed_at":1653396528000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.\r\n\r\nAs observed in https:\/\/github.com\/huggingface\/datasets\/pull\/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. 
For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.\r\n\r\ncc @sashavor ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4391\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4391","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4391","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4391.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4391.patch","merged_at":1653396528000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4390","id":1244835877,"node_id":"PR_kwDODunzps44RoXs","number":4390,"title":"Fix metadata validation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653297080000,"updated_at":1654075672000,"closed_at":1654075165000,"author_association":"MEMBER","active_lock_reason":null,"body":"Since Python 3.8, the typing module:\r\n- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`\r\n- provides the `get_args` function instead: `get_args(List)`\r\n\r\nThis PR implements a fix for Python >=3.8 while maintaining backward
compatibility.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4390\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4390","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4390","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4390.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4390.patch","merged_at":1654075165000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4389","id":1244693690,"node_id":"PR_kwDODunzps44RKMn","number":4389,"title":"Fix bug in gem dataset for wiki_auto_asset_turk config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653290389000,"updated_at":1653302306000,"closed_at":1653301795000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes some URLs.\r\n\r\n\r\nFix #4386.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4389\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4389","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4389","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4389.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4389.patch","merged_at":1653301795000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4388","id":1244645158,"node_id":"PR_kwDODunzps44RAG1","number":4388,"title":"Set builder name from module instead of class","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653287195000,"updated_at":1653456283000,"closed_at":1653455775000,"author_association":"MEMBER","active_lock_reason":null,"body":"Now the builder name attribute is set from from the builder class name.\r\n\r\nThis PR sets the builder name attribute from the module name instead. Some motivating reasons:\r\n- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset\r\n- The name of the module (i.e. the file containing the loading loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name\r\n- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is subclass of `DatasetBuilder` (independently of its name). 
We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module\/directory\/dataset_id\r\n\r\nIMO it makes more sense to align the caching directory name with the dataset_id\/directory\/module name instead of the builder class name.\r\n\r\nFix #4381.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4388\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4388","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4388","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4388.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4388.patch","merged_at":1653455775000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4387","id":1244147817,"node_id":"I_kwDODunzps5KKDBp","number":4387,"title":"device\/google\/accessory\/adk2012 - Git at Google","user":{"login":"Aeckard45","id":87345839,"node_id":"MDQ6VXNlcjg3MzQ1ODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/87345839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aeckard45","html_url":"https:\/\/github.com\/Aeckard45","followers_url":"https:\/\/api.github.com\/users\/Aeckard45\/followers","following_url":"https:\/\/api.github.com\/users\/Aeckard45\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aeckard45\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aeckard45\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aeckard45\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aeckard45\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aeckard45\/repos","events_url":"https:\/\/api.github.com\/users\/Aeckard45\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aeckard45\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653195439000,"updated_at":1653287787000,"closed_at":1653287787000,"author_association":"NONE","active_lock_reason":null,"body":"\"git clone https:\/\/android.googlesource.com\/device\/google\/accessory\/adk2012\"\n 
https:\/\/android.googlesource.com\/device\/google\/accessory\/adk2012\/#:~:text=git%20clone%20https%3A\/\/android.googlesource.com\/device\/google\/accessory\/adk2012","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4387\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4386","id":1243965532,"node_id":"I_kwDODunzps5KJWhc","number":4386,"title":"Bug for wiki_auto_asset_turk from GEM","user":{"login":"StevenTang1998","id":37647985,"node_id":"MDQ6VXNlcjM3NjQ3OTg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37647985?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/StevenTang1998","html_url":"https:\/\/github.com\/StevenTang1998","followers_url":"https:\/\/api.github.com\/users\/StevenTang1998\/followers","following_url":"https:\/\/api.github.com\/users\/StevenTang1998\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/StevenTang1998\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/StevenTang1998\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/StevenTang1998\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/StevenTang1998\/orgs","repos_url":"https:\/\/api.github.com\/users\/StevenTang1998\/repos","events_url":"https:\/\/api.github.com\/users\/StevenTang1998\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/StevenTang1998\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ","Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https:\/\/github.com\/huggingface\/datasets#egg=datasets\r\n```","Thanks for your reply!!\r\nAnd the totto dataset has the same problem. 
The url should be change to [https:\/\/storage.googleapis.com\/totto-public\/totto_data.zip](https:\/\/storage.googleapis.com\/totto-public\/totto_data.zip).","Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB\/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB\/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto\/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to ...\/.cache\/huggingface\/datasets\/totto\/default\/1.0.0\/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 188M\/188M [00:32<00:00, 5.77MB\/s]\r\nDataset totto downloaded and prepared to ...\/.cache\/huggingface\/datasets\/totto\/default\/1.0.0\/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. 
Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 147.95it\/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```","Sorry, I didn't express it clearly. 
It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n","@StevenTang1998 fixed in:\r\n- #4396","Thanks!!"],"created_at":1653136290000,"updated_at":1653371752000,"closed_at":1653301795000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe script of wiki_auto_asset_turk for GEM may be out of date.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\ndatasets.load_dataset('gem', 'wiki_auto_asset_turk')\r\n```\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1731, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 640, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1158, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 707, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/tangtianyi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/gem\/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1\/gem.py\", line 538, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 416, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 294, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 351, in map_nested\r\n mapped = [\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 352, in \r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 288, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 320, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 234, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/tangtianyi\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 579, in get_from_cache\r\n raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at 
https:\/\/github.com\/facebookresearch\/asset\/raw\/master\/dataset\/asset.test.orig\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4386\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4385","id":1243921287,"node_id":"PR_kwDODunzps44OwXF","number":4385,"title":"Test dill","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.","Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_.py` for each `.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. 
Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https:\/\/github.com\/huggingface\/datasets\/issues\/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.","Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)"],"created_at":1653123463000,"updated_at":1653467413000,"closed_at":1653466908000,"author_association":"MEMBER","active_lock_reason":null,"body":"Regression test for future releases of `dill`.\r\n\r\nRelated to #4379. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4385\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4385","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4385","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4385.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4385.patch","merged_at":1653466908000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4384","id":1243919748,"node_id":"PR_kwDODunzps44OwFr","number":4384,"title":"Refactor download","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?","The might be only there to help type checkers, but I am not 
too familiar with the code base to know for sure. I think this might be useful:\n\nhttps:\/\/docs.python.org\/3\/library\/typing.html#typing.TYPE_CHECKING","> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. "],"created_at":1653122964000,"updated_at":1653475922000,"closed_at":1653475423000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package \"download\". Some motivating arguments:\r\n- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities\r\n- abstraction: the level of abstraction of \"download\" (higher) is not the same as \"utils\" (lower); putting different levels of abstraction together, makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower\r\n- architectural: \"download\" is a domain-specific functionality of our library\/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, \"utils\" are always a low-level set of functionalities, not directly related to our domain\/business core logic (all libraries have \"utils\"), thus at the periphery of our lib architecture\r\n\r\nAlso note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making more and more difficult to make enhancements.\r\n\r\nAs a concrete example in this case, please see: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/12185\/workflows\/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d\/jobs\/72860\r\n- After an extension, a circular import is found\r\n- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:\r\n```\r\nImportError while loading conftest '\/home\/circleci\/datasets\/tests\/conftest.py'.\r\ntests\/conftest.py:12: in \r\n import datasets\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/__init__.py:37: in \r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py:59: in \r\n from . 
import config\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/config.py:8: in \r\n from .utils.logging import get_logger\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/utils\/__init__.py:30: in \r\n from .download_manager import DownloadConfig, DownloadManager, DownloadMode\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py:39: in \r\n from .py_utils import NestedDataStructure, map_nested, size_str\r\n..\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py:608: in \r\n if config.DILL_VERSION < version.parse(\"0.3.5\"):\r\nE AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'\r\n```\r\n\r\nImports:\r\n- datasets\r\n - Dataset: lower level than datasets\r\n - config: lower level than Dataset\r\n - logger: lower level than config\r\n - DownloadManager: !!! HIGHER level of abstraction than logger!!\r\n\r\nWhy does importing the logger require importing DownloadManager?!?\r\n- Logically, it does not make sense\r\n- This is due to an error in the design\/architecture of our library:\r\n - To import the logger, we need to import it from `.utils.logging`\r\n - To import `.utils.logging` we need to import `.utils`\r\n - The import of `.utils` requires the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`!\r\n\r\nWhen putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules and moreover between modules at different levels of abstraction (to import a lower-level module, we must import a higher-level module). Additionally, it is clear that it makes no sense that in order to import `logging` we must import `download_manager` first.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4384\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4384","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4384","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4384.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4384.patch","merged_at":1653475423000},"is_pull_request":true} 
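To make the dependency argument above concrete, here is a small sketch of the `typing.TYPE_CHECKING` escape hatch mentioned in the comments, which keeps a type annotation without creating the runtime import edge (module names are illustrative):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime, so this
    # import cannot participate in a circular-import chain.
    from datasets.download import DownloadManager

def fetch_all(manager: "DownloadManager", urls: list) -> list:
    # At runtime the manager is used duck-typed; loading this module does
    # not import the higher-level download code at all.
    return [manager.download(url) for url in urls]
```

Note that the PR's actual fix is architectural rather than a deferred import: it moves the download code into its own "download" package so that the low-level `utils` package no longer re-exports it.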
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4383","id":1243856981,"node_id":"I_kwDODunzps5KI8BV","number":4383,"title":"L","user":{"login":"AronCodes21","id":99847861,"node_id":"U_kgDOBfOOtQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99847861?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AronCodes21","html_url":"https:\/\/github.com\/AronCodes21","followers_url":"https:\/\/api.github.com\/users\/AronCodes21\/followers","following_url":"https:\/\/api.github.com\/users\/AronCodes21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AronCodes21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AronCodes21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AronCodes21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AronCodes21\/orgs","repos_url":"https:\/\/api.github.com\/users\/AronCodes21\/repos","events_url":"https:\/\/api.github.com\/users\/AronCodes21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AronCodes21\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653104878000,"updated_at":1653160813000,"closed_at":1653160813000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the L\nL\n## Expected L\nA clear and concise lmll\nSpecify the actual results or traceback.\n\n## Environment info\n\n- `datasets` version:\n- Platform:\n- Python version:\n- PyArrow version:","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4383\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4382","id":1243839783,"node_id":"I_kwDODunzps5KI30n","number":4382,"title":"First time 
trying","user":{"login":"Aeckard45","id":87345839,"node_id":"MDQ6VXNlcjg3MzQ1ODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/87345839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aeckard45","html_url":"https:\/\/github.com\/Aeckard45","followers_url":"https:\/\/api.github.com\/users\/Aeckard45\/followers","following_url":"https:\/\/api.github.com\/users\/Aeckard45\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aeckard45\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aeckard45\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aeckard45\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aeckard45\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aeckard45\/repos","events_url":"https:\/\/api.github.com\/users\/Aeckard45\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aeckard45\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1653099318000,"updated_at":1653160844000,"closed_at":1653160844000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4382\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4381","id":1243478863,"node_id":"I_kwDODunzps5KHftP","number":4381,"title":"Bug in caching 2 datasets both with the same builder class 
name","user":{"login":"NouamaneTazi","id":29777165,"node_id":"MDQ6VXNlcjI5Nzc3MTY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29777165?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NouamaneTazi","html_url":"https:\/\/github.com\/NouamaneTazi","followers_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/followers","following_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/orgs","repos_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/repos","events_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NouamaneTazi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same 
directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`","Hi @NouamaneTazi, please note that after our fix:\r\n- #4388\r\n\r\nwe do not consider the class name anymore, but the name of the file where the loading builder class is implemented. "],"created_at":1653070683000,"updated_at":1654157917000,"closed_at":1653455775000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nThe two datasets `mteb\/mtop_intent` and `mteb\/mtop_domain `use both the same cache folder `.cache\/huggingface\/datasets\/mteb___mtop`. So if you first load `mteb\/mtop_intent` then datasets will not load `mteb\/mtop_domain`.\r\nIf you delete this cache folder and flip the order how you load the two datasets , you will get the opposite datasets loaded (difference is here in terms of the label and label_text).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"mteb\/mtop_intent\", \"en\")\r\nprint(dataset['train'][0])\r\ndataset = datasets.load_dataset(\"mteb\/mtop_domain\", \"en\")\r\nprint(dataset['train'][0])\r\n```\r\n\r\n## Expected results\r\n```\r\nReusing dataset mtop (\/home\/nouamane\/.cache\/huggingface\/datasets\/mteb___mtop_intent\/en\/0.0.0\/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 920.14it\/s]\r\n{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}\r\nReusing dataset mtop (\/home\/nouamane\/.cache\/huggingface\/datasets\/mteb___mtop_domain\/en\/0.0.0\/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 1307.59it\/s]\r\n{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}\r\n```\r\n\r\n## Actual results\r\n```\r\nReusing dataset mtop 
(\/home\/nouamane\/.cache\/huggingface\/datasets\/mteb___mtop\/en\/0.0.0\/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 920.14it\/s]\r\n{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}\r\nReusing dataset mtop (\/home\/nouamane\/.cache\/huggingface\/datasets\/mteb___mtop\/en\/0.0.0\/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 1307.59it\/s]\r\n{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}\r\n```\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.1\r\n- Platform: macOS-12.1-arm64-arm-64bit\r\n- Python version: 3.9.12\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4381\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4380","id":1243183054,"node_id":"PR_kwDODunzps44MUz0","number":4380,"title":"Pin 
dill","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653054859000,"updated_at":1655114632000,"closed_at":1653064384000,"author_association":"MEMBER","active_lock_reason":null,"body":"Hotfix #4379.\r\n\r\nCC: @sgugger ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4380\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4380","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4380","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4380.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4380.patch","merged_at":1653064384000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4379","id":1243175854,"node_id":"I_kwDODunzps5KGVuu","number":4379,"title":"Latest dill release raises 
exception","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed by:\r\n- #4380 ","Just an additional insight, the latest dill 
(either 0.3.5 or 0.3.5.1) also broke the hashing\/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```","@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?","Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.","Hi @anivegesana, that test is in the `transformers` library:\r\n- https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/test_pytorch_examples.py#L449\r\n- https:\/\/github.com\/huggingface\/transformers\/blob\/main\/examples\/pytorch\/speech-recognition\/run_speech_recognition_seq2seq.py ","@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue; it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Because datasets copies the `save_function` directly from the dill API, it stops working with the new dill version: `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/95193ae61e92aa537d0c65d37a1fd9d2393aae89\/src\/datasets\/utils\/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\n# compare version components numerically, not lexicographically\r\nif tuple(int(part) for part in dill.__version__.split('.')[:3]) < (0, 3, 5):\r\n    # current save_function code inside here\r\nelse:\r\n    # new save_function code inside here with the following line inserted after creating the globals\r\n    globs = {k: globs[k] for k in sorted(globs.keys())}\r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation\/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).","Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copy-pasted the old `save_function` function from `dill`, just adding a line to make the order of the global variables `globs` deterministic. 
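\r\n\r\nIn simplified form, the tweak amounts to the following (a minimal runnable sketch, not our exact copy of `save_function`):\r\n```py\r\n# dill may collect a function's globals in any order;\r\n# rebuilding the dict with sorted keys makes the dump (and thus the hash) deterministic\r\nglobs = {\"b\": 2, \"a\": 1}\r\nglobs = {k: globs[k] for k in sorted(globs.keys())}  # always {'a': 1, 'b': 2}\r\n```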
\r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation\/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ","I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation\/dill#19"],"created_at":1653054516000,"updated_at":1653148406000,"closed_at":1653066387000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nAs reported by @sgugger, latest dill release is breaking things with Datasets.\r\n\r\n```\r\n______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________\r\n\r\n\r\nself = , timeout = None\r\n\r\n def get(self, timeout=None):\r\n self.wait(timeout)\r\n if not self.ready():\r\n raise TimeoutError\r\n if self._success:\r\n return self._value\r\n else:\r\n> raise self._value\r\nE TypeError: '>' not supported between instances of 'NoneType' and 'float'\r\n```\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4379\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4378","id":1242935373,"node_id":"PR_kwDODunzps44Lf2R","number":4378,"title":"Tidy up license metadata for google_wellformed_query, newspop, 
sick","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","& thank you!"],"created_at":1653041772000,"updated_at":1653400223000,"closed_at":1653397827000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4378\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4378","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4378","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4378.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4378.patch","merged_at":1653397827000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4377","id":1242746186,"node_id":"PR_kwDODunzps44K4OY","number":4377,"title":"Fix checksum and bug in irc_disentangle 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1653031768000,"updated_at":1653039276000,"closed_at":1653038792000,"author_association":"MEMBER","active_lock_reason":null,"body":"There was a bug in filepath segment: \r\n- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`\r\n- right: `jkkummerfeld-irc-disentanglement-35f0a40`\r\n\r\nAlso there was a bug in the checksum of the downloaded file.\r\n\r\nThis PR fixes these issues.\r\n\r\nFix partially #4376.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4377\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4377","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4377","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4377.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4377.patch","merged_at":1653038792000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4376","id":1242218144,"node_id":"I_kwDODunzps5KCr6g","number":4376,"title":"irc_disentagle viewer 
error","user":{"login":"labouz","id":25671683,"node_id":"MDQ6VXNlcjI1NjcxNjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25671683?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/labouz","html_url":"https:\/\/github.com\/labouz","followers_url":"https:\/\/api.github.com\/users\/labouz\/followers","following_url":"https:\/\/api.github.com\/users\/labouz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/labouz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/labouz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/labouz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/labouz\/orgs","repos_url":"https:\/\/api.github.com\/users\/labouz\/repos","events_url":"https:\/\/api.github.com\/users\/labouz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/labouz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["DUPLICATED comment from https:\/\/github.com\/huggingface\/datasets\/issues\/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same 
error\r\n```\r\n[38](file:\/\/\/Library\/Frameworks\/Python.framework\/Versions\/3.8\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:\/\/\/Library\/Frameworks\/Python.framework\/Versions\/3.8\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:\/\/\/Library\/Frameworks\/Python.framework\/Versions\/3.8\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:\/\/\/Library\/Frameworks\/Python.framework\/Versions\/3.8\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/github.com\/jkkummerfeld\/irc-disentanglement\/tarball\/master']\r\n```\r\nI attempted to use `ignore_verifications` like so:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB\/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB\/s] \r\nNo config specified, defaulting to: irc_disentangle\/ubuntu\r\nDownloading and preparing dataset irc_disentangle\/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to \/Users\/laylabouzoubaa\/.cache\/huggingface\/datasets\/irc_disentangle\/ubuntu\/1.0.0\/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB\/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to \/Users\/laylabouzoubaa\/.cache\/huggingface\/datasets\/irc_disentangle\/ubuntu\/1.0.0\/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 675.38it\/s]\r\n```\r\nbut this returns an empty dataset:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nNot sure what else to try at this point.\r\nThanks in advance \ud83e\udd17","Thanks for reporting, @labouz. I'm addressing it. 
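\r\n\r\nOnce the fix is merged, forcing a re-generation from the already-downloaded archive should work; a minimal sketch:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# rebuild the dataset from the cached download instead of reusing the bad cached version\r\nds = load_dataset(\"irc_disentangle\", download_mode=\"reuse_cache_if_exists\")\r\n```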
","The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.","parfait!\r\nit works now, thank you \ud83d\ude4f "],"created_at":1652987716000,"updated_at":1654158000000,"closed_at":1654158000000,"author_association":"NONE","active_lock_reason":null,"body":"the dataviewer shows this message for \"ubuntu\" - \"train\", \"test\", and \"validation\" splits:\r\n```\r\nServer error\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n\r\n```\r\nit appears to give the same message for the \"channel_two\" data as well.\r\n\r\nI get a Checksums error when using `load_data()` with this dataset. Even with the `download_mode` and `ignore_verifications` options set. i referenced the issue here: https:\/\/github.com\/huggingface\/datasets\/issues\/3807 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4376\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4375","id":1241921147,"node_id":"PR_kwDODunzps44IMCS","number":4375,"title":"Support DataLoader with num_workers > 0 in streaming mode","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Alright this is finally ready for review ! 
It's quite long, I'm sorry, but it's not easy to disentangle everything ^^'\r\n\r\nThe main additions are in\r\n- src\/datasets\/formatting\/dataset_wrappers\/torch_iterable_dataset.py\r\n- src\/datasets\/iterable_dataset.py\r\n- src\/datasets\/utils\/patching.py","Added some comments and an error when lists have different lengths for sharding :)","Let's resolve the merge conflict and the CI error (if it's related to the changes), and I can review the PR again.","Feel free to review again :) The CI fail is unrelated to this PR and will be fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/4472 (the hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos)","CI failures are unrelated to this PR - merging :)\r\n\r\n(CI fails are a mix of pip install fails and Hub fails)","@lhoestq you're our hero :)"],"created_at":1652972431000,"updated_at":1656950714000,"closed_at":1654894047000,"author_association":"MEMBER","active_lock_reason":null,"body":"### Issue\r\n\r\nIt's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:\r\n\r\n- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https:\/\/github.com\/huggingface\/datasets\/issues\/3950\r\n- streaming extension is failing: https:\/\/github.com\/huggingface\/datasets\/issues\/3951\r\n- `fsspec` doesn't work out of the box in subprocesses\r\n\r\n### Solution in this PR\r\n\r\nI fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`.\r\n\r\nI also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method.\r\n\r\nI also had to make a few changes to the patching that enables streaming in dataset scripts:\r\n- the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated\r\n- I improved it to also check for renamed modules or attributes (ex: pandas vs pd)\r\n- I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stays unchanged - other than that, I didn't change the content of the extended Path methods for streaming\r\n- I fixed a bug with the `pd.read_csv` patch: opening the file in \"rb\" mode was missing, which caused some datasets not to work in streaming mode, and compression inference was missing as well\r\n\r\n### A few details regarding `fsspec` in multiprocessing\r\n\r\nFrom https:\/\/github.com\/fsspec\/filesystem_spec\/pull\/963#issuecomment-1131709948 :\r\n> Non-async instances might be safe in the forked child, if they hold no open files\/sockets etc.; I'm not sure any implementations pass this test!\r\n> If any async instance has been created, the newly forked processes must:\r\n> 1. discard references to locks, threads and event loops and make new ones\r\n> 2. not use any async fsspec instances from the parent process\r\n> 3. clear all class instance caches\r\n\r\nTherefore in a DataLoader's worker, I clear the reference to the loop and thread (1). 
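\r\n\r\nConcretely, the worker-side cleanup can look like this sketch (assuming `fsspec.asyn` stores its event loop and IO thread in module-level one-element lists, as recent fsspec versions do):\r\n```python\r\nimport fsspec.asyn\r\n\r\ndef _clear_fsspec_async_state():\r\n    # drop the parent process' references so the forked DataLoader worker\r\n    # creates a fresh event loop and IO thread on first use\r\n    fsspec.asyn.iothread[0] = None\r\n    fsspec.asyn.loop[0] = None\r\n```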
We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/3950\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/3951\r\n\r\nTODO:\r\n- [x] fix tests","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4375\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4375","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4375","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4375.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4375.patch","merged_at":1654894046000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4374","id":1241860535,"node_id":"I_kwDODunzps5KBUm3","number":4374,"title":"extremely slow processing when using a custom dataset ","user":{"login":"StephennFernandes","id":32235549,"node_id":"MDQ6VXNlcjMyMjM1NTQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32235549?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/StephennFernandes","html_url":"https:\/\/github.com\/StephennFernandes","followers_url":"https:\/\/api.github.com\/users\/StephennFernandes\/followers","following_url":"https:\/\/api.github.com\/users\/StephennFernandes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/StephennFernandes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/StephennFernandes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/StephennFernandes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/StephennFernandes\/orgs","repos_url":"https:\/\/api.github.com\/users\/StephennFernandes\/repos","events_url":"https:\/\/api.github.com\/users\/StephennFernandes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/StephennFernandes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you 
check `len(lang_dataset[\"train\"])` and `lang_dataset[\"train\"].data.nbytes` of both datasets please ? It can also be helpful to check the distribution of lengths of each examples in your dataset."],"created_at":1652969885000,"updated_at":1654532068000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub\r\n\r\nI have a large .txt file of 22 GB which i load into HF dataset \r\n\r\n`lang_dataset = datasets.load_dataset(\"text\", data_files=\"hi.txt\")`\r\n\r\nfurther i use a pre-processing function to clean the dataset \r\n\r\n `lang_dataset[\"train\"] = lang_dataset[\"train\"].map(\r\n remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)`\r\n\r\nthe following processing takes astronomical time to process, while hoging all the ram. \r\n\r\nsimilar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data. \r\n`lang_dataset = datasets.load_dataset(\"oscar-corpus\/OSCAR-2109\", \"hi\", use_auth_token=True)`\r\n\r\nthe hours predicted to preprocess are as follows:\r\n\r\nhuggingface hub dataset: 6.5 hrs \r\ncustom loaded dataset: 7000 hrs\r\n\r\nnote: both the datasets are almost actually same, just provided by different sources with has +\/- some samples, only one is hosted on the HF hub and the other is downloaded in a text format. \r\n\r\n## Steps to reproduce the bug\r\n```\r\nimport datasets\r\nimport psutil\r\nimport sys \r\nimport glob \r\nfrom fastcore.utils import listify\r\nimport re \r\nimport gc \r\n\r\ndef remove_non_indic_sentences(example): \r\n tmp_ls = []\r\n eng_regex = r'[. a-zA-Z0-9\u00d6\u00c4\u00c5\u00f6\u00e4\u00e5 _.,!\"\\'\\\/$]*'\r\n for e in listify(example['text']):\r\n matches = re.findall(eng_regex, e)\r\n for match in (str(match).strip() for match in matches if match not in [\"\",\" \", \" \", \",\", \" ,\", \", \", \" , \"]):\r\n if len(list(match.split(\" \"))) > 2:\r\n e = re.sub(match,\" \",e,count=1)\r\n tmp_ls.append(e)\r\n gc.collect()\r\n example['clean_text'] = tmp_ls\r\n return example\r\n\r\nlang_dataset = datasets.load_dataset(\"text\", data_files=\"hi.txt\")\r\n\r\nlang_dataset[\"train\"] = lang_dataset[\"train\"].map(\r\n remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)\r\n\r\n\r\n## same thing work much faster when loading similar dataset from hub\r\n \r\nlang_dataset = datasets.load_dataset(\"oscar-corpus\/OSCAR-2109\", \"hi\", split=\"train\", use_auth_token=True)\r\n\r\nlang_dataset[\"train\"] = lang_dataset[\"train\"].map(\r\n remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)\r\n\r\n```\r\n\r\n## Actual results\r\n\r\nsimilar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data. 
\r\n`lang_dataset = datasets.load_dataset(\"oscar-corpus\/OSCAR-2109\", \"hi\", use_auth_token=True)`\r\n\r\n\r\n**The hours predicted for preprocessing are as follows:**\r\nhuggingface hub dataset: 6.5 hrs \r\ncustom loaded dataset: 7000 hrs\r\n\r\n**I even tried the following:**\r\n\r\n- sharding the large 22 GB text file into smaller files and loading\r\n- saving the file to disk and then loading \r\n- using a lower `num_proc` \r\n- using a smaller batch size \r\n- processing without batches, i.e. without `batched=True`\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.2.dev0\r\n- Platform: Ubuntu 20.04 LTS \r\n- Python version: 3.9.7 \r\n- PyArrow version: 8.0.0 \r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4374\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4373","id":1241769310,"node_id":"PR_kwDODunzps44HsaY","number":4373,"title":"Remove links in docs to old dataset viewer","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652966679000,"updated_at":1653060268000,"closed_at":1653059765000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Remove the links in the docs to the no longer maintained dataset 
viewer.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4373\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4373","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4373","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4373.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4373.patch","merged_at":1653059765000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4372","id":1241703826,"node_id":"PR_kwDODunzps44HeYC","number":4372,"title":"Check if dataset features match before push in `DatasetDict.push_to_hub`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652963550000,"updated_at":1653060216000,"closed_at":1653059730000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4211 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4372\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4372","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4372","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4372.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4372.patch","merged_at":1653059730000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4371","id":1241500906,"node_id":"PR_kwDODunzps44GzSZ","number":4371,"title":"Add missing language tags for udhr dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652952850000,"updated_at":1654689804000,"closed_at":1653039790000,"author_association":"MEMBER","active_lock_reason":null,"body":"Related to #4362.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4371\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4371","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4371","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4371.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4371.patch","merged_at":1653039790000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4369","id":1240245642,"node_id":"PR_kwDODunzps44CpCe","number":4369,"title":"Add redirect to dataset script in the repo structure 
page","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652893533000,"updated_at":1652948341000,"closed_at":1652947851000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/hub-docs\/pull\/146 I added a redirection to the dataset scripts documentation in the repository structure page.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4369\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4369","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4369","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4369.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4369.patch","merged_at":1652947851000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4368","id":1240064860,"node_id":"PR_kwDODunzps44CDFk","number":4368,"title":"Add long answer candidates to natural questions 
dataset","user":{"login":"seirasto","id":4257308,"node_id":"MDQ6VXNlcjQyNTczMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4257308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/seirasto","html_url":"https:\/\/github.com\/seirasto","followers_url":"https:\/\/api.github.com\/users\/seirasto\/followers","following_url":"https:\/\/api.github.com\/users\/seirasto\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/seirasto\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/seirasto\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/seirasto\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/seirasto\/orgs","repos_url":"https:\/\/api.github.com\/users\/seirasto\/repos","events_url":"https:\/\/api.github.com\/users\/seirasto\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/seirasto\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Once we have added `long_answer_candidates` maybe it would be worth to also add the missing `candidate_index` (inside `long_answer`). What do you think, @seirasto ?","Also note the \"Data Fields\" section in the README is missing the `long_answer` field.\r\n\r\nMoreover, there is no instance example in \"Data Instances\" section.","We could either make these fixes in this PR or in a subsequent PR.","@albertvillanova I've added the missing fields and updated the README to include a data instance and some other things. ","Great! I've made the updates to align the README. Please let me know if I missed anything.","As there were many minor little fixes, I thought it would be easier to fix them directly.","I think the loading script is OK now. If it is also validated by another datasets maintainer, I could run the generation of the pre-processed data and then merge this PR into master (once all the tests are green).\r\n\r\nCC: @lhoestq ","It looks good to me, thanks @seirasto !","I have merged the master branch, so that we include all the fixes on Apache Beam + Google Dataflow.","Pre-processing is running!\r\n\r\nAlready finished for \"dev\" config:\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets\/natural_questions\", \"dev\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['id', 'document', 'question', 'long_answer_candidates', 'annotations'],\r\n num_rows: 7830\r\n })\r\n})\r\n```","There is an issue while running the preprocessing for the \"default\" (train+dev) config. Train data files are larger than than dev ones and workers run out of memory.\r\n\r\nI'm opening a separate issue to handle this problem: #4525","@seirasto is proposing uploading their preprocessed data files to our Datasets bucket.\r\n\r\nI think @lhoestq can give a more informed answer about authentication requirements.","Now that the data fiels are uploaded, can you merge the `main` branch into yours to re-trigger the CI @seirasto please ? :) Then I think we can merge if it's good for you @albertvillanova ","Merge is done! I think someone needs to approve the CI to run :) ","Can you run `make style` to fix the code formatting required by the CI please ?","Thanks @albertvillanova! I've committed all your suggestions.","The CI is green. 
I'm merging this PR."],"created_at":1652884542000,"updated_at":1658867441000,"closed_at":1658866722000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https:\/\/github.com\/google-research-datasets\/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format . @lhoestq @albertvillanova ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4368\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4368","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4368","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4368.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4368.patch","merged_at":1658866722000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4367","id":1240011602,"node_id":"PR_kwDODunzps44B340","number":4367,"title":"Remove config names as yaml keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I included the change from https:\/\/github.com\/huggingface\/datasets\/pull\/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)","_The documentation is not available anymore as the PR was closed or merged._","Alright it's ready now :)\r\n\r\nHere is an example for the `ade_corpus_v2` dataset card. 
Notice the new `configs` key:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/76d9a141740a03f6836feb251f6059894b8d8046\/datasets\/ade_corpus_v2\/README.md#L1-L78\r\n\r\nCI failures are only related to dataset cards missing some content."],"created_at":1652882364000,"updated_at":1653039326000,"closed_at":1653038839000,"author_association":"MEMBER","active_lock_reason":null,"body":"Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys. \r\n\r\nTo fix this, I removed the per-config-name tag separation completely and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key.\r\n\r\nThis is related to https:\/\/github.com\/huggingface\/datasets\/pull\/2362 (internal https:\/\/github.com\/huggingface\/moon-landing\/issues\/946).\r\n\r\nRemoving the dots in the YAML keys would also allow us to do as in https:\/\/github.com\/huggingface\/datasets\/pull\/4302, which removes a hack that replaces all the dots by underscores in the YAML tags.\r\n\r\nI also added a test in the CI that checks all the YAML tags to make sure that:\r\n\r\n- they can be parsed using a YAML parser\r\n- they contain only valid YAML tags like languages or task_ids","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4367\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4367","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4367","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4367.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4367.patch","merged_at":1653038839000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4366","id":1239534165,"node_id":"I_kwDODunzps5J4cpV","number":4366,"title":"TypeError: __init__() missing 1 required positional argument: 
'scheme'","user":{"login":"jffgitt","id":99231535,"node_id":"U_kgDOBeonLw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99231535?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jffgitt","html_url":"https:\/\/github.com\/jffgitt","followers_url":"https:\/\/api.github.com\/users\/jffgitt\/followers","following_url":"https:\/\/api.github.com\/users\/jffgitt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jffgitt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jffgitt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jffgitt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jffgitt\/orgs","repos_url":"https:\/\/api.github.com\/users\/jffgitt\/repos","events_url":"https:\/\/api.github.com\/users\/jffgitt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jffgitt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https:\/\/github.com\/elastic\/elasticsearch-py"],"created_at":1652858249000,"updated_at":1652891782000,"closed_at":1652891781000,"author_association":"NONE","active_lock_reason":null,"body":" \"name\" : \"node-1\",\r\n \"cluster_name\" : \"elasticsearch\",\r\n \"cluster_uuid\" : \"\",\r\n \"version\" : {\r\n \"number\" : \"7.5.0\",\r\n \"build_flavor\" : \"default\",\r\n \"build_type\" : \"tar\",\r\n \"build_hash\" : \"\",\r\n \"build_date\" : \"2019-11-26T01:06:52.518245Z\",\r\n \"build_snapshot\" : false,\r\n \"lucene_version\" : \"8.3.0\",\r\n \"minimum_wire_compatibility_version\" : \"6.8.0\",\r\n \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\r\n \r\nwhen I run the order:\r\nnohup python3 custom_service.pyc > service.log 2>&1&\r\n\r\nthe log:\r\nnohup: \u5ffd\u7565\u8f93\u5165\r\nTraceback (most recent call last):\r\n File \"\/home\/xfz\/p3_custom_test\/custom_service.py\", line 55, in \r\n File \"\/home\/xfz\/p3_custom_test\/custom_service.py\", line 48, in doInitialize\r\n File \"custom_impl.py\", line 286, in custom_setup\r\n File \"custom_impl.py\", line 127, in create_es_index\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/elasticsearch\/_sync\/client\/__init__.py\", line 345, in __init__\r\n ssl_show_warn=ssl_show_warn,\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/elasticsearch\/_sync\/client\/utils.py\", line 105, in client_node_configs\r\n node_configs = hosts_to_node_configs(hosts)\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/elasticsearch\/_sync\/client\/utils.py\", line 154, in hosts_to_node_configs\r\n node_configs.append(host_mapping_to_node_config(host))\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/elasticsearch\/_sync\/client\/utils.py\", line 221, in host_mapping_to_node_config\r\n return NodeConfig(**options) # type: ignore\r\nTypeError: __init__() missing 1 required positional argument: 'scheme'\r\n[1]+ \u9000\u51fa 1 nohup python3 custom_service.pyc > service.log 2>&1\r\n\r\ncustom_service_pyc can't 
run\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4366\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4365","id":1239109943,"node_id":"PR_kwDODunzps43-4fC","number":4365,"title":"Remove dots in config names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Closing in favor of https:\/\/github.com\/huggingface\/datasets\/pull\/4367"],"created_at":1652818377000,"updated_at":1652882872000,"closed_at":1652882381000,"author_association":"MEMBER","active_lock_reason":null,"body":"20+ datasets have dots in their config names. 
However, it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.\r\n\r\nThis is related to https:\/\/github.com\/huggingface\/datasets\/pull\/2362 (internal https:\/\/github.com\/huggingface\/moon-landing\/issues\/946).\r\n\r\nAlso removing the dots in the config names would allow us to merge https:\/\/github.com\/huggingface\/datasets\/pull\/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.\r\n\r\nI also added a test in the CI that checks all the YAML tags to make sure that:\r\n- they can be parsed using a YAML parser\r\n- they contain only valid YAML tags like `languages` or `task_ids`\r\n- they contain valid config names (no invalid characters `<>:\/\\|?*.`)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4365\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4365","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4365","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4365.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4365.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4364","id":1238976106,"node_id":"PR_kwDODunzps43-bmq","number":4364,"title":"Support complex feature types as `features` in packaged loaders ","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652810003000,"updated_at":1653999983000,"closed_at":1653999392000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`\/`Audio`, `ArrayND` and 
`ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4210\r\n\r\nThis PR is also a solution for these (popular) discussions: https:\/\/discuss.huggingface.co\/t\/converting-string-label-to-int\/2816 and https:\/\/discuss.huggingface.co\/t\/class-labels-for-custom-datasets\/15130\/2\r\n\r\nTODO:\r\n* [x] tests","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":1,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4364\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4364","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4364","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4364.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4364.patch","merged_at":1653999391000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4363","id":1238897652,"node_id":"I_kwDODunzps5J2BP0","number":4363,"title":"The dataset preview is not available for this split.","user":{"login":"roholazandie","id":7584674,"node_id":"MDQ6VXNlcjc1ODQ2NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7584674?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/roholazandie","html_url":"https:\/\/github.com\/roholazandie","followers_url":"https:\/\/api.github.com\/users\/roholazandie\/followers","following_url":"https:\/\/api.github.com\/users\/roholazandie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/roholazandie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/roholazandie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/roholazandie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/roholazandie\/orgs","repos_url":"https:\/\/api.github.com\/users\/roholazandie\/repos","events_url":"https:\/\/api.github.com\/users\/roholazandie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/roholazandie\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n","Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '\/src\/services\/worker\/.venv\/lib\/python3.9\/site-packages\/librosa\/util\/utils.py'\r\n```\r\n\r\nso possibly it's related to the libraries versions?\r\n","Maybe this SO thread can help: https:\/\/stackoverflow.com\/questions\/59290386\/runtimeerror-at-cannot-cache-function-shear-dense-no-locator-available-fo","Same error for https:\/\/huggingface.co\/datasets\/LIUM\/tedlium\/viewer\/release1\/test. cc @sanchit-gandhi . I'm on it","Fixed in the datasets viewer, by setting the `NUMBA_CACHE_DIR` env var to a writable directory.","https:\/\/huggingface.co\/datasets\/Roh\/ryanspeech\/viewer\/male\/train\r\n\r\n\"Capture\r\n","https:\/\/huggingface.co\/datasets\/LIUM\/tedlium\/viewer\/\r\n\r\n\"Capture\r\n"],"created_at":1652805283000,"updated_at":1654691530000,"closed_at":1654680416000,"author_association":"NONE","active_lock_reason":null,"body":"I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https:\/\/huggingface.co\/datasets\/Roh\/ryanspeech). You can read about the companion paper accepted in interspeech 2021 [here](https:\/\/arxiv.org\/abs\/2106.08468). The dataset works fine but I can't make the dataset preview work. It gives me the following error that I don't understand. 
Can you help me to begin debugging it?\r\n\r\n```\r\nStatus code: 400\r\nException: AttributeError\r\nMessage: 'NoneType' object has no attribute 'split'\r\n``` ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4363\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4362","id":1238680112,"node_id":"PR_kwDODunzps439bkf","number":4362,"title":"Update dataset_infos for UDHN\/udhr dataset","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version number should also be increased, so that users who had previously cached it, get a new dataset download (with the additional languages)","Yep! All done (also fixed the language tags in the README which were iso639-3 instead of the expected bcp47)","I guess the language code CI failure is due to languages.json being a subset of bcp47 (see issue #4304), happy to contribute a solution here, e.g. autogeneration of the lang list from the relevant isos and the ietf bcp47 subtag register or full code for validation","> Thanks again for your contribution, @leondz.\r\n> \r\n> Yes, I think it is OK to set version 1.0.0 (as previous was 0.0.0).\r\n> \r\n> One of the CI failures is related to dummy data: once you have updated the dataset version, the dummy_data ZIP file should be moved from \"dummy\/0.0.0\/dummy_data.zip\" to \"dummy\/1.0.0\/dummy_data.zip\".\r\n\r\nOh, thanks, I missed that one\r\n\r\n\r\n> Other CI failure is related to missing languages in our resources file. 
This has been addressed in this PR:\r\n> \r\n> * #4371\r\n> \r\n> You should merge master branch into your feature branch to incorporate that fix.\r\n\r\nYeah, I saw this :) I already have the merge, thanks. I'm talking about the longer-term picture: every time another language code comes up (e.g. da-bornholm or es-VE), the json will need updating, because the current approach is non-exhaustive manual whitelisting instead of relying on the established bcp standard."],"created_at":1652795579000,"updated_at":1654716011000,"closed_at":1654715481000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Checksum update to `udhr` for issue #4361","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4362\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4362","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4362","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4362.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4362.patch","merged_at":1654715480000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4361","id":1238671931,"node_id":"I_kwDODunzps5J1KI7","number":4361,"title":"`udhr` doesn't load, dataset checksum mismatch","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652795229000,"updated_at":1654715481000,"closed_at":1654715481000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nLoading `udhr` fails due to a checksum mismatch for some source files. 
Looks like both of the source files on unicode.org have changed:\r\n\r\nsize + checksum in datasets repo:\r\n```\r\n(hfdev) leon@blade:~\/datasets\/datasets\/udhr$ jq .default.download_checksums < dataset_infos.json \r\n{\r\n \"https:\/\/unicode.org\/udhr\/assemblies\/udhr_xml.zip\": {\r\n \"num_bytes\": 2273633,\r\n \"checksum\": \"0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee\"\r\n },\r\n \"https:\/\/unicode.org\/udhr\/assemblies\/udhr_txt.zip\": {\r\n \"num_bytes\": 2107471,\r\n \"checksum\": \"087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5\"\r\n }\r\n}\r\n```\r\n\r\nsize + checksum regenerated from current source files:\r\n```\r\n(hfdev) leon@blade:~\/datasets\/datasets\/udhr$ rm dataset_infos.json\r\n(hfdev) leon@blade:~\/datasets\/datasets\/udhr$ datasets-cli test --save_infos udhr.py\r\nUsing custom data configuration default\r\nTesting builder 'default' (1\/1)\r\nDownloading and preparing dataset udhn\/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to \/home\/leon\/.cache\/huggingface\/datasets\/udhn\/default\/0.0.0\/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...\r\nDataset udhn downloaded and prepared to \/home\/leon\/.cache\/huggingface\/datasets\/udhn\/default\/0.0.0\/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 686.69it\/s]\r\nDataset Infos file saved at dataset_infos.json\r\nTest successful.\r\n(hfdev) leon@blade:~\/datasets\/datasets\/udhr$ jq .default.download_checksums < dataset_infos.json \r\n{\r\n \"https:\/\/unicode.org\/udhr\/assemblies\/udhr_xml.zip\": {\r\n \"num_bytes\": 2389690,\r\n \"checksum\": \"a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438\"\r\n },\r\n \"https:\/\/unicode.org\/udhr\/assemblies\/udhr_txt.zip\": {\r\n \"num_bytes\": 2215441,\r\n \"checksum\": \"cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe\"\r\n }\r\n}\r\n(hfdev) leon@blade:~\/datasets\/datasets\/udhr$ \r\n\r\n```\r\n\r\n\r\n--- is unicode.org a sustainable hosting solution for this dataset?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nudhr = load_dataset(\"udhr\")\r\n```\r\n\r\n## Expected results\r\nThat a Dataset object containing the UDHR data will be returned.\r\n\r\n## Actual results\r\n```\r\n>>> d = load_dataset('udhr')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset udhn\/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to 
\/home\/leon\/.cache\/huggingface\/datasets\/udhn\/default\/0.0.0\/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/leon\/.local\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1731, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/leon\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 613, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/leon\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1117, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/home\/leon\/.local\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 684, in _download_and_prepare\r\n verify_checksums(\r\n File \"\/home\/leon\/.local\/lib\/python3.9\/site-packages\/datasets\/utils\/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/unicode.org\/udhr\/assemblies\/udhr_xml.zip', 'https:\/\/unicode.org\/udhr\/assemblies\/udhr_txt.zip']\r\n>>> \r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.1 commit\/4110fb6034f79c5fb470cf1043ff52180e9c63b7\r\n- Platform: Linux Ubuntu 20.04\r\n- Python version: 3.9.12\r\n- PyArrow version: 8.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4361\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4360","id":1237239096,"node_id":"PR_kwDODunzps434izs","number":4360,"title":"Fix example in opus_ubuntu, Add license 
info","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652710948000,"updated_at":1654088767000,"closed_at":1654088229000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR \r\n* fixes a typo in the example for the`opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu`\r\n* adds the declared license info for this corpus' origin\r\n* adds an example instance\r\n* updates the data origin type","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4360\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4360","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4360","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4360.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4360.patch","merged_at":1654088229000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4359","id":1237149578,"node_id":"PR_kwDODunzps434Pb6","number":4359,"title":"Fix Version 
equality","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652707166000,"updated_at":1653409537000,"closed_at":1653409034000,"author_association":"MEMBER","active_lock_reason":null,"body":"I think `Version` equality should align with other similar cases in Python, like:\r\n```python\r\nIn [1]: \"a\" == 5, \"a\" == None\r\nOut[1]: (False, False)\r\n\r\nIn [2]: \"a\" != 5, \"a\" != None\r\nOut[2]: (True, True)\r\n```\r\n\r\nWith this PR, we will get:\r\n```python\r\nIn [3]: Version(\"1.0.0\") == 5, Version(\"1.0.0\") == None\r\nOut[3]: (False, False)\r\n\r\nIn [4]: Version(\"1.0.0\") != 5, Version(\"1.0.0\") != None\r\nOut[4]: (True, True)\r\n```\r\n\r\nNote I found this issue when `doc-builder` tried to compare:\r\n```python\r\nif param.default != inspect._empty\r\n```\r\nwhere `param.default` is an instance of `Version`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4359\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4359","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4359","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4359.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4359.patch","merged_at":1653409034000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4358","id":1237147692,"node_id":"I_kwDODunzps5JvWAs","number":4358,"title":"Missing dataset tags and sections in some dataset 
cards","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?","Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https:\/\/hf.co\/spaces\/huggingface\/datasets-tagging). They're all passed as arguments to a DatasetMetadata object used to validate the tags."],"created_at":1652707096000,"updated_at":1653925012000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Summary of CircleCI errors for different dataset metadata:\r\n\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets' :['unknown'] are not registered tags\r\n- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 
'task_categories', and 'task_ids'","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4358\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4357","id":1237037069,"node_id":"PR_kwDODunzps4333b9","number":4357,"title":"Fix warning in push_to_hub","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652701817000,"updated_at":1652714329000,"closed_at":1652713841000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix warning:\r\n```\r\nFutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4357\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4357","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4357","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4357.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4357.patch","merged_at":1652713841000},"is_pull_request":true} 
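For context on the `push_to_hub` fix above: the warning comes from the `shard_size` argument having been renamed to `max_shard_size` in `datasets` 2.1.1. A minimal sketch of the two spellings, using a toy dataset and a placeholder repo id (`username/my-dataset` is purely illustrative, and uploading requires being logged in to the Hub):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Old spelling: emits the FutureWarning quoted above and is slated for
# removal in datasets 2.4.0.
# ds.push_to_hub("username/my-dataset", shard_size=500 << 20)

# New spelling: accepts an int (bytes) or a string like "500MB".
ds.push_to_hub("username/my-dataset", max_shard_size="500MB")
```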
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4356","id":1236846308,"node_id":"PR_kwDODunzps433OsB","number":4356,"title":"Fix dataset builder default version","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface\/doc-builder#211"],"created_at":1652691910000,"updated_at":1653919018000,"closed_at":1653918474000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, when using a custom config (subclass of `BuilderConfig`), default version set at the builder level is ignored: we must set default version in the custom config class.\r\n\r\nHowever, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and \"0.0.0\" is used instead:\r\n```python\r\nds = load_dataset(\"wikipedia\", language=\"co\", date=\"20220501\", beam_runner=\"DirectRunner\")\r\n```\r\ngenerates the following config:\r\n```python\r\nWikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')\r\n```\r\nwith version \"0.0.0\" instead of \"2.0.0\".\r\n\r\nSee as a counter-example, when the config is present in `BUILDER_CONFIGS`:\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.fr\", beam_runner=\"DirectRunner\")\r\n```\r\ngenerates the following config:\r\n```python\r\nWikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')\r\n```\r\nwith correct version \"2.0.0\", as set in the custom config class.\r\n\r\n\r\nThe reason for this is that `DatasetBuilder` has a default VERSION (\"0.0.0\") that overwrites the default version set at the custom config class.\r\n\r\nThis PR:\r\n- Removes the default VERSION at 
`DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version).\r\n- Note that the `BuilderConfig` class already sets a default version = \"0.0.0\"; no need to pass this from the builder.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4356\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4356","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4356","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4356.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4356.patch","merged_at":1653918474000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4355","id":1236797490,"node_id":"PR_kwDODunzps433EgP","number":4355,"title":"Fix warning in upload_file","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652689291000,"updated_at":1652700482000,"closed_at":1652699997000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix warning:\r\n```\r\nFutureWarning: Pass path_or_fileobj='...' as keyword args. 
From version 0.7 passing these as positional arguments will result in an error\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4355\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4355","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4355","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4355.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4355.patch","merged_at":1652699997000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4354","id":1236404383,"node_id":"I_kwDODunzps5Jsgif","number":4354,"title":"Problems with WMT dataset","user":{"login":"eldarkurtic","id":8884008,"node_id":"MDQ6VXNlcjg4ODQwMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8884008?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eldarkurtic","html_url":"https:\/\/github.com\/eldarkurtic","followers_url":"https:\/\/api.github.com\/users\/eldarkurtic\/followers","following_url":"https:\/\/api.github.com\/users\/eldarkurtic\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eldarkurtic\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eldarkurtic\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eldarkurtic\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eldarkurtic\/orgs","repos_url":"https:\/\/api.github.com\/users\/eldarkurtic\/repos","events_url":"https:\/\/api.github.com\/users\/eldarkurtic\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eldarkurtic\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs\/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\n* `fr-en` (French - English)\r\n* `ru-en` (Russian - English)\r\n\r\nAnd the current implementation always uses all the subsets available for a language, so to define custom subsets, you'll have to clone the repo from the Hub and replace the line https:\/\/huggingface.co\/datasets\/wmt15\/blob\/main\/wmt_utils.py#L688 with:\r\n`for split, ss_names in (self._subsets if self.config.subsets is None else self.config.subsets).items()`\r\n\r\nThen, you can load the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path\/to\/local\/wmt15_folder\", \"\", subsets=...)","@mariosasko thanks a lot for the suggested fix! ","Hi @mariosasko \r\n\r\nAre the docs updated? If not, I would like to get on it. I am new around here, would we helpful, if you can guide.\r\n\r\nThanks","Hi @khushmeeet! The docs haven't been updated, so feel free to work on this issue. 
This is a tricky issue, so I'll give the steps you can follow to fix this:\r\n\r\nFirst, this code:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7cff5b9726a223509dbd6224de3f5f452c8d924f\/src\/datasets\/load.py#L113-L118\r\n\r\nneeds to be replaced with (makes the dataset builder search more robust and allows us to remove the ABC stuff from `wmt_utils.py`):\r\n```python\r\n for name, obj in module.__dict__.items():\r\n if inspect.isclass(obj) and issubclass(obj, main_cls_type):\r\n if inspect.isabstract(obj):\r\n continue\r\n module_main_cls = obj\r\n obj_module = inspect.getmodule(obj)\r\n if obj_module is not None and module == obj_module:\r\n break\r\n```\r\n\r\nThen, all the `wmt_utils.py` scripts need to be updated as follows (these are the diffs with the requiered changes):\r\n````diff\r\n import os\r\n import re\r\n import xml.etree.cElementTree as ElementTree\r\n-from abc import ABC, abstractmethod\r\n\r\n import datasets\r\n````\r\n\r\n````diff\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n _DESCRIPTION = \"\"\"\\\r\n-Translate dataset based on the data from statmt.org.\r\n+Translation dataset based on the data from statmt.org.\r\n\r\n-Versions exists for the different years using a combination of multiple data\r\n-sources. The base `wmt_translate` allows you to create your own config to choose\r\n-your own data\/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\r\n+Versions exist for different years using a combination of data\r\n+sources. The base `wmt` allows you to create a custom dataset by choosing\r\n+your own data\/language pair. This can be done as follows:\r\n\r\n ```\r\n-config = datasets.wmt.WmtConfig(\r\n- version=\"0.0.1\",\r\n+from datasets import inspect_dataset, load_dataset_builder\r\n+\r\n+inspect_dataset(\">> import datasets\r\n>>> a = datasets.translate.wmt.WmtConfig()\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nAttributeError: module 'datasets' has no attribute 'translate'\r\n>>> a = datasets.wmt.WmtConfig()\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nAttributeError: module 'datasets' has no attribute 'wmt'\r\n```\r\n\r\n## Expected results\r\nTo load WMT15 with given data-sources.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4354\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4353","id":1236092176,"node_id":"PR_kwDODunzps43016x","number":4353,"title":"Don't strip proceeding 
hyphen","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652552729000,"updated_at":1652727098000,"closed_at":1652709131000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Closes #4320.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4353\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4353","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4353","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4353.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4353.patch","merged_at":1652709130000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4352","id":1236086170,"node_id":"I_kwDODunzps5JrS2a","number":4352,"title":"When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious 
way","user":{"login":"plamb-viso","id":99206017,"node_id":"U_kgDOBenDgQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99206017?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/plamb-viso","html_url":"https:\/\/github.com\/plamb-viso","followers_url":"https:\/\/api.github.com\/users\/plamb-viso\/followers","following_url":"https:\/\/api.github.com\/users\/plamb-viso\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/plamb-viso\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/plamb-viso\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/plamb-viso\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/plamb-viso\/orgs","repos_url":"https:\/\/api.github.com\/users\/plamb-viso\/repos","events_url":"https:\/\/api.github.com\/users\/plamb-viso\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/plamb-viso\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message"],"created_at":1652550915000,"updated_at":1652713757000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nRecently I was trying to using `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types i had defined for them did not match the types that came back. Because of this, i ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https:\/\/github.com\/huggingface\/datasets\/issues\/4349). In short i ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop that loops over cols that I figured out what was going on.\r\n\r\nIt seems like `.map()` could set a boolean that it's checked that for at least 1 instance from the dataset, the returned data's types match the types provided by the `features` param and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. 
This could be construed as a feature but it feels more like a bug to me.\r\n\r\n## Steps to reproduce the bug\r\nI don't have explicit code to repro the bug, but I'll show an example\r\n\r\nCode prior to the fix:\r\n```python\r\ndef preprocess_data(examples):\r\n    # returns an encoded data dict with keys that match the features, but the types do not match\r\n    ...\r\n\r\ndef get_encoded_data(data):\r\n    dataset = Dataset.from_pandas(data)\r\n    unique_labels = data['audit_type'].unique().tolist()\r\n    features = Features({\r\n        'image': Array3D(dtype=\"uint8\", shape=(3, 224, 224)),\r\n        'input_ids': Sequence(feature=Value(dtype='int64')),\r\n        'attention_mask': Sequence(Value(dtype='int64')),\r\n        'token_type_ids': Sequence(Value(dtype='int64')),\r\n        'bbox': Array2D(dtype=\"int64\", shape=(512, 4)),\r\n        'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),\r\n    })\r\n\r\n    encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)\r\n```\r\n\r\nThe Features set that fixed it:\r\n```python\r\n    features = Features({\r\n        'image': Sequence(Array3D(dtype=\"uint8\", shape=(3, 224, 224))),\r\n        'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),\r\n        'attention_mask': Sequence(Sequence(Value(dtype='int64'))),\r\n        'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),\r\n        'bbox': Sequence(Array2D(dtype=\"int64\", shape=(512, 4))),\r\n        'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),\r\n    })\r\n```\r\nThe difference between my original code (which was based on the documentation) and the working code is the addition of `Sequence(...)` to 4 of the 5 features, as I am working with paginated data and the doc examples are not.\r\n\r\n## Expected results\r\nDataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they are not validated.\r\n\r\n## Actual results\r\nBased on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages don't make this obvious\r\n\r\nExample errors:\r\n```\r\nOverflowError: There was an overflow with type . 
Try to reduce writer_batch_size to have batches smaller than 2GB.\r\n(offset overflow while concatenating arrays)\r\n```\r\n\r\n```\r\nzsh: killed python doc_classification.py\r\n\r\nUserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n```\r\n\r\n## Environment info\r\n\r\ndatasets version: 2.1.0\r\nPlatform: macOS-12.2.1-arm64-arm-64bit\r\nPython version: 3.9.12\r\nPyArrow version: 6.0.1\r\nPandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4352\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4351","id":1235950209,"node_id":"I_kwDODunzps5JqxqB","number":4351,"title":"Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems","user":{"login":"Rexhaif","id":5154447,"node_id":"MDQ6VXNlcjUxNTQ0NDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5154447?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rexhaif","html_url":"https:\/\/github.com\/Rexhaif","followers_url":"https:\/\/api.github.com\/users\/Rexhaif\/followers","following_url":"https:\/\/api.github.com\/users\/Rexhaif\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rexhaif\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rexhaif\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rexhaif\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rexhaif\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rexhaif\/repos","events_url":"https:\/\/api.github.com\/users\/Rexhaif\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rexhaif\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https:\/\/github.com\/huggingface\/datasets\/issues\/4196)."],"created_at":1652527842000,"updated_at":1652882386000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset could take a really long time. 
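Before the concrete report continues below, a rough illustration of the `TqdmCallback` idea from the comment above (a sketch only; the bucket path, local directory, and tqdm options are invented for the example, and the "s3" protocol requires `s3fs` to be installed):

```python
# Sketch: wiring fsspec's TqdmCallback into a remote transfer to get a
# progress bar. The s3 path and local dir below are illustrative only.
import fsspec
from fsspec.callbacks import TqdmCallback

fs = fsspec.filesystem("s3")
fs.get(
    "s3://my-bucket/my-dataset/",   # hypothetical remote dataset directory
    "local-dataset/",
    recursive=True,
    callback=TqdmCallback(tqdm_kwargs={"unit": "B", "unit_scale": True}),
)
```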
For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took like 35 minutes (and that's given that I have a fiber optic connection). The only output during that process was a progress bar for flattening indices and then ~35 minutes of complete silence.\r\n\r\n**Describe the solution you'd like**\r\nI want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..). It would track either the amount of bytes sent\/received or the number of records written\/loaded, and give some ETA. Basically just tqdm. \r\n\r\n**Describe alternatives you've considered**\r\n- Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore that works with a progress bar, like [this](https:\/\/alexwlchan.net\/2021\/04\/s3-progress-bars\/).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4351\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4350","id":1235505104,"node_id":"PR_kwDODunzps43zKIV","number":4350,"title":"Add a new metric: CTC_Consistency","user":{"login":"YEdenZ","id":92551194,"node_id":"U_kgDOBYQ4Gg","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/92551194?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YEdenZ","html_url":"https:\/\/github.com\/YEdenZ","followers_url":"https:\/\/api.github.com\/users\/YEdenZ\/followers","following_url":"https:\/\/api.github.com\/users\/YEdenZ\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YEdenZ\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YEdenZ\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YEdenZ\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YEdenZ\/orgs","repos_url":"https:\/\/api.github.com\/users\/YEdenZ\/repos","events_url":"https:\/\/api.github.com\/users\/YEdenZ\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YEdenZ\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated into a separate library called `evaluate`: https:\/\/github.com\/huggingface\/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. 
Thank you."],"created_at":1652463079000,"updated_at":1652955784000,"closed_at":1652955783000,"author_association":"NONE","active_lock_reason":null,"body":"Add CTC_Consistency metric\r\nDo I also need to modify the `test_metric_common.py` file to make it run on test?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4350\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4350","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4350","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4350.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4350.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4349","id":1235474765,"node_id":"I_kwDODunzps5Jo9lN","number":4349,"title":"Dataset.map()'s fails at any value of parameter writer_batch_size ","user":{"login":"plamb-viso","id":99206017,"node_id":"U_kgDOBenDgQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99206017?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/plamb-viso","html_url":"https:\/\/github.com\/plamb-viso","followers_url":"https:\/\/api.github.com\/users\/plamb-viso\/followers","following_url":"https:\/\/api.github.com\/users\/plamb-viso\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/plamb-viso\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/plamb-viso\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/plamb-viso\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/plamb-viso\/orgs","repos_url":"https:\/\/api.github.com\/users\/plamb-viso\/repos","events_url":"https:\/\/api.github.com\/users\/plamb-viso\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/plamb-viso\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft\/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```","Wanted to make sure anyone that 
finds this also finds my other report: https:\/\/github.com\/huggingface\/datasets\/issues\/4352","Did you close it because you found that it was due to the incorrect Feature types?","Yeah -- my analysis of the issue was wrong in this one, so I just closed it while linking to the new issue","I met with the same problem when doing some experiments with layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Are there any solutions to this problem?","The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk"],"created_at":1652460912000,"updated_at":1654174271000,"closed_at":1652540888000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nIf the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.\r\n\r\nContext:\r\nI am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https:\/\/huggingface.co\/docs\/transformers\/model_doc\/layoutlmv2#usage-layoutlmv2processor): the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words\/bounding boxes. The other option is to provide `revision=\"no_ocr\"` to the pre-trained model, which allows you to use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug.\r\n\r\nI am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.\r\nThe code I am using is provided below.\r\n\r\n## Steps to reproduce the bug\r\nI do not have explicit sample code, but I will paste the code I'm using in case reading it helps. 
When `.map()` is called, the dataset has 2933 rows, many of which represent large PDF documents.\r\n```python\r\ndef get_encoded_data(data):\r\n    dataset = Dataset.from_pandas(data)\r\n    unique_labels = data['label'].unique()\r\n    features = Features({\r\n        'image': Array3D(dtype=\"int64\", shape=(3, 224, 224)),\r\n        'input_ids': Sequence(feature=Value(dtype='int64')),\r\n        'attention_mask': Sequence(Value(dtype='int64')),\r\n        'token_type_ids': Sequence(Value(dtype='int64')),\r\n        'bbox': Array2D(dtype=\"int64\", shape=(512, 4)),\r\n        'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),\r\n    })\r\n\r\n    encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)\r\n    encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)\r\n    encoded_dataset.set_format(type=\"torch\")\r\n    return encoded_dataset\r\n```\r\n```python\r\nPROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision=\"no_ocr\", use_fast=False)\r\n\r\ndef preprocess_data(examples):\r\n    directory = os.path.join(FILES_PATH, examples['file_location'])\r\n    images_dir = os.path.join(directory, PDF_IMAGE_DIR)\r\n    textract_response_path = os.path.join(directory, 'textract.json')\r\n    doc_meta_path = os.path.join(directory, 'doc_meta.json')\r\n    textract_document = get_textract_document(textract_response_path, doc_meta_path)\r\n    images, words, bboxes = get_doc_training_data(images_dir, textract_document)\r\n    encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding=\"max_length\", truncation=True)\r\n    # https:\/\/github.com\/NielsRogge\/Transformers-Tutorials\/issues\/36\r\n    encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n    encoded_inputs[\"label\"] = examples['label_id']\r\n    return encoded_inputs\r\n```\r\n\r\n## Expected results\r\nMy expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.\r\n\r\n## Actual results\r\nIf writer_batch_size is set to a value less than the number of rows, I get either:\r\n\r\n```\r\nOverflowError: There was an overflow with type . 
Try to reduce writer_batch_size to have batches smaller than 2GB.\r\n(offset overflow while concatenating arrays)\r\n```\r\nor simply\r\n\r\n```\r\nzsh: killed python doc_classification.py\r\n\r\nUserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n```\r\n\r\nIf it is greater than the number of rows, i get the `zsh: killed` error above\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.0\r\n- Platform: macOS-12.2.1-arm64-arm-64bit\r\n- Python version: 3.9.12\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4349\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4348","id":1235432976,"node_id":"I_kwDODunzps5JozYQ","number":4348,"title":"`inspect` functions can't fetch dataset script from the Hub","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https:\/\/github.com\/huggingface\/datasets\/blob\/cfae0545b2ba05452e16136cacc7d370b4b186a1\/src\/datasets\/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?","Good catch ! 
Yea I think it's fine :)"],"created_at":1652458106000,"updated_at":1654770366000,"closed_at":1654770366000,"author_association":"MEMBER","active_lock_reason":null,"body":"The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:\r\n\r\n```py\r\n>>> from datasets import inspect_dataset\r\n>>> inspect_dataset('rotten_tomatoes', local_path='path\/to\/my\/local\/folder')\r\n\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/rotten_tomatoes\/rotten_tomatoes.py or any data file in the same directory.\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4348\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4347","id":1235318064,"node_id":"PR_kwDODunzps43yihq","number":4347,"title":"Support remote cache_dir","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.","<s>`xjoin` returns windows paths (not posix) on windows, since it just extends `os.path.join`<\/s>\r\n\r\nActually you are right.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/08ec04ccb59630a3029b2ecd8a14d327bddd0c4a\/src\/datasets\/utils\/streaming_download_manager.py#L104-L105\r\n\r\nThough this is not an issue because posix paths (as returned by Path().as_posix()) work on windows. That's why we can replace `os.path.join` with `xjoin` in streaming mode. 
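For illustration, a small runnable sketch of the two join behaviors being discussed (it uses `ntpath` and `PureWindowsPath` so it behaves the same on any OS; the cache path mirrors the example in the following comment):

```python
# Illustration only: backslash join (what os.path.join does on Windows)
# vs. a posix-style result like the xjoin behavior discussed above.
import ntpath  # what os.path resolves to on Windows
from pathlib import PureWindowsPath

print(ntpath.join(r"C:\Users\Username\.mycache", "downloads"))
# C:\Users\Username\.mycache\downloads

print(PureWindowsPath(r"C:\Users\Username\.mycache", "downloads").as_posix())
# C:/Users/Username/.mycache/downloads
```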
They look like `c:\/Program Files\/` or something (can't confirm right now, I don't have a windows with me)","Until now, we have always replaced \"\/\" in paths with `os.path.join` (`os.sep`,...) in order to support Windows paths (that contain r\"\\\\\").\r\n\r\nNow, you suggest ignoring this and work with POSIX strings (with \"\/\").\r\n\r\nAs an example, when passing `cache_dir=r\"C:\\Users\\Username\\.mycache\"`:\r\n- Until now, it results in `self._cache_downloaded_dir = r\"C:\\Users\\Username\\.mycache\\downloads\"`\r\n- If we use `xjoin`, it will give `self._cache_downloaded_dir = \"C:\/Users\/Username\/.mycache\/downloads\"`\r\n\r\nYou say this is OK and we don't care if we work with POSIX strings on Windows machines.\r\n\r\nI'm incorporating your suggested changes then...","Also note that using `xjoin`, if we pass `cache_dir=\"C:\\\\Users\\\\Username\\\\.mycache\"`, we get:\r\n- `self._cache_dir_root = \"C:\\\\Users\\\\Username\\\\.mycache\"`\r\n- `self._cache_downloaded_dir = \"C:\/Users\/Username\/.mycache\/downloads\"`","It looks like it broke the CI on windows :\/ maybe this was not a good idea, sorry"],"created_at":1652451995000,"updated_at":1653496523000,"closed_at":1653496023000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR implements complete support for remote `cache_dir`. Before, the support was just partial.\r\n\r\nThis is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4347\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4347","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4347","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4347.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4347.patch","merged_at":1653496023000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4346","id":1235067062,"node_id":"I_kwDODunzps5JnaC2","number":4346,"title":"GH Action to build documentation never 
ends","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652438684000,"updated_at":1652440920000,"closed_at":1652440920000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nSee: https:\/\/github.com\/huggingface\/datasets\/runs\/6418035586?check_suite_focus=true\r\n\r\nI finally forced the cancel of the workflow.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4346\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4345","id":1235062787,"node_id":"PR_kwDODunzps43xrky","number":4345,"title":"Fix never ending GH Action to build 
documentation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652438410000,"updated_at":1652441383000,"closed_at":1652440920000,"author_association":"MEMBER","active_lock_reason":null,"body":"There was an unclosed code block introduced by:\r\n- #4313 \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/4313\/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538 \r\n\r\nThis causes the \"Make documentation\" step in the \"Build documentation\" workflow to never finish.\r\n- I think this issue should also be addressed in the `doc-builder` lib.\r\n\r\n\r\nFix #4346.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4345\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4345","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4345","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4345.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4345.patch","merged_at":1652440920000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4344","id":1234882542,"node_id":"PR_kwDODunzps43xFEn","number":4344,"title":"Fix docstring in 
DatasetDict::shuffle","user":{"login":"felixdivo","id":4403130,"node_id":"MDQ6VXNlcjQ0MDMxMzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4403130?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/felixdivo","html_url":"https:\/\/github.com\/felixdivo","followers_url":"https:\/\/api.github.com\/users\/felixdivo\/followers","following_url":"https:\/\/api.github.com\/users\/felixdivo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/felixdivo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/felixdivo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/felixdivo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/felixdivo\/orgs","repos_url":"https:\/\/api.github.com\/users\/felixdivo\/repos","events_url":"https:\/\/api.github.com\/users\/felixdivo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/felixdivo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652429160000,"updated_at":1653470623000,"closed_at":1653406521000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I think due to #1626, the docstring contained this error ever since `seed` was added.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4344\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4344","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4344","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4344.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4344.patch","merged_at":1653406521000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4343","id":1234864168,"node_id":"I_kwDODunzps5Jmogo","number":4343,"title":"Metrics documentation is not accessible in the datasets doc 
UI","user":{"login":"fxmarty","id":9808326,"node_id":"MDQ6VXNlcjk4MDgzMjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9808326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fxmarty","html_url":"https:\/\/github.com\/fxmarty","followers_url":"https:\/\/api.github.com\/users\/fxmarty\/followers","following_url":"https:\/\/api.github.com\/users\/fxmarty\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fxmarty\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fxmarty\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fxmarty\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fxmarty\/orgs","repos_url":"https:\/\/api.github.com\/users\/fxmarty\/repos","events_url":"https:\/\/api.github.com\/users\/fxmarty\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fxmarty\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400959,"node_id":"MDU6TGFiZWwyMDY3NDAwOTU5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Metric%20discussion","name":"Metric discussion","color":"d722e8","default":false,"description":"Discussions on the metrics"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https:\/\/github.com\/huggingface\/evaluate) repository cc @lvwerra @sashavor "],"created_at":1652427990000,"updated_at":1654246225000,"closed_at":1654246225000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nSearch for a metric name like \"seqeval\" yields no results on https:\/\/huggingface.co\/docs\/datasets\/master\/en\/index . One needs to go look in `datasets\/metrics\/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as an input, for example for `squad` there is a [key `id`](https:\/\/github.com\/huggingface\/datasets\/blob\/1a4c185663a6958f48ec69624473fdc154a36a9d\/metrics\/squad\/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects.\r\n\r\n**Describe the solution you'd like**\r\nHave the documentation for metrics appear as well in the doc UI, e.g. 
this https:\/\/github.com\/huggingface\/datasets\/blob\/1a4c185663a6958f48ec69624473fdc154a36a9d\/metrics\/squad\/squad.py#L21-L63\r\n\r\nI know there are plans to migrate metrics to the evaluate library, but just pointing this out.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4343\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4342","id":1234743765,"node_id":"PR_kwDODunzps43woHm","number":4342,"title":"Fix failing CI on Windows for sari and wiki_split metrics","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652418218000,"updated_at":1652420862000,"closed_at":1652420862000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds `sacremoses` as an explicit test dependency (required by the sari and wiki_split metrics).\r\n\r\nBefore, this library was installed as a third-party dependency, but this is no longer the case for Windows.\r\n\r\nFix #4341.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4342\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4342","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4342","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4342.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4342.patch","merged_at":1652420861000},"is_pull_request":true} 
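A sketch of the kind of change the PR above describes, declaring `sacremoses` among the test requirements in `setup.py` (the `TESTS_REQUIRE` variable name is assumed for illustration; this is not the verbatim diff from the PR):

```python
# Hypothetical excerpt of setup.py; the variable name is assumed,
# and this is not the actual change merged in the PR.
TESTS_REQUIRE = [
    # ... existing test dependencies ...
    "sacremoses",  # needed by the sari and wiki_split metrics
]
```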
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4341","id":1234739703,"node_id":"I_kwDODunzps5JmKH3","number":4341,"title":"Failing CI on Windows for sari and wiki_split metrics","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1652417717000,"updated_at":1652420861000,"closed_at":1652420861000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nOur CI is failing from yesterday on Windows for metrics: sari and wiki_split\r\n```\r\nFAILED tests\/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...\r\nFAILED tests\/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split\r\n```\r\n\r\nSee: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/11928\/workflows\/79daa5e7-65c9-4e85-829b-00d2bfbd076a\/jobs\/71594","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4341\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4340","id":1234671025,"node_id":"PR_kwDODunzps43wY1U","number":4340,"title":"Fix irc_disentangle dataset script","user":{"login":"i-am-pad","id":32005017,"node_id":"MDQ6VXNlcjMyMDA1MDE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32005017?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/i-am-pad","html_url":"https:\/\/github.com\/i-am-pad","followers_url":"https:\/\/api.github.com\/users\/i-am-pad\/followers","following_url":"https:\/\/api.github.com\/users\/i-am-pad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/i-am-pad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/i-am-pad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/i-am-pad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/i-am-pad\/orgs","repos_url":"https:\/\/api.github.com\/users\/i-am-pad\/repos","events_url":"https:\/\/api.github.com\/users\/i-am-pad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/i-am-pad\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! This has been fixed in https:\/\/github.com\/huggingface\/datasets\/pull\/4377, we can close this PR"],"created_at":1652409477000,"updated_at":1653406650000,"closed_at":1653406649000,"author_association":"NONE","active_lock_reason":null,"body":"updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4340\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4340","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4340","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4340.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4340.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4339","id":1234496289,"node_id":"PR_kwDODunzps43v0WT","number":4339,"title":"Dataset loader for the MSLR2022 shared 
task","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the underlying issue is in https:\/\/github.com\/huggingface\/datasets\/blob\/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec\/src\/datasets\/commands\/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines within the rows of a file. I'm happy to make a PR to change how this handling works, or make the change within this PR. \r\n\r\nWe should figure out:\r\n1. Does this dummy data need to be generated more than once? (It looks like no)\r\n2. Should this be fixed generally? (needs a HF person to weigh in here)\r\n3. What is the right way for such a fix to exist permanently here; the [Contributing document](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md) doesn't provide guidance on any tests. Writing a test is several times more effort than fixing the underlying issue. (again needs a HF person)","Would someone from HF mind taking a look at this PR? (@lhoestq)","Hi ! Sorry for the delay in responding :)\r\n\r\nI don't think there's a big need to fix this in the general case for now, feel free to just generate the dummy data for this specific dataset :)\r\n\r\nThe `datasets-cli dummy_data datasets\/mslr2022` command should tell you what dummy files to generate. In each dummy file you just need to include enough data to generate 4 or 5 examples","_The documentation is not available anymore as the PR was closed or merged._","Awesome! Generated the dummy data and the tests now pass. @jayded thanks for your help! If you and @lucylw are happy with this I think it's ready to be merged. @lhoestq this is ready for another look :)","Hi @lhoestq, is there anything blocking this from being merged that I can address?","Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n\r\nI think this dataset can be under the AllenAI page here: https:\/\/huggingface.co\/allenai What do you think ?\r\nFeel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n\r\nOnce the dataset is under the AllenAI org, we can close this PR\r\n","> Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n> \r\n> I think this dataset can be under the AllenAI page here: https:\/\/huggingface.co\/allenai What do you think ? 
Feel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n> \r\n> Once the dataset is under the AllenAI org, we can close this PR\r\n\r\nSweet! It is uploaded here: https:\/\/huggingface.co\/datasets\/allenai\/mslr2022","Nice ! Thanks :)\r\n\r\nI think we can close this PR then.\r\n\r\nI noticed that the dataset preview is not available on this dataset, this is because we require datasets to work in streaming mode to show a preview. However TAR archives don't work well in streaming mode (you can't know in advance what files are inside a TAR archive without reading it completely). This can be fixed by using a ZIP archive instead.\r\n\r\nLet me know if you have questions or if I can help."],"created_at":1652390621000,"updated_at":1658164767000,"closed_at":1658163514000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds a dataset loader for the [MSLR2022 Shared Task](https:\/\/github.com\/allenai\/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nms2 = load_dataset(\"mslr2022\", \"ms2\")\r\ncochrane = load_dataset(\"mslr2022\", \"cochrane\")\r\n```\r\n\r\nUsage looks like:\r\n\r\n```python\r\n>>> ms2 = load_dataset(\"mslr2022\", \"ms2\", split=\"validation\")\r\n>>> ms2[0].keys()\r\ndict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])\r\n>>> ms2[0][\"target\"]\r\n'Conclusions SC therapy is effective for PAH in pre clinical studies .\\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'\r\n```\r\n\r\nI have tested that this works with the following command:\r\n\r\n```bash\r\ndatasets-cli test datasets\/mslr2022 --save_infos --all_configs\r\n```\r\n\r\nHowever, I am having a little trouble generating the dummy data:\r\n\r\n```bash\r\ndatasets-cli dummy_data datasets\/mslr2022 --auto_generate\r\n```\r\n\r\nerrors out with the following stack trace:\r\n\r\n```\r\nCouldn't generate dummy file 'datasets\/mslr2022\/dummy\/ms2\/1.0.0\/dummy_data\/mslr_data.tar.gz\/mslr_data\/ms2\/convert_to_cochrane.py'. 
Ignore that if this file is not useful for dummy data.\r\nTraceback (most recent call last): \r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/datasets\/bin\/datasets-cli\", line 11, in \r\n load_entry_point('datasets', 'console_scripts', 'datasets-cli')()\r\n File \"\/Users\/johngiorgi\/Documents\/dev\/datasets\/src\/datasets\/commands\/datasets_cli.py\", line 39, in main\r\n service.run()\r\n File \"\/Users\/johngiorgi\/Documents\/dev\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 319, in run\r\n keep_uncompressed=self._keep_uncompressed,\r\n File \"\/Users\/johngiorgi\/Documents\/dev\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 361, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)\r\n File \"\/Users\/johngiorgi\/Documents\/dev\/datasets\/src\/datasets\/builder.py\", line 1146, in _prepare_split\r\n desc=f\"Generating {split_info.name} split\",\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"\/Users\/johngiorgi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mslr2022\/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea\/mslr2022.py\", line 149, in _generate_examples\r\n reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/pandas\/util\/_decorators.py\", line 311, in wrapper\r\n return func(*args, **kwargs)\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/pandas\/io\/parsers\/readers.py\", line 586, in read_csv\r\n return _read(filepath_or_buffer, kwds)\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/pandas\/io\/parsers\/readers.py\", line 488, in _read\r\n return parser.read(nrows)\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/pandas\/io\/parsers\/readers.py\", line 1047, in read\r\n index, columns, col_dict = self._engine.read(nrows)\r\n File \"\/Users\/johngiorgi\/.pyenv\/versions\/3.7.13\/envs\/datasets\/lib\/python3.7\/site-packages\/pandas\/io\/parsers\/c_parser_wrapper.py\", line 224, in read\r\n chunks = self._reader.read_low_memory(nrows)\r\n File \"pandas\/_libs\/parsers.pyx\", line 801, in pandas._libs.parsers.TextReader.read_low_memory\r\n File \"pandas\/_libs\/parsers.pyx\", line 857, in pandas._libs.parsers.TextReader._read_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 843, in pandas._libs.parsers.TextReader._tokenize_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 1925, in pandas._libs.parsers.raise_parser_error\r\npandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2\r\n```\r\n\r\nI think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:\r\n\r\n```\r\nThe file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).\r\n\r\nIt is recommended to remove them from the file. 
This can be configured via `editor.unusualLineTerminators`.\r\n```\r\n\r\nTagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4339\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4339","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4339","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4339.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4339.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4338","id":1234478851,"node_id":"PR_kwDODunzps43vwsm","number":4338,"title":"Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652389328000,"updated_at":1652716262000,"closed_at":1652715779000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding evaluation metadata for:\r\n\r\n- Tweet Eval\r\n- Tweets Hate Speech Detection\r\n- VCTK\r\n- Weibo NER\r\n- Wisesight Sentiment\r\n- XSum\r\n- 
Yahoo Answers Topics\r\n- Yelp Polarity\r\n- Yelp Review Full","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4338\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4338","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4338","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4338.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4338.patch","merged_at":1652715779000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4337","id":1234470083,"node_id":"PR_kwDODunzps43vuzF","number":4337,"title":"Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and `Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors; I don't really understand the source though :confused: ","_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1652388722000,"updated_at":1652718379000,"closed_at":1652717910000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding evaluation metadata for:\r\n- Reddit\r\n- Rotten Tomatoes\r\n- SemEval 2010\r\n- Sentiment 140\r\n- SMS Spam\r\n- Snips\r\n- SQuAD\r\n- SQuAD v2\r\n- Timit ASR","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4337\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4337","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4337","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4337.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4337.patch","merged_at":1652717910000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4336","id":1234446174,"node_id":"PR_kwDODunzps43vpqG","number":4336,"title":"Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n","The CI errors about missing content in the dataset cards can be ignored in this PR btw","The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4336). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1652387085000,"updated_at":1652718300000,"closed_at":1652718299000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding evaluation metadata for:\r\n- Health Fact\r\n- Jigsaw Toxicity\r\n- LIAR\r\n- LJ Speech\r\n- MSRA NER\r\n- Multi News\r\n- NCBI Disease\r\n- Poem Sentiment","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4336\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4336","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4336","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4336.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4336.patch","merged_at":1652718299000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4335","id":1234157123,"node_id":"PR_kwDODunzps43usJP","number":4335,"title":"Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: Expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags\r\n- **CoNLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data 
Splits` but it is empty","And yes we can ignore all the CI errors related to missing content in the dataset cards, these issues can be fixed in other PRs","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652369296000,"updated_at":1652718670000,"closed_at":1652718189000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding evaluation metadata for:\r\n- BillSum\r\n- CoNLL2003\r\n- CoNLLPP\r\n- CUAD\r\n- Emotion\r\n- GigaWord\r\n- GLUE\r\n- Hate Speech 18 \r\n- Hate Speech Offensive","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4335\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4335","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4335","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4335.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4335.patch","merged_at":1652718188000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4334","id":1234103477,"node_id":"PR_kwDODunzps43uguB","number":4334,"title":"Adding eval metadata for billsum","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652366948000,"updated_at":1652366964000,"closed_at":1652366964000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata for 
billsum","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4334\/timeline","performed_via_github_app":null,"state_reason":null,"draft":true,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4334","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4334","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4334.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4334.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4333","id":1234038705,"node_id":"PR_kwDODunzps43uSuj","number":4333,"title":"Adding eval metadata for Banking 77","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)"],"created_at":1652364305000,"updated_at":1652389412000,"closed_at":1652389411000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata for Banking 77","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4333\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4333","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4333","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4333.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4333.patch","merged_at":1652389411000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4332","id":1234021188,"node_id":"PR_kwDODunzps43uO8S","number":4332,"title":"Adding eval metadata for arabic speech corpus","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652363498000,"updated_at":1652389401000,"closed_at":1652389400000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata for arabic speech corpus","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4332\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4332","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4332","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4332.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4332.patch","merged_at":1652389400000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4331","id":1234016110,"node_id":"PR_kwDODunzps43uN2R","number":4331,"title":"Adding eval metadata to Amazon 
Polarity","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652363279000,"updated_at":1652389394000,"closed_at":1652389393000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata to Amazon Polarity","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4331\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4331","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4331","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4331.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4331.patch","merged_at":1652389393000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4330","id":1233992681,"node_id":"PR_kwDODunzps43uIwm","number":4330,"title":"Adding eval metadata to Allocin\u00e9 
dataset","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652362299000,"updated_at":1652389385000,"closed_at":1652389385000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata to Allocin\u00e9 dataset","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4330\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4330","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4330","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4330.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4330.patch","merged_at":1652389385000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4329","id":1233991207,"node_id":"PR_kwDODunzps43uIcF","number":4329,"title":"Adding eval metadata for AG 
News","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652362232000,"updated_at":1652389361000,"closed_at":1652389360000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding eval metadata for AG News","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4329\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4329","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4329","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4329.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4329.patch","merged_at":1652389360000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4328","id":1233856690,"node_id":"PR_kwDODunzps43trrd","number":4328,"title":"Fix and clean Apache Beam 
functionality","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652355667000,"updated_at":1653399791000,"closed_at":1653399272000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4328\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4328","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4328","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4328.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4328.patch","merged_at":1653399272000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4327","id":1233840020,"node_id":"I_kwDODunzps5JiueU","number":4327,"title":"`wikipedia` pre-processed 
datasets","user":{"login":"vpj","id":81152,"node_id":"MDQ6VXNlcjgxMTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/81152?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vpj","html_url":"https:\/\/github.com\/vpj","followers_url":"https:\/\/api.github.com\/users\/vpj\/followers","following_url":"https:\/\/api.github.com\/users\/vpj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vpj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vpj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vpj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vpj\/orgs","repos_url":"https:\/\/api.github.com\/users\/vpj\/repos","events_url":"https:\/\/api.github.com\/users\/vpj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vpj\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia\/20220301.simple (download: 228.58 MiB, generated: 224.18 MiB, post-processed: Unknown size, total: 452.76 MiB) to ...\/.cache\/huggingface\/datasets\/wikipedia\/20220301.simple\/2.0.0\/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.66k\/1.66k [00:00<00:00, 1.02MB\/s]\r\nDownloading: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 235M\/235M [00:02<00:00, 82.8MB\/s]\r\nDataset wikipedia downloaded and prepared to ...\/.cache\/huggingface\/datasets\/wikipedia\/20220301.simple\/2.0.0\/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559. Subsequent calls will reuse this data.\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 290.75it\/s]\r\n\r\nreal\t0m9.693s\r\nuser\t0m6.002s\r\nsys\t0m3.260s\r\n```\r\n\r\nCould you please check your environment info, as requested when opening this issue?\r\n```\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nMaybe you are using an old version of `datasets`...","Downloading and processing `wikipedia simple` dataset completed in under 11sec on M1 Mac. Could you please check `dataset` version as mentioned by @albertvillanova? Also check system specs, if system is under load processing could take some time I guess."],"created_at":1652354742000,"updated_at":1661934417000,"closed_at":1661934417000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n[Wikipedia](https:\/\/huggingface.co\/datasets\/wikipedia) dataset readme says that certain subsets are preprocessed. However it seems like they are not available. When I try to load them it takes a really long time, and it seems like it's processing them.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"wikipedia\", \"20220301.en\")\r\n```\r\n\r\n## Expected results\r\nTo load the dataset\r\n\r\n## Actual results\r\nTakes a very long time to load (after downloading)\r\n\r\nAfter `Downloading data files: 100%`. 
It takes hours and gets killed.\r\nTried `wikipedia.simple` and it got processed after ~30mins.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4327\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4326","id":1233818489,"node_id":"PR_kwDODunzps43tjWy","number":4326,"title":"Fix type hint and documentation for `new_fingerprint`","user":{"login":"fxmarty","id":9808326,"node_id":"MDQ6VXNlcjk4MDgzMjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9808326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fxmarty","html_url":"https:\/\/github.com\/fxmarty","followers_url":"https:\/\/api.github.com\/users\/fxmarty\/followers","following_url":"https:\/\/api.github.com\/users\/fxmarty\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fxmarty\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fxmarty\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fxmarty\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fxmarty\/orgs","repos_url":"https:\/\/api.github.com\/users\/fxmarty\/repos","events_url":"https:\/\/api.github.com\/users\/fxmarty\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fxmarty\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652353508000,"updated_at":1654088685000,"closed_at":1654088178000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.\r\n\r\nThere was some documentation missing as well.\r\n\r\nNote that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.\r\n\r\nThe modifications in this PR are fine since here https:\/\/github.com\/huggingface\/datasets\/blob\/aa743886221d76afb409d263e1b136e7a71fe2b4\/src\/datasets\/fingerprint.py#L446-L454\r\n\r\nfor the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the 
doc).","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4326\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4326","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4326","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4326.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4326.patch","merged_at":1654088178000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4325","id":1233812191,"node_id":"I_kwDODunzps5Jinrf","number":4325,"title":"Dataset Viewer issue for strombergnlp\/offenseval_2020, strombergnlp\/polstance","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Not sure if it's related... I was going to raise an issue for https:\/\/huggingface.co\/datasets\/domenicrosati\/TruthfulQA which also has the same issue... https:\/\/huggingface.co\/datasets\/domenicrosati\/TruthfulQA\/viewer\/domenicrosati--TruthfulQA\/train \r\n\r\n","Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience \ud83d\ude4f \u00a0","Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)","Thanks, these are working great now (including @domenicrosati 's, afaics!)"],"created_at":1652353148000,"updated_at":1652439435000,"closed_at":1652439422000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/strombergnlp\/offenseval_2020\/viewer\/ar\/train\n\n### Description\n\nThe viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. Maybe it needs a bit more time.\r\n\r\n* https:\/\/huggingface.co\/datasets\/strombergnlp\/polstance\/viewer\/PolStance\/train\r\n\r\n* https:\/\/huggingface.co\/datasets\/strombergnlp\/offenseval_2020\/viewer\/ar\/train\r\n\r\nWhile offenseval_2020 is gated w. 
prompt, the other gated previews I have run fine in Viewer, e.g. https:\/\/huggingface.co\/datasets\/strombergnlp\/shaj , so I'm a bit stumped!\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4325\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4324","id":1233780870,"node_id":"I_kwDODunzps5JigCG","number":4324,"title":"Support >1 PWC dataset per dataset card","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @leondz, I agree it would be nice. We'll see what we can do ;)"],"created_at":1652351347000,"updated_at":1652441129000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nSome datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset to cover all five datasets, [`strombergnlp\/offenseval_2020`](https:\/\/huggingface.co\/datasets\/strombergnlp\/offenseval_2020). 
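(For concreteness: the dataset card's yaml header currently carries a single entry along the lines of `paperswithcode_id: offenseval-2020`; that slug is only illustrative.) 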
However, the yaml `paperswithcode_id:` dataset card entry only supports one value; when multiple are added, the PWC link disappears from the dataset page.\r\n\r\nBecause the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, this means end users don't have a way of getting a dataset reader link to appear on all the PWC datasets supported by one HF Hub Dataset reader.\r\n\r\nIt's not super unusual to have papers introduce multiple parallel variants of a dataset and would be handy to reflect this, so e.g. dataset maintainers can DRY, and so dataset users can keep what they're doing simple.\r\n\r\n**Describe the solution you'd like**\r\nI'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets.\r\n\r\n**Describe alternatives you've considered**\r\nDe-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ...\r\n\r\n**Additional context**\r\nHope that's enough\r\n\r\n**Priority**\r\nLow","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4324\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4323","id":1233634928,"node_id":"I_kwDODunzps5Jh8Zw","number":4323,"title":"Audio can not find value[\"bytes\"]","user":{"login":"YooSungHyun","id":34292279,"node_id":"MDQ6VXNlcjM0MjkyMjc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34292279?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YooSungHyun","html_url":"https:\/\/github.com\/YooSungHyun","followers_url":"https:\/\/api.github.com\/users\/YooSungHyun\/followers","following_url":"https:\/\/api.github.com\/users\/YooSungHyun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YooSungHyun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YooSungHyun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YooSungHyun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YooSungHyun\/orgs","repos_url":"https:\/\/api.github.com\/users\/YooSungHyun\/repos","events_url":"https:\/\/api.github.com\/users\/YooSungHyun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YooSungHyun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["![image](https:\/\/user-images.githubusercontent.com\/34292279\/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecause we have path and bytes already","> but i have some confused why path prior is higher than bytes?\r\n\r\nIf the audio file is already available locally, we don't need to store the bytes again.\r\n\r\nIf you don't specify a \"path\" to a local file, then the bytes are stored. You can set \"path\" to None for example.\r\n\r\n> if you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\n> because we have path and bytes already\r\n\r\nIt's useful to pass both \"path\" and \"bytes\" in `_generate_examples`:\r\n- when the dataset has been downloaded, then the \"path\" to the audio files are stored and we can ignore \"bytes\" in order to save disk space.\r\n- when the dataset is loaded in streaming mode, the audio files are not available on your disk and therefore we use the \"bytes\" ","@lhoestq \r\nFirst of all, thx for reply\r\n\r\nbut, if i put in \"bytes\" and \"path\"\r\nex) {\"bytes\":\"blah blah~\", \"path\":\"blah blah~\"}\r\n\r\nthat source working that my bytes to empty first,\r\nand then, re-calculate my bytes!\r\n![image](https:\/\/user-images.githubusercontent.com\/34292279\/168534687-1fb60d8c-d369-47d2-a4bb-db68f95194b4.png)\r\n\r\nif you have some pcm file, pcm is can read bytes.\r\nso, i put in bytes and paths.\r\nbut bytes is been None why encode_example func make None\r\nand then, on decode_example func, we no have bytes. 
so, calculate bytes to path.\r\npcm is not support librosa or soundfile, error occured!\r\n\r\nthe most important thing is not announced anywhere this situation can be reproduced\r\n\r\nis that truly right process flow?","I don't think we support PCM files, feel free to convert your data to WAV for now.\r\n\r\nIt would be awesome to support PCM files though, let me know if you'd like to contribute this feature, I'd be happy to help","@lhoestq oh, how can i contribute?","You can clone the repository (see the guide on [how to contribute](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md#how-to-create-a-pull-request)) and see how we can make the `Image.encode_example` method work with PCM data.\r\n\r\nThere might be other ways to approach this problem, but here is what I think is a reasonable one:\r\n\r\nI think `Image.encode_example` should be able to take PCM bytes as input and the sampling rate, and return the WAV bytes (built by combining the PCM bytes and the sampling rate info), so that `Image.decode_example` can read it.\r\n\r\nTo check if the input bytes are PCM data, you can just check if the extension of the `path` is \".pcm\".\r\n","maybe i can start to contribute on this sunday!\r\n@lhoestq ","@lhoestq plz check my pr #4409 \r\n\r\nam i wrong somting?","Thanks, I reviewed your PR :)"],"created_at":1652344318000,"updated_at":1657199768000,"closed_at":1657199768000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI wrote down _generate_examples like:\r\n![image](https:\/\/user-images.githubusercontent.com\/34292279\/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png)\r\n\r\nbut where is the bytes?\r\n![image](https:\/\/user-images.githubusercontent.com\/34292279\/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png)\r\n\r\n\r\n## Expected results\r\nvalue[\"bytes\"] is not None, so i can make datasets with bytes, not path\r\n\r\n## bytes looks like:\r\nblah blah~~\r\n\\xfe\\x03\\x00\\xfb\\x06\\x1c\\x0bo\\x074\\x03\\xaf\\x01\\x13\\x04\\xbc\\x06\\x8c\\x05y\\x05,\\t7\\x08\\xaf\\x03\\xc0\\xfe\\xe8\\xfc\\x94\\xfe\\xb7\\xfd\\xea\\xfa\\xd5\\xf9$\\xf9>\\xf9\\x1f\\xf8\\r\\xf5F\\xf49\\xf4\\xda\\xf5-\\xf8\\n\\xf8k\\xf8\\x07\\xfb\\x18\\xfd\\xd9\\xfdv\\xfd\"\\xfe\\xcc\\x01\\x1c\\x04\\x08\\x04@\\x04{\\x06^\\tf\\t\\x1e\\x07\\x8b\\x06\\x02\\x08\\x13\\t\\x07\\x08 \\x06g\\x06\"\\x06\\xa0\\x03\\xc6\\x002\\xff \\xff\\x1d\\xff\\x19\\xfd?\\xfb\\xdb\\xfa\\xfc\\xfa$\\xfb}\\xf9\\xe5\\xf7\\xf9\\xf7\\xce\\xf8.\\xf9b\\xf9\\xc5\\xf9\\xc0\\xfb\\xfa\\xfcP\\xfc\\xba\\xfbQ\\xfc1\\xfe\\x9f\\xff\\x12\\x00\\xa2\\x00\\x18\\x02Z\\x03\\x02\\x04\\xb1\\x03\\xc5\\x03W\\x04\\x82\\x04\\x8f\\x04U\\x04\\xb6\\x04\\x10\\x05{\\x04\\x83\\x02\\x17\\x01\\x1d\\x00\\xa0\\xff\\xec\\xfe\\x03\\xfe#\\xfe\\xc2\\xfe2\\xff\\xe6\\xfe\\x9a\\xfe~\\x01\\x91\\x08\\xb3\\tU\\x05\\x10\\x024\\x02\\xe4\\x05\\xa8\\x07\\xa7\\x053\\x07I\\n\\x91\\x07v\\x02\\x95\\xfd\\xbb\\xfd\\x96\\xff\\x01\\xfe\\x1e\\xfb\\xbb\\xf9S\\xf8!\\xf8\\xf4\\xf5\\xd6\\xf3\\xf7\\xf3l\\xf4d\\xf6l\\xf7d\\xf6b\\xf7\\xc1\\xfa(\\xfd\\xcf\\xfd*\\xfdq\\xfe\\xe9\\x01\\xa8\\x03t\\x03\\x17\\x04B\\x07\\xce\\t\\t\\t\\xeb\\x06\\x0c\\x07\\x95\\x08\\x92\\t\\xbc\\x07O\\x06\\xfb\\x06\\xd2\\x06U\\x04\\x00\\x02\\x92\\x00\\xdc\\x00\\x84\\x00 
\\xfeT\\xfc\\xf1\\xfb\\x82\\xfc\\x97\\xfb}\\xf9\\x00\\xf8_\\xf8\\x0b\\xf9\\xe5\\xf8\\xe2\\xf7\\xaa\\xf8\\xb2\\xfa\\x10\\xfbl\\xfa\\xf5\\xf9Y\\xfb\\xc0\\xfd\\xe8\\xfe\\xec\\xfe1\\x00\\xad\\x01\\xec\\x02E\\x03\\x13\\x03\\x9b\\x03o\\x04\\xce\\x04\\xa8\\x04\\xb2\\x04\\x1b\\x05\\xc0\\x05\\xd2\\x04\\xe8\\x02z\\x01\\xbe\\x00\\xae\\x00\\x07\\x00$\\xff|\\xff\\x8e\\x00\\x13\\x00\\x10\\xff\\x98\\xff0\\x05{\\x0b\\x05\\t\\xaa\\x03\\x82\\x01n\\x03\r\nblah blah~~\r\n\r\nso that function does not return None\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.1\r\n- Platform: ubuntu 18.04\r\n- Python version: 3.6.9\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4323\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4322","id":1233596947,"node_id":"PR_kwDODunzps43s1wy","number":4322,"title":"Added stratify option to train_test_split function.","user":{"login":"nandwalritik","id":48522685,"node_id":"MDQ6VXNlcjQ4NTIyNjg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48522685?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nandwalritik","html_url":"https:\/\/github.com\/nandwalritik","followers_url":"https:\/\/api.github.com\/users\/nandwalritik\/followers","following_url":"https:\/\/api.github.com\/users\/nandwalritik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nandwalritik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nandwalritik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nandwalritik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nandwalritik\/orgs","repos_url":"https:\/\/api.github.com\/users\/nandwalritik\/repos","events_url":"https:\/\/api.github.com\/users\/nandwalritik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nandwalritik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thank you ! This will be super useful :)\r\n> \r\n> Could you also add some tests in test_arrow_dataset.py and add an example of usage in the `Example:` section of the `train_test_split` docstring ?\r\n\r\nI will try to do it, is there any documentation for adding test cases? 
I have never done it before.\r\n\r\nYou can just add a function `test_train_test_split_startify` in `test_arrow_dataset.py`.\r\n\r\nIn this function you can define a dataset and make sure that `train_test_split` with the `stratify` argument works as expected.\r\n\r\nYou can do `pytest tests\/test_arrow_dataset.py::test_train_test_split_startify` to run your test.\r\n\r\nFeel free to get some inspiration from other tests like `test_interleave_datasets` for example","I have added tests for stratified train_test_split in `test_arrow_dataset.py` file inside `test_train_test_split_startify` function. I have also added example usage with `stratify` arg in `Example:` section of the `train_test_split` docstring.\r\nResults of tests:\r\n```\r\n(data) nandwalritik@hp:~\/datasets$ pytest tests\/test_arrow_dataset.py::test_train_test_split_startify -W ignore\r\n============================================================================ test session starts ============================================================================\r\nplatform linux -- Python 3.9.5, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: \/home\/nandwalritik\/datasets\r\nplugins: datadir-1.3.1, forked-1.4.0, xdist-2.5.0\r\ncollected 1 item \r\n\r\ntests\/test_arrow_dataset.py . [100%]\r\n\r\n============================================================================= 1 passed in 0.12s =============================================================================\r\n\r\n```","Thanks a lot !\r\n\r\n`utils\/stratify.py` sounds good yes :)\r\n\r\nAlso feel free to merge `master` into your branch to fix the CI ;)","Added all the changes as were suggested and rebased with `main`.","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652342431000,"updated_at":1653511930000,"closed_at":1653511431000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds `stratify` option to `train_test_split` method. 
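A usage sketch (the column name is hypothetical, and the call assumes the new `stratify` argument takes the label values to balance on, mirroring scikit-learn; see the diff for the exact signature):\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({\"text\": [\"a\", \"b\", \"c\", \"d\"], \"label\": [0, 0, 1, 1]})\r\n# hypothetical call: each split should keep the 50\/50 label balance\r\nsplits = ds.train_test_split(test_size=0.5, stratify=ds[\"label\"])\r\nprint(splits[\"train\"][\"label\"], splits[\"test\"][\"label\"])\r\n```\r\n\r\n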
I took reference from scikit-learn's `StratifiedShuffleSplit` class for implementing stratified split and integrated the changes as were suggested by @lhoestq.\r\n\r\nIt fixes #3452.\r\n\r\n@lhoestq Please review and let me know, if any changes are required.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322\/reactions","total_count":3,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4322\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4322","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4322","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4322.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4322.patch","merged_at":1653511431000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4321","id":1233273351,"node_id":"PR_kwDODunzps43ryW7","number":4321,"title":"Adding dataset enwik8","user":{"login":"HallerPatrick","id":22773355,"node_id":"MDQ6VXNlcjIyNzczMzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22773355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/HallerPatrick","html_url":"https:\/\/github.com\/HallerPatrick","followers_url":"https:\/\/api.github.com\/users\/HallerPatrick\/followers","following_url":"https:\/\/api.github.com\/users\/HallerPatrick\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/HallerPatrick\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/HallerPatrick\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/HallerPatrick\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/HallerPatrick\/orgs","repos_url":"https:\/\/api.github.com\/users\/HallerPatrick\/repos","events_url":"https:\/\/api.github.com\/users\/HallerPatrick\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/HallerPatrick\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for the great feedback! 
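Once this is merged, loading should presumably be as simple as `load_dataset(\"enwik8\")`, assuming the loader keeps that name. 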
Looks like all tests are passing now :)","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652311502000,"updated_at":1654093650000,"closed_at":1654092246000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Because I regularly work with enwik8, I would like to contribute the dataset loader \ud83e\udd17 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4321\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4321","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4321","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4321.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4321.patch","merged_at":1654092246000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4320","id":1233208864,"node_id":"I_kwDODunzps5JgUYg","number":4320,"title":"Multi-news dataset loader attempts to strip wrong character from beginning of summaries","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/summarization\/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character","Cool! 
I made a PR."],"created_at":1652305001000,"updated_at":1652709130000,"closed_at":1652709130000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nThe `multi_news.py` data loader has [a line which attempts to strip `\"- \"` from the beginning of summaries](https:\/\/github.com\/huggingface\/datasets\/blob\/aa743886221d76afb409d263e1b136e7a71fe2b4\/datasets\/multi_news\/multi_news.py#L97). The actual character in the multi-news dataset, however, is `\"\u2013 \"`, which is different, e.g. `\"\u2013 \" != \"- \"`.\r\n\r\nI would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https:\/\/huggingface.co\/allenai\/PRIMERA-multinews) (you can see its in the generated summaries of the model in their [example notebook](https:\/\/github.com\/allenai\/PRIMER\/blob\/main\/Evaluation_Example.ipynb)).\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.2.0\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4320\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4319","id":1232982023,"node_id":"PR_kwDODunzps43q0UY","number":4319,"title":"Adding eval metadata for ade v2","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652290580000,"updated_at":1652362191000,"closed_at":1652361739000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adding 
metadata to allow evaluation","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4319\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4319","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4319","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4319.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4319.patch","merged_at":1652361739000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4318","id":1232905488,"node_id":"PR_kwDODunzps43qkkQ","number":4318,"title":"Don't check f.loc in _get_extraction_protocol_with_magic_number","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652286429000,"updated_at":1652288222000,"closed_at":1652287591000,"author_association":"MEMBER","active_lock_reason":null,"body":"`f.loc` doesn't always exist for file-like objects in python. 
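For illustration, the resulting pattern is roughly this sketch (not the exact diff; `MAGIC_NUMBER_MAX_LENGTH` stands in for the real constant):\r\n```python\r\nMAGIC_NUMBER_MAX_LENGTH = 8  # placeholder value, for the sketch only\r\n\r\ndef _get_extraction_protocol_with_magic_number(f):\r\n    \"\"\"Read the magic number from a file-like object and return the compression protocol.\"\"\"\r\n    magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)\r\n    f.seek(0)  # rewind to the start instead of restoring f.loc\r\n    ...  # match magic_number against the known compression signatures\r\n```\r\n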
I removed it since it was not necessary anyway (we always seek the file to 0 after reading the magic number)\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/4310","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4318\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4318","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4318","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4318.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4318.patch","merged_at":1652287591000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4317","id":1232737401,"node_id":"PR_kwDODunzps43qBzh","number":4317,"title":"Fix cnn_dailymail (dm stories were ignored)","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652279125000,"updated_at":1652284809000,"closed_at":1652284357000,"author_association":"MEMBER","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/pull\/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.\r\n\r\nI fixed that, and removed the google drive link (it has annoying quota limitations issues)\r\n\r\nWe can do a patch release after this is 
merged","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4317\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4317","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4317","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4317.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4317.patch","merged_at":1652284357000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4316","id":1232681207,"node_id":"PR_kwDODunzps43p1Za","number":4316,"title":"Support passing config_kwargs to CLI run_beam","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652277217000,"updated_at":1652279809000,"closed_at":1652279311000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR supports passing `config_kwargs` to CLI run_beam, so that for example for \"wikipedia\" dataset, we can pass:\r\n```\r\n--date 20220501 --language 
ca\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4316\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4316","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4316","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4316.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4316.patch","merged_at":1652279311000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4315","id":1232549330,"node_id":"PR_kwDODunzps43pZ6p","number":4315,"title":"Fix CLI run_beam namespace","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652271660000,"updated_at":1652274780000,"closed_at":1652274308000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, it raises TypeError:\r\n```\r\nTypeError: __init__() got an unexpected keyword argument 
'namespace'\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4315\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4315","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4315","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4315.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4315.patch","merged_at":1652274308000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4314","id":1232326726,"node_id":"PR_kwDODunzps43oqXD","number":4314,"title":"Catch pull error when mirroring","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652261915000,"updated_at":1652273647000,"closed_at":1652273202000,"author_association":"MEMBER","active_lock_reason":null,"body":"Catch pull errors when mirroring so that the script continues to update the other datasets.\r\n\r\nThe error will still be printed at the end of the job. 
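Schematically (a minimal sketch with hypothetical names, not the actual mirroring script):\r\n```python\r\ndef mirror(dataset_name):  # hypothetical stand-in for the pull + push logic\r\n    ...\r\n\r\nerrors = []\r\nfor dataset_name in [\"dataset_a\", \"dataset_b\"]:  # hypothetical repo list\r\n    try:\r\n        mirror(dataset_name)\r\n    except Exception as error:\r\n        errors.append((dataset_name, error))  # keep going, report at the end\r\nfor dataset_name, error in errors:\r\n    print(f\"Failed to mirror {dataset_name}: {error}\")\r\n```\r\n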
In this case the job also fails, and asks to manually update the datasets that failed.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4314\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4314","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4314","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4314.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4314.patch","merged_at":1652273202000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4313","id":1231764100,"node_id":"PR_kwDODunzps43m4qB","number":4313,"title":"Add API code examples for Builder classes","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652221352000,"updated_at":1652374963000,"closed_at":1652359017000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds API code examples for the Builder 
classes.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4313\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4313","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4313","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4313.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4313.patch","merged_at":1652359017000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4312","id":1231662775,"node_id":"PR_kwDODunzps43mlug","number":4312,"title":"added TR-News dataset","user":{"login":"batubayk","id":25901065,"node_id":"MDQ6VXNlcjI1OTAxMDY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25901065?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/batubayk","html_url":"https:\/\/github.com\/batubayk","followers_url":"https:\/\/api.github.com\/users\/batubayk\/followers","following_url":"https:\/\/api.github.com\/users\/batubayk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/batubayk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/batubayk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/batubayk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/batubayk\/orgs","repos_url":"https:\/\/api.github.com\/users\/batubayk\/repos","events_url":"https:\/\/api.github.com\/users\/batubayk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/batubayk\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1652214780000,"updated_at":1657120793000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4312\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4312","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4312","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4312.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4312.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4311","id":1231369438,"node_id":"PR_kwDODunzps43ln8-","number":4311,"title":"[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. 
Will do the release after it"],"created_at":1652197935000,"updated_at":1652203182000,"closed_at":1652202707000,"author_association":"MEMBER","active_lock_reason":null,"body":"I updated the `docs\/source\/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.\r\n\r\nWhile doing so I also improved a few aspects:\r\n- we don't need to infer labels from file names when there are metadata - they can just be in the metadata if necessary\r\n- raise informative error messages when metadata and images aren't linked correctly:\r\n - when an image is missing a metadata file\r\n - when a metadata file is missing an image\r\n\r\nI added some tests for these changes as well\r\n\r\ncc @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4311\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4311","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4311","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4311.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4311.patch","merged_at":1652202707000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4310","id":1231319815,"node_id":"I_kwDODunzps5JZHMH","number":4310,"title":"Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'","user":{"login":"milmin","id":72745467,"node_id":"MDQ6VXNlcjcyNzQ1NDY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/72745467?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/milmin","html_url":"https:\/\/github.com\/milmin","followers_url":"https:\/\/api.github.com\/users\/milmin\/followers","following_url":"https:\/\/api.github.com\/users\/milmin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/milmin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/milmin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/milmin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/milmin\/orgs","repos_url":"https:\/\/api.github.com\/users\/milmin\/repos","events_url":"https:\/\/api.github.com\/users\/milmin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/milmin\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1652195573000,"updated_at":1652287591000,"closed_at":1652287591000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nLoading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.\r\n\r\nIn the following steps we load parquet files but the same happens with pickle files. 
The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n# path is the path to parquet files\r\ndata_files = {\"train\": path + \"meta_train.parquet.gzip\", \"test\": path + \"meta_test.parquet.gzip\"}\r\ndataset = load_dataset(\"parquet\", data_files=data_files, streaming=True)\r\n```\r\n\r\n## Expected results\r\nA dataset object `datasets.dataset_dict.DatasetDict`\r\n\r\n## Actual results\r\n```\r\nAttributeError Traceback (most recent call last)\r\n in \r\n 11 \r\n 12 data_files = {\"train\": path + \"meta_train.parquet.gzip\", \"test\": path + \"meta_test.parquet.gzip\"}\r\n---> 13 dataset = load_dataset(\"parquet\", data_files=data_files, streaming=True)\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1679 if streaming:\r\n 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)\r\n-> 1681 return builder_instance.as_streaming_dataset(\r\n 1682 split=split,\r\n 1683 use_auth_token=use_auth_token,\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)\r\n 904 )\r\n 905 self._check_manual_download(dl_manager)\r\n--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 907 # By default, return all splits\r\n 908 if split is None:\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/parquet\/parquet.py in _split_generators(self, dl_manager)\r\n 30 if not self.config.data_files:\r\n 31 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 33 if isinstance(data_files, (str, list, tuple)):\r\n 34 files = data_files\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in download_and_extract(self, url_or_urls)\r\n 798 \r\n 799 def download_and_extract(self, url_or_urls):\r\n--> 800 return self.extract(self.download(url_or_urls))\r\n 801 \r\n 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in extract(self, path_or_paths)\r\n 776 \r\n 777 def extract(self, path_or_paths):\r\n--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n 779 return urlpaths\r\n 780 \r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)\r\n 312 num_proc = 1\r\n 313 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 314 
mapped = [\r\n 315 _single_map_nested((function, obj, types, None, True, None))\r\n 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in (.0)\r\n 313 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 314 mapped = [\r\n--> 315 _single_map_nested((function, obj, types, None, True, None))\r\n 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 317 ]\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in _single_map_nested(args)\r\n 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 268 else:\r\n--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 270 if isinstance(data_struct, list):\r\n 271 return mapped\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in (.0)\r\n 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 268 else:\r\n--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 270 if isinstance(data_struct, list):\r\n 271 return mapped\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in _single_map_nested(args)\r\n 249 # Singleton first to spare some computation\r\n 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 251 return function(data_struct)\r\n 252 \r\n 253 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in _extract(self, urlpath)\r\n 781 def _extract(self, urlpath: str) -> str:\r\n 782 urlpath = str(urlpath)\r\n--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 784 if protocol is None:\r\n 785 # no extraction\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)\r\n 371 urlpath, kwargs = urlpath, {}\r\n 372 with fsspec.open(urlpath, **kwargs) as f:\r\n--> 373 return _get_extraction_protocol_with_magic_number(f)\r\n 374 \r\n 375 \r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)\r\n 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:\r\n 336 \"\"\"read the magic number from a file-like object and return the compression protocol\"\"\"\r\n--> 337 prev_loc = f.loc\r\n 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)\r\n 339 f.seek(prev_loc)\r\n\r\n\/local_disk0\/.ephemeral_nfs\/envs\/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d\/lib\/python3.8\/site-packages\/fsspec\/implementations\/local.py in __getattr__(self, item)\r\n 337 \r\n 338 def __getattr__(self, item):\r\n--> 339 return getattr(self.f, item)\r\n 340 \r\n 341 def __enter__(self):\r\n\r\nAttributeError: 
'_io.BufferedReader' object has no attribute 'loc'\r\n```\r\n## Environment info\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.2\r\n- `fsspec` version: 2021.08.1\r\n- `s3fs` version: 2021.08.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4310\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4309","id":1231232935,"node_id":"PR_kwDODunzps43lKpm","number":4309,"title":"[WIP] Add TEDLIUM dataset","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('.\/datasets\/tedlium', 'release1', cache_dir='\/home\/sanchitgandhi\/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium\/release1 to \/home\/sanchitgandhi\/cache\/tedlium\/release1\/1.0.1\/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/sanchit_huggingface_co\/datasets\/src\/datasets\/load.py\", line 1703, in load_dataset\r\n 
builder_instance.download_and_prepare(\r\n File \"\/home\/sanchit_huggingface_co\/datasets\/src\/datasets\/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/sanchit_huggingface_co\/datasets\/src\/datasets\/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https:\/\/beam.apache.org\/documentation\/runners\/capability-matrix\/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('.\/datasets\/tedlium', 'release1', cache_dir='\/home\/sanchitgandhi\/cache', beam_runner='DirectRunner')\r\n```","Extra Python imports\/Linux packages:\r\n```\r\npip install pydub\r\nsudo apt install ffmpeg\r\n```","Script heavily inspired by the TF datasets script at: https:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/audio\/tedlium.py\r\n\r\nThe TF datasets script uses the module AudioSegment from the package `pydub` (https:\/\/github.com\/jiaaro\/pydub), which is used to open the audio files (stored in .sph format):\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/61bf6123634bf6e7c7287cd6097909eb26118c58\/datasets\/tedlium\/tedlium.py#L167-L170\r\nThis package requires the pip install of `pydub` and the system installation of `ffmpeg`: https:\/\/github.com\/jiaaro\/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nThe TF datasets script also uses `_build_pcollection`:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8afbbb6fe66b40d05574e2e72e65e974c72ae769\/datasets\/tedlium\/tedlium.py#L200-L206\r\nHowever, I was advised against using `beam` logic. Thus, I have reverted to generating the examples file-by-file: https:\/\/github.com\/huggingface\/datasets\/blob\/61bf6123634bf6e7c7287cd6097909eb26118c58\/datasets\/tedlium\/tedlium.py#L112-L138\r\n\r\nI am now able to generate examples by running the `load_dataset` command:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('.\/datasets\/tedlium', 'release1', cache_dir='\/home\/sanchitgandhi\/cache')\r\n```\r\n\r\nHere, generating examples is **extremely** slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of parallelizing this to make it faster?","> This package requires the pip install of pydub and the system installation of ffmpeg: https:\/\/github.com\/jiaaro\/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nIt's ok, Windows users will have a bad time but I'm not sure we can do much about it.\r\n\r\n> Here, generating examples is extremely slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of parallelizing this to make it faster?\r\n\r\nNot at the moment. 
For such cases we advise hosting the dataset ourselves in a processed format. However, the license doesn't allow this, since the license is \"NoDerivatives\". Currently the only way to parallelize it is by keeping it as a beam dataset and letting users pay Google Dataflow to process it (or use spark or whatever).","Thanks for your super speedy reply @lhoestq!\r\n\r\nI\u2019ve uploaded the script and README.md to the org here: https:\/\/huggingface.co\/datasets\/LIUM\/tedlium\r\nIs any modification of the script required to be able to use it from the Hub? When I run:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntedlium = load_dataset(\"LIUM\/tedlium\", \"release1\") # for Release 1\r\n```\r\nI get the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in ()\r\n----> 1 load_dataset(\"LIUM\/tedlium\", \"release1\")\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:1676, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 ignore_verifications = ignore_verifications or save_infos\r\n 1675 # Create a dataset builder\r\n-> 1676 builder_instance = load_dataset_builder(\r\n 1677 path=path,\r\n 1678 name=name,\r\n 1679 data_dir=data_dir,\r\n 1680 data_files=data_files,\r\n 1681 cache_dir=cache_dir,\r\n 1682 features=features,\r\n 1683 download_config=download_config,\r\n 1684 download_mode=download_mode,\r\n 1685 revision=revision,\r\n 1686 use_auth_token=use_auth_token,\r\n 1687 **config_kwargs,\r\n 1688 )\r\n 1690 # Return iterable dataset in case of streaming\r\n 1691 if streaming:\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:1502, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1500 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1501 download_config.use_auth_token = use_auth_token\r\n-> 1502 dataset_module = dataset_module_factory(\r\n 1503 path,\r\n 1504 revision=revision,\r\n 1505 download_config=download_config,\r\n 1506 download_mode=download_mode,\r\n 1507 data_dir=data_dir,\r\n 1508 data_files=data_files,\r\n 1509 )\r\n 1511 # Get dataset builder class from the processing script\r\n 1512 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:1254, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1249 if isinstance(e1, FileNotFoundError):\r\n 1250 raise FileNotFoundError(\r\n 1251 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. 
\"\r\n 1252 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1253 ) from None\r\n-> 1254 raise e1 from None\r\n 1255 else:\r\n 1256 raise FileNotFoundError(\r\n 1257 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.\"\r\n 1258 )\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:1227, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1225 raise e\r\n 1226 if filename in [sibling.rfilename for sibling in dataset_info.siblings]:\r\n-> 1227 return HubDatasetModuleFactoryWithScript(\r\n 1228 path,\r\n 1229 revision=revision,\r\n 1230 download_config=download_config,\r\n 1231 download_mode=download_mode,\r\n 1232 dynamic_modules_path=dynamic_modules_path,\r\n 1233 ).get_module()\r\n 1234 else:\r\n 1235 return HubDatasetModuleFactoryWithoutScript(\r\n 1236 path,\r\n 1237 revision=revision,\r\n (...)\r\n 1241 download_mode=download_mode,\r\n 1242 ).get_module()\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:940, in HubDatasetModuleFactoryWithScript.get_module(self)\r\n 938 def get_module(self) -> DatasetModule:\r\n 939 # get script and other files\r\n--> 940 local_path = self.download_loading_script()\r\n 941 dataset_infos_path = self.download_dataset_infos_file()\r\n 942 imports = get_imports(local_path)\r\n\r\nFile ~\/datasets\/src\/datasets\/load.py:918, in HubDatasetModuleFactoryWithScript.download_loading_script(self)\r\n 917 def download_loading_script(self) -> str:\r\n--> 918 file_path = hf_hub_url(path=self.name, name=self.name.split(\"\/\")[1] + \".py\", revision=self.revision)\r\n 919 download_config = self.download_config.copy()\r\n 920 if download_config.download_desc is None:\r\n\r\nTypeError: hf_hub_url() got an unexpected keyword argument 'name'\r\n```\r\n\r\nNote that I am able to load the dataset from the `datasets` repo with the following lines of code:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('.\/datasets\/tedlium', 'release1', cache_dir='\/home\/sanchitgandhi\/cache')\r\n```","What version of `datasets` do you have ?\r\nUpdating `datasets` should fix the error ;)\r\n","> This package requires the pip install of pydub and the system installation of ffmpeg: https:\/\/github.com\/jiaaro\/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\n`soundfile`, which is a required audio dependency, should also work with `.sph` files, no?","> `soundfile`, which is a required audio dependency, should also work with `.sph` files, no?\r\n\r\nAwesome, thanks for the pointer @mariosasko! Switched `pydub` to `soundfile`, and having specifying the `dtype` argument in `soundfile.read` as `np.int16`, the arrays match with those from `pydub` \u2705\r\n\r\nI also did some heavy optimising of the script with the processing of the `.stm` and `.sph` files - it now runs 2000x faster than before, so there probably isn't a need to upload the data to the Hub @lhoestq. 
The total processing time is just ~2mins now \ud83d\ude80\r\n","TEDLIUM completed and uploaded to the HF Hub: https:\/\/huggingface.co\/datasets\/LIUM\/tedlium","Awesome !"],"created_at":1652191967000,"updated_at":1655470480000,"closed_at":1655466241000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds the TED-LIUM dataset https:\/\/www.tensorflow.org\/datasets\/catalog\/tedlium#tedliumrelease3 \r\n\r\nTODO:\r\n\r\n- [x] Port `tedlium.py` from TF datasets using `convert_dataset.sh` script\r\n- [x] Make `load_dataset` work\r\n- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~\r\n- [ ] ~~Create dummy data for continuous testing~~\r\n- [ ] ~~Dummy data tests~~\r\n- [ ] ~~Real data tests~~\r\n- [ ] Create the metadata JSON\r\n- [ ] Close PR and add directly to the Hub under LIUM org","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4309\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4309","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4309","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4309.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4309.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4308","id":1231217783,"node_id":"PR_kwDODunzps43lHdP","number":4308,"title":"Remove unused multiprocessing args from test CLI","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1652191335000,"updated_at":1652273905000,"closed_at":1652273443000,"author_association":"MEMBER","active_lock_reason":null,"body":"Multiprocessing is not used in the test CLI.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4308\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4308","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4308","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4308.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4308.patch","merged_at":1652273442000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4307","id":1231175639,"node_id":"PR_kwDODunzps43k-Wo","number":4307,"title":"Add packaged builder configs to the documentation","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652189659000,"updated_at":1652191430000,"closed_at":1652190954000,"author_association":"MEMBER","active_lock_reason":null,"body":"Add the packaged builders configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, 
etc.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4307\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4307","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4307","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4307.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4307.patch","merged_at":1652190954000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4306","id":1231137204,"node_id":"I_kwDODunzps5JYam0","number":4306,"title":"`load_dataset` does not work with certain filename.","user":{"login":"wusuowei60","id":57242693,"node_id":"MDQ6VXNlcjU3MjQyNjkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57242693?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wusuowei60","html_url":"https:\/\/github.com\/wusuowei60","followers_url":"https:\/\/api.github.com\/users\/wusuowei60\/followers","following_url":"https:\/\/api.github.com\/users\/wusuowei60\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wusuowei60\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wusuowei60\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wusuowei60\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wusuowei60\/orgs","repos_url":"https:\/\/api.github.com\/users\/wusuowei60\/repos","events_url":"https:\/\/api.github.com\/users\/wusuowei60\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wusuowei60\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Never mind. 
It is because of the caching of datasets..."],"created_at":1652188444000,"updated_at":1652209116000,"closed_at":1652209089000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThis is a weird bug that took me some time to find out.\r\n\r\nI have a JSON dataset that I want to load with `load_dataset` like this:\r\n\r\n```\r\ndata_files = dict(train=\"train.json.zip\", val=\"val.json.zip\")\r\ndataset = load_dataset(\"json\", data_files=data_files, field=\"data\")\r\n```\r\n\r\n## Expected results\r\nNo error.\r\n\r\n## Actual results\r\nThe val file is loaded as expected, but the train file throws JSON decoding error:\r\n\r\n```\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 :5 in \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/load.py:1687 in \u2502\r\n\u2502 load_dataset \u2502\r\n\u2502 \u2502\r\n\u2502 1684 \u2502 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES \u2502\r\n\u2502 1685 \u2502 \u2502\r\n\u2502 1686 \u2502 # Download and prepare data \u2502\r\n\u2502 \u2771 1687 \u2502 builder_instance.download_and_prepare( \u2502\r\n\u2502 1688 \u2502 \u2502 download_config=download_config, \u2502\r\n\u2502 1689 \u2502 \u2502 download_mode=download_mode, \u2502\r\n\u2502 1690 \u2502 \u2502 ignore_verifications=ignore_verifications, \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py:605 in \u2502\r\n\u2502 download_and_prepare \u2502\r\n\u2502 \u2502\r\n\u2502 602 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 except ConnectionError: \u2502\r\n\u2502 603 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 logger.warning(\"HF google storage unreachable. Downloa \u2502\r\n\u2502 604 \u2502 \u2502 \u2502 \u2502 \u2502 if not downloaded_from_gcs: \u2502\r\n\u2502 \u2771 605 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 self._download_and_prepare( \u2502\r\n\u2502 606 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 dl_manager=dl_manager, verify_infos=verify_infos, **do \u2502\r\n\u2502 607 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 ) \u2502\r\n\u2502 608 \u2502 \u2502 \u2502 \u2502 \u2502 # Sync info \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py:694 in \u2502\r\n\u2502 _download_and_prepare \u2502\r\n\u2502 \u2502\r\n\u2502 691 \u2502 \u2502 \u2502 \u2502\r\n\u2502 692 \u2502 \u2502 \u2502 try: \u2502\r\n\u2502 693 \u2502 \u2502 \u2502 \u2502 # Prepare split will record examples associated to the split \u2502\r\n\u2502 \u2771 694 \u2502 \u2502 \u2502 \u2502 self._prepare_split(split_generator, **prepare_split_kwargs) \u2502\r\n\u2502 695 \u2502 \u2502 \u2502 except OSError as e: \u2502\r\n\u2502 696 \u2502 \u2502 \u2502 \u2502 raise OSError( \u2502\r\n\u2502 697 \u2502 \u2502 \u2502 \u2502 \u2502 \"Cannot find data file. 
\" \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py:1151 in \u2502\r\n\u2502 _prepare_split \u2502\r\n\u2502 \u2502\r\n\u2502 1148 \u2502 \u2502 \u2502\r\n\u2502 1149 \u2502 \u2502 generator = self._generate_tables(**split_generator.gen_kwargs) \u2502\r\n\u2502 1150 \u2502 \u2502 with ArrowWriter(features=self.info.features, path=fpath) as writer: \u2502\r\n\u2502 \u2771 1151 \u2502 \u2502 \u2502 for key, table in logging.tqdm( \u2502\r\n\u2502 1152 \u2502 \u2502 \u2502 \u2502 generator, unit=\" tables\", leave=False, disable=True # not loggin \u2502\r\n\u2502 1153 \u2502 \u2502 \u2502 ): \u2502\r\n\u2502 1154 \u2502 \u2502 \u2502 \u2502 writer.write_table(table) \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/tqdm\/notebook.py:257 in \u2502\r\n\u2502 __iter__ \u2502\r\n\u2502 \u2502\r\n\u2502 254 \u2502 \u2502\r\n\u2502 255 \u2502 def __iter__(self): \u2502\r\n\u2502 256 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 257 \u2502 \u2502 \u2502 for obj in super(tqdm_notebook, self).__iter__(): \u2502\r\n\u2502 258 \u2502 \u2502 \u2502 \u2502 # return super(tqdm...) will not catch exception \u2502\r\n\u2502 259 \u2502 \u2502 \u2502 \u2502 yield obj \u2502\r\n\u2502 260 \u2502 \u2502 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/tqdm\/std.py:1183 in \u2502\r\n\u2502 __iter__ \u2502\r\n\u2502 \u2502\r\n\u2502 1180 \u2502 \u2502 # If the bar is disabled, then just walk the iterable \u2502\r\n\u2502 1181 \u2502 \u2502 # (note: keep this check outside the loop for performance) \u2502\r\n\u2502 1182 \u2502 \u2502 if self.disable: \u2502\r\n\u2502 \u2771 1183 \u2502 \u2502 \u2502 for obj in iterable: \u2502\r\n\u2502 1184 \u2502 \u2502 \u2502 \u2502 yield obj \u2502\r\n\u2502 1185 \u2502 \u2502 \u2502 return \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/j \u2502\r\n\u2502 son\/json.py:90 in _generate_tables \u2502\r\n\u2502 \u2502\r\n\u2502 87 \u2502 \u2502 \u2502 # If the file is one json object and if we need to look at the list of \u2502\r\n\u2502 88 \u2502 \u2502 \u2502 if self.config.field is not None: \u2502\r\n\u2502 89 \u2502 \u2502 \u2502 \u2502 with open(file, encoding=\"utf-8\") as f: \u2502\r\n\u2502 \u2771 90 \u2502 \u2502 \u2502 \u2502 \u2502 dataset = json.load(f) \u2502\r\n\u2502 91 \u2502 \u2502 \u2502 \u2502 \u2502\r\n\u2502 92 \u2502 \u2502 \u2502 \u2502 # We keep only the field we are interested in \u2502\r\n\u2502 93 \u2502 \u2502 \u2502 \u2502 dataset = dataset[self.config.field] \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/json\/__init__.py:293 in load \u2502\r\n\u2502 \u2502\r\n\u2502 290 \u2502 To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` \u2502\r\n\u2502 291 \u2502 kwarg; otherwise ``JSONDecoder`` is used. 
\u2502\r\n\u2502 292 \u2502 \"\"\" \u2502\r\n\u2502 \u2771 293 \u2502 return loads(fp.read(), \u2502\r\n\u2502 294 \u2502 \u2502 cls=cls, object_hook=object_hook, \u2502\r\n\u2502 295 \u2502 \u2502 parse_float=parse_float, parse_int=parse_int, \u2502\r\n\u2502 296 \u2502 \u2502 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/json\/__init__.py:357 in loads \u2502\r\n\u2502 \u2502\r\n\u2502 354 \u2502 if (cls is None and object_hook is None and \u2502\r\n\u2502 355 \u2502 \u2502 \u2502 parse_int is None and parse_float is None and \u2502\r\n\u2502 356 \u2502 \u2502 \u2502 parse_constant is None and object_pairs_hook is None and not kw): \u2502\r\n\u2502 \u2771 357 \u2502 \u2502 return _default_decoder.decode(s) \u2502\r\n\u2502 358 \u2502 if cls is None: \u2502\r\n\u2502 359 \u2502 \u2502 cls = JSONDecoder \u2502\r\n\u2502 360 \u2502 if object_hook is not None: \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/json\/decoder.py:337 in decode \u2502\r\n\u2502 \u2502\r\n\u2502 334 \u2502 \u2502 containing a JSON document). \u2502\r\n\u2502 335 \u2502 \u2502 \u2502\r\n\u2502 336 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 \u2771 337 \u2502 \u2502 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) \u2502\r\n\u2502 338 \u2502 \u2502 end = _w(s, end).end() \u2502\r\n\u2502 339 \u2502 \u2502 if end != len(s): \u2502\r\n\u2502 340 \u2502 \u2502 \u2502 raise JSONDecodeError(\"Extra data\", s, end) \u2502\r\n\u2502 \u2502\r\n\u2502 \/home\/tiankang\/software\/anaconda3\/lib\/python3.8\/json\/decoder.py:353 in raw_decode \u2502\r\n\u2502 \u2502\r\n\u2502 350 \u2502 \u2502 \u2502\r\n\u2502 351 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 352 \u2502 \u2502 try: \u2502\r\n\u2502 \u2771 353 \u2502 \u2502 \u2502 obj, end = self.scan_once(s, idx) \u2502\r\n\u2502 354 \u2502 \u2502 except StopIteration as err: \u2502\r\n\u2502 355 \u2502 \u2502 \u2502 raise JSONDecodeError(\"Expecting value\", s, err.value) from None \u2502\r\n\u2502 356 \u2502 \u2502 return obj, end \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nJSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051)\r\n```\r\n\r\nHowever, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.\r\n\r\n## Environment info\r\n```\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 
1.4.2\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4306\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4305","id":1231099934,"node_id":"PR_kwDODunzps43kt4P","number":4305,"title":"Fixes FrugalScore","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4305). All of your documentation changes will be reflected on that endpoint.","> predictions and references are swapped. Basically FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions.\r\n\r\nWhat is the order of magnitude of the difference ? Do you know what causes this ?\r\n\r\n> I switched to the dynamic padding that was used in the training; forcing the padding to max_length introduces errors for some reason that I don't know.\r\n\r\nWhat error ?"],"created_at":1652186646000,"updated_at":1657120793000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"There are two minor modifications in this PR:\r\n1) `predictions` and `references` are swapped. Basically FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions. 
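As a rough illustration of that near-commutativity (the metric id and the `compute` call shape follow the standard `datasets` metrics API; the exact scores are not claimed here):

```python
from datasets import load_metric

frugalscore = load_metric("frugalscore")
preds = ["hello there general kenobi", "foo bar foobar"]
refs = ["hello there", "foo bar"]

forward = frugalscore.compute(predictions=preds, references=refs)
backward = frugalscore.compute(predictions=refs, references=preds)
# forward and backward are expected to be nearly identical; the tiny
# residual difference is what motivated fixing the argument order
```
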
I decided to swap them just to obtain the exact results as reported in the paper.\r\n2) I switched to the dynamic padding that was used in the training; forcing the padding to `max_length` introduces errors for some reason that I don't know.\r\n\r\n@lhoestq ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4305\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4305","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4305","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4305.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4305.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4304","id":1231047051,"node_id":"I_kwDODunzps5JYEmL","number":4304,"title":"Language code search does direct matches","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two-letter codes for now."],"created_at":1652183956000,"updated_at":1652186322000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nHi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). 
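As a rough sketch of the difference being reported (function names are hypothetical; the primary-subtag comparison is the `languagecode.split('-')[0]` idea floated further down):

```python
tags = ["sq-AL", "da-bornholm", "fr-CA", "en"]

def direct_match(query):
    # current behaviour: exact string comparison against the full tag
    return [t for t in tags if t == query]

def prefix_match(query):
    # prefix-aware behaviour: compare only the primary language subtag
    return [t for t in tags if t.split("-")[0] == query]

print(direct_match("sq"))  # [] -> the sq-AL dataset stays hidden
print(prefix_match("sq"))  # ['sq-AL']
```
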
The example codes given in the [tagging app](https:\/\/huggingface.co\/spaces\/huggingface\/datasets-tagging) encourage addition of the additional codes (\"_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_\") but this would lead to those datasets being hidden in datasets search.\r\n\r\n## Steps to reproduce the bug\r\n1. Add a dataset using a variant tag (e.g. [`sq-AL`](https:\/\/huggingface.co\/datasets?languages=languages:sq-AL))\r\n2. Look for datasets using the full code \r\n3. Note that they're missing when just the language is searched for (e.g. [`sq`](https:\/\/huggingface.co\/datasets?languages=languages:sq))\r\n\r\nSome datasets are already affected by this - e.g. `AmazonScience\/massive` is listed under `sq-AL` but not `sq`.\r\n\r\nOne workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :)\r\n\r\n## Expected results\r\nDatasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`).\r\n\r\n## Actual results\r\nThe language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches.\r\n\r\n## Environment info\r\n(web app)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4304\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4303","id":1230867728,"node_id":"PR_kwDODunzps43j8cH","number":4303,"title":"Fix: Add missing 
comma","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The CI failure is unrelated to this PR and fixed on master, merging :)"],"created_at":1652174498000,"updated_at":1652259015000,"closed_at":1652259014000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4303\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4303","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4303","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4303.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4303.patch","merged_at":1652259014000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4302","id":1230651117,"node_id":"PR_kwDODunzps43jPE5","number":4302,"title":"Remove hacking license tags when mirroring datasets on the 
Hub","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.","Ok, let me rename the bad config names :) I think I can also keep backward compatibility with a warning","Almost done with it btw, will submit a PR that shows all the configuration name changes (from a bit more than 20 datasets)","Please, let me know when the renaming of configs is done. If not enough bandwidth, I can take care of it...","Will focus on this this afternoon ;)","I realized when renaming all the configurations with dots in https:\/\/github.com\/huggingface\/datasets\/pull\/4365 that it's not ideal for certain cases. For example:\r\n- many configurations have a version like \"1.0.0\" in their names\r\n- to avoid breaking changes we need to replace dots with underscores in the user input and show a warning, which hurts the experience\r\n- our second most downloaded dataset at the moment is affected: `newsgroup`\r\n- if we disallow dots, then we'll never be able to make the [allenai\/c4](https:\/\/huggingface.co\/datasets\/allenai\/c4) work with its different configurations since they contain dots, and we can't rename them because they are the official download links\r\n\r\nI was thinking of other alternatives:\r\n1. just stop separating tags per config name completely, and have a single flat YAML for all configurations. Dataset search doesn't use this info anyway\r\n2. 
use another YAML structure to avoid having config names as keys, such as\r\n```yaml\r\nlanguages:\r\n- config: 20220301_en\r\n values:\r\n - en\r\n```\r\n\r\nI'm down for 1, to keep things simple","@lhoestq I agree:\r\n- better not changing config names (so that we do not introduce any breaking change)\r\n- therefore, we should not use them as keys\r\n\r\nIn relation to the proposed solutions, I have no strong opinion:\r\n- option 1 is simpler and aligns better with current usage on the Hub (configs are ignored)\r\n- however:\r\n - we will lose all the information per config we already have (for those datasets containing config keys; contributors made an effort to put that information per config)\r\n - and this information might be useful on the Hub in the future, in case we would like to enrich the search feature with more granularity; this is only applicable if this feature could eventually make sense\r\n\r\nSo, no strong opinion...","Closing in favor of https:\/\/github.com\/huggingface\/datasets\/pull\/4367"],"created_at":1652161966000,"updated_at":1653040110000,"closed_at":1653039620000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, when mirroring datasets on the Hub, the license tags are hacked: stripped of the characters \".\" and \"$\". On the contrary, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub. \r\n\r\nI guess this hacking is no longer necessary:\r\n- it is not applied to community datasets\r\n- all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones\r\n\r\nFix #4298.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4302\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4302","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4302","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4302.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4302.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4301","id":1230401256,"node_id":"PR_kwDODunzps43idlE","number":4301,"title":"Add ImageNet-Sketch 
dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright."],"created_at":1652139525000,"updated_at":1653329654000,"closed_at":1653329129000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds the ImageNet-Sketch dataset and resolves #3953 .","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4301\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4301","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4301","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4301.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4301.patch","merged_at":1653329129000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4300","id":1230272761,"node_id":"PR_kwDODunzps43iA86","number":4300,"title":"Add API code examples for loading 
methods","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652131826000,"updated_at":1653495795000,"closed_at":1653470413000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)\r\n\r\nI was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. 
But when I try the identifier `rotten_tomatoes`, it gives me:\r\n\r\n```py\r\nfrom datasets import inspect_dataset\r\ninspect_dataset('rotten_tomatoes', local_path='\/content\/rotten_tomatoes')\r\n\r\nFileNotFoundError: Couldn't find a dataset script at \/content\/rotten_tomatoes\/rotten_tomatoes.py or any data file in the same directory.\r\n```\r\n\r\nDoes the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4300\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4300","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4300","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4300.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4300.patch","merged_at":1653470412000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4299","id":1230236782,"node_id":"PR_kwDODunzps43h5RP","number":4299,"title":"Remove manual download from imagenet-1k","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train\/val\/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.","@apsdehal I dismissed your review as it's no longer relevant after the data files changes suggested by @lhoestq. 
"],"created_at":1652129358000,"updated_at":1653490499000,"closed_at":1653489976000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Remove the manual download code from `imagenet-1k` to make it a regular dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4299\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4299","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4299","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4299.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4299.patch","merged_at":1653489976000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4298","id":1229748006,"node_id":"I_kwDODunzps5JTHcm","number":4298,"title":"Normalise license names","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["we'll add the same server-side metadata validation system as for hf.co\/models soon-ish\r\n\r\n(you can check on hf.co\/models that licenses are \"clean\")","Fixed by #4367."],"created_at":1652104292000,"updated_at":1653040310000,"closed_at":1653040310000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen browsing datasets, the Licenses tag cloud (bottom left of e.g. https:\/\/huggingface.co\/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. 
The duplicates are probably due to slight variations in the metadata.\r\n\r\n**Describe the solution you'd like**\r\nI'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src\/datasets\/utils\/resources\/licenses.json](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/utils\/resources\/licenses.json) .\r\n\r\n**Describe alternatives you've considered**\r\nNone\r\n\r\n**Additional context**\r\nNone\r\n\r\n**Priority** \r\nLow\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4298\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4297","id":1229735498,"node_id":"I_kwDODunzps5JTEZK","number":4297,"title":"Datasets YAML tagging space is down","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess","Thanks for reporting, fixing it now","It's up again :)"],"created_at":1652103905000,"updated_at":1652107465000,"closed_at":1652107465000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nThe neat hf spaces app for generating YAML tags for dataset `README.md`s is down\r\n\r\n## Steps to reproduce the bug\r\n1. 
Visit https:\/\/huggingface.co\/spaces\/huggingface\/datasets-tagging\r\n\r\n## Expected results\r\nThere'll be an HF Spaces web app for generating dataset metadata YAML\r\n\r\n## Actual results\r\nThere's an error message; here's the step where it breaks:\r\n\r\n```\r\nStep 18\/29 : RUN pip install -r requirements.txt\r\n ---> Running in e88bfe7e7e0c\r\nDefaulting to user installation because normal site-packages is not writeable\r\nCollecting git+https:\/\/github.com\/huggingface\/datasets.git@update-task-list (from -r requirements.txt (line 4))\r\n Cloning https:\/\/github.com\/huggingface\/datasets.git (to revision update-task-list) to \/tmp\/pip-req-build-bm8t0r0k\r\n Running command git clone --filter=blob:none --quiet https:\/\/github.com\/huggingface\/datasets.git \/tmp\/pip-req-build-bm8t0r0k\r\n WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref.\r\n Running command git checkout -q update-task-list\r\n error: pathspec 'update-task-list' did not match any file(s) known to git\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 git checkout -q update-task-list did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> See above for output.\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: subprocess-exited-with-error\r\n\r\n\u00d7 git checkout -q update-task-list did not run successfully.\r\n\u2502 exit code: 1\r\n\u2570\u2500> See above for output.\r\n```\r\n\r\n## Environment info\r\n\r\n- Platform: Linux \/ Brave\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4297\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4296","id":1229554645,"node_id":"PR_kwDODunzps43foZ-","number":4296,"title":"Fix URL query parameters in compression hop path when 
streaming","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4296). All of your documentation changes will be reflected on that endpoint."],"created_at":1652095102000,"updated_at":1657120793000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #3488.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4296\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4296","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4296","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4296.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4296.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4295","id":1229527283,"node_id":"PR_kwDODunzps43fieR","number":4295,"title":"Fix missing lz4 dependency for 
tests","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652093600000,"updated_at":1652095282000,"closed_at":1652094824000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4295\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4295","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4295","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4295.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4295.patch","merged_at":1652094824000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4294","id":1229455582,"node_id":"PR_kwDODunzps43fTXA","number":4294,"title":"Fix CLI run_beam 
save_infos","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1652089663000,"updated_at":1652166244000,"closed_at":1652165770000,"author_association":"MEMBER","active_lock_reason":null,"body":"Currently, it raises TypeError:\r\n```\r\nTypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4294\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4294","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4294","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4294.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4294.patch","merged_at":1652165770000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4293","id":1228815477,"node_id":"PR_kwDODunzps43dRt9","number":4293,"title":"Fix wrong map parameter name in cache 
docs","user":{"login":"h4iku","id":3812788,"node_id":"MDQ6VXNlcjM4MTI3ODg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3812788?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/h4iku","html_url":"https:\/\/github.com\/h4iku","followers_url":"https:\/\/api.github.com\/users\/h4iku\/followers","following_url":"https:\/\/api.github.com\/users\/h4iku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/h4iku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/h4iku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/h4iku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/h4iku\/orgs","repos_url":"https:\/\/api.github.com\/users\/h4iku\/repos","events_url":"https:\/\/api.github.com\/users\/h4iku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/h4iku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651994866000,"updated_at":1655225340000,"closed_at":1655222820000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The `load_from_cache` parameter of `map` should be `load_from_cache_file`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4293\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4293","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4293","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4293.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4293.patch","merged_at":1655222820000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4292","id":1228216788,"node_id":"PR_kwDODunzps43bhrp","number":4292,"title":"Add API code examples for remaining main 
classes","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651860931000,"updated_at":1653501913000,"closed_at":1653501396000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4292\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4292","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4292","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4292.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4292.patch","merged_at":1653501396000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4291","id":1227777500,"node_id":"I_kwDODunzps5JLmXc","number":4291,"title":"Dataset Viewer issue for strombergnlp\/ipm_nel : preview is empty, no error 
message","user":{"login":"leondz","id":121934,"node_id":"MDQ6VXNlcjEyMTkzNA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/121934?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leondz","html_url":"https:\/\/github.com\/leondz","followers_url":"https:\/\/api.github.com\/users\/leondz\/followers","following_url":"https:\/\/api.github.com\/users\/leondz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leondz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leondz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leondz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leondz\/orgs","repos_url":"https:\/\/api.github.com\/users\/leondz\/repos","events_url":"https:\/\/api.github.com\/users\/leondz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leondz\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to 
`load_dataset`). Whereas most of the datasets are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being a TAR archive. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.","Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)"],"created_at":1651838607000,"updated_at":1652084758000,"closed_at":1652084758000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"### Link\n\nhttps:\/\/huggingface.co\/datasets\/strombergnlp\/ipm_nel\/viewer\/ipm_nel\/train\n\n### Description\n\nThe viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?\n\n### Owner\n\nYes","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4291\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4290","id":1227592826,"node_id":"PR_kwDODunzps43Zr08","number":4290,"title":"Update README.md","user":{"login":"monk1337","id":17107749,"node_id":"MDQ6VXNlcjE3MTA3NzQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17107749?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/monk1337","html_url":"https:\/\/github.com\/monk1337","followers_url":"https:\/\/api.github.com\/users\/monk1337\/followers","following_url":"https:\/\/api.github.com\/users\/monk1337\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/monk1337\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/monk1337\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/monk1337\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/monk1337\/orgs","repos_url":"https:\/\/api.github.com\/users\/monk1337\/repos","events_url":"https:\/\/api.github.com\/users\/monk1337\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/monk1337\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4290). 
All of your documentation changes will be reflected on that endpoint.","@albertvillanova Kindly check :)"],"created_at":1651827171000,"updated_at":1657120793000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Updating readme in medmcqa dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4290\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4290","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4290","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4290.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4290.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4288","id":1226821732,"node_id":"PR_kwDODunzps43XLKi","number":4288,"title":"Add missing `faiss` import to fix https:\/\/github.com\/huggingface\/datasets\/issues\/4287","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1651764109000,"updated_at":1652187306000,"closed_at":1652184588000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR fixes the issue recently mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/4287 \ud83e\udd17 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4288\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4288","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4288","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4288.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4288.patch","merged_at":1652184588000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4287","id":1226806652,"node_id":"I_kwDODunzps5JH5V8","number":4287,"title":"\"NameError: name 'faiss' is not defined\" on `.add_faiss_index` when `device` is not None","user":{"login":"alvarobartt","id":36760800,"node_id":"MDQ6VXNlcjM2NzYwODAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36760800?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alvarobartt","html_url":"https:\/\/github.com\/alvarobartt","followers_url":"https:\/\/api.github.com\/users\/alvarobartt\/followers","following_url":"https:\/\/api.github.com\/users\/alvarobartt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alvarobartt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alvarobartt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alvarobartt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alvarobartt\/orgs","repos_url":"https:\/\/api.github.com\/users\/alvarobartt\/repos","events_url":"https:\/\/api.github.com\/users\/alvarobartt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alvarobartt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https:\/\/github.com\/huggingface\/datasets\/blob\/f51b6994db27ea69261ef919fb7775928f9ec10b\/src\/datasets\/search.py#L305, triggered from https:\/\/github.com\/huggingface\/datasets\/blob\/f51b6994db27ea69261ef919fb7775928f9ec10b\/src\/datasets\/search.py#L249 when trying to `ds_with_embeddings.add_faiss_index(column='embeddings', device=0)` with the code above.\r\n\r\nAs it seems that the `@staticmethod` doesn't recognize the `import faiss` defined in https:\/\/github.com\/huggingface\/datasets\/blob\/f51b6994db27ea69261ef919fb7775928f9ec10b\/src\/datasets\/search.py#L261, so whenever the value of `device` is not None in 
https:\/\/github.com\/huggingface\/datasets\/blob\/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66\/src\/datasets\/search.py#L438, that exception is triggered.\r\n\r\nSo, adding `import faiss` inside https:\/\/github.com\/huggingface\/datasets\/blob\/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66\/src\/datasets\/search.py#L305 right after the check of `device`'s value solves the issue and lets you calculate the indices on GPU.\r\n\r\nI'll add the code in a PR linked to this issue in case you want to merge it!","Adding the complete error traceback here!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/alvarobartt\/lol.py\", line 12, in \r\n ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n File \"\/home\/alvarobartt\/.local\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 3656, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"\/home\/alvarobartt\/.local\/lib\/python3.9\/site-packages\/datasets\/search.py\", line 478, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=True)\r\n File \"\/home\/alvarobartt\/.local\/lib\/python3.9\/site-packages\/datasets\/search.py\", line 281, in add_vectors\r\n self.faiss_index = self._faiss_index_to_device(index, self.device)\r\n File \"\/home\/alvarobartt\/.local\/lib\/python3.9\/site-packages\/datasets\/search.py\", line 327, in _faiss_index_to_device\r\n faiss_res = faiss.StandardGpuResources()\r\nNameError: name 'faiss' is not defined\r\n```","Closed as https:\/\/github.com\/huggingface\/datasets\/pull\/4288 is already merged! :hugs:"],"created_at":1651763385000,"updated_at":1652190799000,"closed_at":1652190799000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.\r\n\r\nAll this assumes that `datasets` and `faiss-gpu` are properly installed, as well as all the required CUDA drivers.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset\r\nds = load_dataset('crime_and_punish', split='train[:100]')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example[\"line\"], return_tensors=\"pt\"))[0][0].numpy()})\r\n\r\nds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n```\r\n\r\n## Expected results\r\n\r\nA new column named `embeddings` in the dataset that we're adding the index to.\r\n\r\n## Actual results\r\n\r\nAn exception is triggered with the following message: `NameError: name 'faiss' is not defined`.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 
1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4287\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4286","id":1226758621,"node_id":"PR_kwDODunzps43W-DI","number":4286,"title":"Add Lahnda language tag","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651761260000,"updated_at":1652184604000,"closed_at":1652184158000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This language is present in [Wikimedia's WIT](https:\/\/huggingface.co\/datasets\/wikimedia\/wit_base) dataset.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4286\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4286","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4286","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4286.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4286.patch","merged_at":1652184157000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4285","id":1226374831,"node_id":"PR_kwDODunzps43VtEa","number":4285,"title":"Update LexGLUE README.md","user":{"login":"iliaschalkidis","id":1626984,"node_id":"MDQ6VXNlcjE2MjY5ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1626984?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliaschalkidis","html_url":"https:\/\/github.com\/iliaschalkidis","followers_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/followers","following_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/repos","events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651739810000,"updated_at":1651757944000,"closed_at":1651757615000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4285\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4285","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4285","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4285.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4285.patch","merged_at":1651757615000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4284","id":1226200727,"node_id":"I_kwDODunzps5JFlaX","number":4284,"title":"Issues in processing very large 
datasets","user":{"login":"sajastu","id":10419055,"node_id":"MDQ6VXNlcjEwNDE5MDU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10419055?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sajastu","html_url":"https:\/\/github.com\/sajastu","followers_url":"https:\/\/api.github.com\/users\/sajastu\/followers","following_url":"https:\/\/api.github.com\/users\/sajastu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sajastu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sajastu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sajastu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sajastu\/orgs","repos_url":"https:\/\/api.github.com\/users\/sajastu\/repos","events_url":"https:\/\/api.github.com\/users\/sajastu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sajastu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?"],"created_at":1651726869000,"updated_at":1652184923000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI'm trying to add a feature called \"subgraph\" to CNN\/DM dataset (modifications on run_summarization.py of Huggingface Transformers script) --- I'm not quite sure if I'm doing it the right way, though--- but the main problem appears when the training starts where the error ` [OSError: [Errno 12] Cannot allocate memory]` appears. I suppose this problem roots in RAM issues and how the dataset is loaded during training, but I have no clue of what I can do to fix it. Observing the dataset's cache directory, I see that it takes ~600GB of memory and that's why I believe special care is needed when loading it into the memory. \r\n\r\n\r\nHere are my modifications to `run_summarization.py` code. 
\r\n\r\n\r\n```\r\n# loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph\r\ngraph_data_train = get_graph_data('train') \r\ngraph_data_validation = get_graph_data('val')\r\n...\r\n...\r\n\r\n\r\nwith training_args.main_process_first(desc=\"train dataset map pre-processing\"):\r\n train_dataset = train_dataset.map(\r\n preprocess_function_train,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on train dataset\",\r\n )\r\n\r\n```\r\n\r\n\r\nAnd here is the modified preprocessed function:\r\n\r\n```\r\ndef preprocess_function_train(examples):\r\n inputs, targets, sub_graphs, ids = [], [], [], []\r\n for i in range(len(examples[text_column])):\r\n if examples[text_column][i] is not None and examples[summary_column][i] is not None:\r\n # if examples['doc_id'][i] in graph_data.keys():\r\n inputs.append(examples[text_column][i])\r\n targets.append(examples[summary_column][i])\r\n sub_graphs.append(graph_data_train[examples['id'][i]])\r\n ids.append(examples['id'][i])\r\n\r\n inputs = [prefix + inp for inp in inputs]\r\n model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True,\r\n sub_graphs=sub_graphs, ids=ids)\r\n\r\n # Setup the tokenizer for targets\r\n with tokenizer.as_target_tokenizer():\r\n labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True)\r\n\r\n # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\r\n # padding in the loss.\r\n if padding == \"max_length\" and data_args.ignore_pad_token_for_loss:\r\n labels[\"input_ids\"] = [\r\n [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\r\n ]\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux Ubuntu\r\n- Python version: 3.6\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4284\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4283","id":1225686988,"node_id":"PR_kwDODunzps43Tnxo","number":4283,"title":"Fix filesystem 
docstring","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651686162000,"updated_at":1651854722000,"closed_at":1651818137000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR untangles the `S3FileSystem` docstring so the [parameters](https:\/\/huggingface.co\/docs\/datasets\/master\/en\/package_reference\/main_classes#parameters) are properly displayed.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4283\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4283","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4283","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4283.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4283.patch","merged_at":1651818137000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4282","id":1225616545,"node_id":"PR_kwDODunzps43TZYL","number":4282,"title":"Don't do unnecessary list type casting to avoid replacing None values by empty 
lists","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?","Right ! Good catch, thanks, I updated the message to say \"will raise an error in a future major version\""],"created_at":1651682221000,"updated_at":1651833838000,"closed_at":1651833420000,"author_association":"MEMBER","active_lock_reason":null,"body":"In certain cases, `None` values are replaced by empty lists when casting feature types.\r\n\r\nIt happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyArrow, see the discussion in https:\/\/github.com\/huggingface\/datasets\/issues\/3676\r\n\r\nThis issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https:\/\/github.com\/huggingface\/datasets\/issues\/3676 shown, it's not the case and `None` are replaced by empty lists even if we cast to the exact same type.\r\n\r\nIn this PR I just workaround this bug in the case where no type casting is needed. In particular, I only call `pa.ListArray.from_arrays` only when necessary.\r\n\r\nI also added a warning when some `None` are effectively replaced by empty lists. 
I wanted to raise an error when that happens, but maybe we should wait for a major update to do so.\r\n\r\nThis PR fixes this particular case, which occurs in `run_qa.py` in `transformers`:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({\"a\": range(4)})\r\nds = ds.map(lambda x: {\"b\": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=[\"a\"])\r\nprint(ds.to_pandas())\r\n# before:\r\n# b\r\n# 0 [None, [0]]\r\n# 1 [[], [0]]\r\n# 2 [[], [0]]\r\n# 3 [[], [0]]\r\n#\r\n# now:\r\n# b\r\n# 0 [None, [0]]\r\n# 1 [None, [0]]\r\n# 2 [None, [0]]\r\n# 3 [None, [0]]\r\n```\r\n\r\ncc @sgugger ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4282\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4282","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4282","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4282.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4282.patch","merged_at":1651833420000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4281","id":1225556939,"node_id":"PR_kwDODunzps43TNBm","number":4281,"title":"Remove a copy-paste sentence in dataset cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The non-passing tests have nothing to do with this PR."],"created_at":1651678915000,"updated_at":1651826283000,"closed_at":1651689196000,"author_association":"MEMBER","active_lock_reason":null,"body":"Remove the following copy-paste sentence from dataset cards:\r\n```\r\nWe show detailed information for up to 5 configurations of the 
dataset.\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4281\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4281","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4281","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4281.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4281.patch","merged_at":1651689196000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4280","id":1225446844,"node_id":"PR_kwDODunzps43S2xg","number":4280,"title":"Add missing features to commonsense_qa dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. 
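\r\n\r\nFor reference, a quick way to eyeball the added fields once this is merged (just a sketch; the exact feature set is whatever the PR defines):\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"commonsense_qa\", split=\"validation\")\r\n# the features should now list \"id\" and \"question_concept\" alongside \"question\", \"choices\" and \"answerKey\"\r\nprint(ds.features)\r\n```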
","Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the dataset feature structure."],"created_at":1651674266000,"updated_at":1651847037000,"closed_at":1651846606000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix partially #4275.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4280\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4280","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4280","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4280.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4280.patch","merged_at":1651846606000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4279","id":1225300273,"node_id":"PR_kwDODunzps43SXw5","number":4279,"title":"Update minimal PyArrow version warning","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651667169000,"updated_at":1651740658000,"closed_at":1651740227000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Update the minimal PyArrow version warning (should've been part of #4250). 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4279\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4279","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4279","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4279.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4279.patch","merged_at":1651740227000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4278","id":1225122123,"node_id":"PR_kwDODunzps43RyTs","number":4278,"title":"Add missing features to openbookqa dataset for additional config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the data feature structure."],"created_at":1651656170000,"updated_at":1651842800000,"closed_at":1651842361000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix partially 
#4276.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4278\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4278","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4278","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4278.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4278.patch","merged_at":1651842361000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4277","id":1225002286,"node_id":"PR_kwDODunzps43RZV9","number":4277,"title":"Enable label alignment for token classification datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again fixed the problem \ud83d\ude43 ","> One last nit and we can merge then\r\n\r\nThanks, done!"],"created_at":1651648516000,"updated_at":1651851735000,"closed_at":1651851391000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. 
NER).\r\n\r\nExample of usage:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nner_ds = load_dataset(\"conll2003\", split=\"train\")\r\n# returns [3, 0, 7, 0, 0, 0, 7, 0, 0]\r\nner_ds[0][\"ner_tags\"]\r\n# hypothetical model mapping with O <--> B-LOC\r\nlabel2id = {\r\n \"B-LOC\": \"0\",\r\n \"B-MISC\": \"7\",\r\n \"B-ORG\": \"3\",\r\n \"B-PER\": \"1\",\r\n \"I-LOC\": \"6\",\r\n \"I-MISC\": \"8\",\r\n \"I-ORG\": \"4\",\r\n \"I-PER\": \"2\",\r\n \"O\": \"5\"\r\n }\r\nner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, \"ner_tags\")\r\n# returns [3, 5, 7, 5, 5, 5, 7, 5, 5]\r\nner_aligned_ds[0][\"ner_tags\"]\r\n```\r\n\r\nContext: we need this in AutoTrain to automatically align datasets \/ models during evaluation. cc @abhishekkrthakur ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4277\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4277","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4277","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4277.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4277.patch","merged_at":1651851391000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4276","id":1224949252,"node_id":"I_kwDODunzps5JAz4E","number":4276,"title":"OpenBookQA has missing and inconsistent field names","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ","Ok, awesome @albertvillanova How about #4275 ?","On the other hand, I am not sure if we should always preserve the original nested structure. 
I think we should also consider other factors, such as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n  ```python\r\n  question = data[\"question\"][\"stem\"]\r\n  choices = data[\"question\"][\"choices\"]\r\n  text_choices = [choice[\"text\"] for choice in choices]\r\n  label_choices = [choice[\"label\"] for choice in choices]\r\n  yield id_, {\r\n      \"id\": id_,\r\n      \"answerKey\": answerkey,\r\n      \"question\": question,\r\n      \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n  }\r\n  ```\r\n- commonsense_qa:\r\n  ```python\r\n  question = data[\"question\"]\r\n  stem = question[\"stem\"]\r\n  yield id_, {\r\n      \"answerKey\": answerkey,\r\n      \"question\": stem,\r\n      \"choices\": {\"label\": labels, \"text\": texts},\r\n  }\r\n  ```\r\n- cos_e:\r\n  ```python\r\n  \"question\": cqa[\"question\"][\"stem\"],\r\n  ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and always CONSISTENTLY use the same one.","@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA, and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading the original data rather than using HF Datasets), and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields, like in the case of #4275, are definitely bad and not even up for discussion IMHO! cc @lhoestq ","I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ","IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).","I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above, and perhaps even providing a per-dataset util method to convert the new valid format to the old one for backward compatibility. Users who relied on the old format could then update their code with either the util method for a quick fix or a slightly more elaborate change for the new format. ","I don't have a strong opinion on this, besides the fact that whatever decision we agree on should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset's original structure (which has the advantage of not forcing users to learn another structure for the same dataset),\r\n- and on the other hand performing some kind of standardization\/harmonization depending on the task (this has the advantage that, once learnt, the same structure applies to all datasets; this has been done for e.g. POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n  - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement an additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.","@albertvillanova, thanks for the detailed answer and the new perspectives. 
I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. "],"created_at":1651643512000,"updated_at":1652013223000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nOpenBookQA implementation is inconsistent with the original dataset.\r\n\r\nWe need to:\r\n\r\n1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.\r\n2. Add missing additional fields:\r\n - 'fact1': row['fact1'],\r\n - 'humanScore': row['humanScore'],\r\n - 'clarity': row['clarity'],\r\n - 'turkIdAnonymized': row['turkIdAnonymized']\r\n3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version.\r\n\r\n## Expected results\r\nThe structure and every data item in the original OpenBookQA matches our OpenBookQA version.\r\n\r\n## Actual results\r\nTBD\r\n\r\n## Environment info\r\n- `datasets` version: 2.1.0\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4276\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4275","id":1224943414,"node_id":"I_kwDODunzps5JAyc2","number":4275,"title":"CommonSenseQA has missing and inconsistent field names","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset 
bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. "],"created_at":1651642739000,"updated_at":1651664478000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nIn short, CommonSenseQA implementation is inconsistent with the original dataset.\r\n\r\nMore precisely, we need to:\r\n\r\n1. Add the dataset matching \"id\" field. The current dataset, instead, regenerates monotonically increasing id. \r\n2. The [\u201cquestion\u201d][\u201cstem\u201d] field is flattened into \"question\". We should match the original dataset and unflatten it\r\n3. Add the missing \"question_concept\" field in the question tree node\r\n4. Anything else? 
Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original\r\n\r\n## Expected results\r\nEvery data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset.\r\n\r\n## Actual results\r\nTBD\r\n\r\n## Environment info\r\n- `datasets` version: 2.1.0\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4275\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4274","id":1224740303,"node_id":"PR_kwDODunzps43Qm2w","number":4274,"title":"Add API code examples for IterableDataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651617857000,"updated_at":1651681772000,"closed_at":1651681324000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds API code examples for `IterableDataset` and 
`IterableDatasetDicts`.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4274\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4274","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4274","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4274.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4274.patch","merged_at":1651681324000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4273","id":1224681036,"node_id":"PR_kwDODunzps43QaA6","number":4273,"title":"leadboard info added for TNE","user":{"login":"yanaiela","id":8031035,"node_id":"MDQ6VXNlcjgwMzEwMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8031035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanaiela","html_url":"https:\/\/github.com\/yanaiela","followers_url":"https:\/\/api.github.com\/users\/yanaiela\/followers","following_url":"https:\/\/api.github.com\/users\/yanaiela\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanaiela\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanaiela\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanaiela\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanaiela\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanaiela\/repos","events_url":"https:\/\/api.github.com\/users\/yanaiela\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanaiela\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651613741000,"updated_at":1651757124000,"closed_at":1651756693000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4273\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4273","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4273","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4273.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4273.patch","merged_at":1651756693000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4272","id":1224635660,"node_id":"PR_kwDODunzps43QQQt","number":4272,"title":"Fix typo in logging docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. 
The second result still performs as \"Enable\".","Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n![Screen Shot 2022-05-04 at 8 38 29 AM](https:\/\/user-images.githubusercontent.com\/59462357\/166718225-6848ab91-87d1-4572-9912-40a909af6cb9.png)\r\n","Fixed now, thanks."],"created_at":1651610877000,"updated_at":1651678947000,"closed_at":1651647516000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR fixes #4271.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4272\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4272","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4272","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4272.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4272.patch","merged_at":1651647515000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4271","id":1224404403,"node_id":"I_kwDODunzps5I-u2z","number":4271,"title":"A typo in docs of datasets.disable_progress_bar","user":{"login":"jiangwy99","id":39762734,"node_id":"MDQ6VXNlcjM5NzYyNzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39762734?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jiangwy99","html_url":"https:\/\/github.com\/jiangwy99","followers_url":"https:\/\/api.github.com\/users\/jiangwy99\/followers","following_url":"https:\/\/api.github.com\/users\/jiangwy99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jiangwy99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jiangwy99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jiangwy99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jiangwy99\/orgs","repos_url":"https:\/\/api.github.com\/users\/jiangwy99\/repos","events_url":"https:\/\/api.github.com\/users\/jiangwy99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jiangwy99\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"assignees":[{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi! 
Thanks for catching and reporting the typo, a PR has been opened to fix it :)"],"created_at":1651599896000,"updated_at":1651647515000,"closed_at":1651647515000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nin the docs of V2.1.0 datasets.disable_progress_bar, we should replace \"enable\" with \"disable\".","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4271\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4270","id":1224244460,"node_id":"PR_kwDODunzps43PC5V","number":4270,"title":"Fix style in openbookqa dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651591294000,"updated_at":1651826286000,"closed_at":1651594852000,"author_association":"MEMBER","active_lock_reason":null,"body":"CI in PR:\r\n- #4259 \r\n\r\nwas green, but after merging it to master, a code quality error 
appeared.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4270\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4270","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4270","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4270.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4270.patch","merged_at":1651594852000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4269","id":1223865145,"node_id":"PR_kwDODunzps43Nzwh","number":4269,"title":"Add license and point of contact to big_patent dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651569847000,"updated_at":1651826289000,"closed_at":1651576579000,"author_association":"MEMBER","active_lock_reason":null,"body":"Update metadata of big_patent dataset with:\r\n- license\r\n- point of contact","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4269\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4269","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4269","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4269.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4269.patch","merged_at":1651576579000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4268","id":1223331964,"node_id":"I_kwDODunzps5I6pB8","number":4268,"title":"error downloading bigscience-catalogue-lm-data\/lm_en_wiktionary_filtered","user":{"login":"i-am-neo","id":102043285,"node_id":"U_kgDOBhUOlQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/102043285?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/i-am-neo","html_url":"https:\/\/github.com\/i-am-neo","followers_url":"https:\/\/api.github.com\/users\/i-am-neo\/followers","following_url":"https:\/\/api.github.com\/users\/i-am-neo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/i-am-neo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/i-am-neo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/i-am-neo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/i-am-neo\/orgs","repos_url":"https:\/\/api.github.com\/users\/i-am-neo\/repos","events_url":"https:\/\/api.github.com\/users\/i-am-neo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/i-am-neo\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https:\/\/en.wiktionary.org\/wiki\/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https:\/\/en.wikipedia.org\/wiki\/Received_Pronunciation)) [IPA](https:\/\/en.wiktionary.org\/wiki\/Wiktionary:International_Phonetic_Alphabet)([key](https:\/\/en.wiktionary.org\/wiki\/Appendix:English_pronunciation)): \/w\u025c\u02d0d\/\r\n([General American](https:\/\/en.wikipedia.org\/wiki\/General_American)) [enPR](https:\/\/en.wiktionary.org\/wiki\/Appendix:English_pronunciation): w\u00fbrd, [IPA](https:\/\/en.wiktionary.org\/wiki\/Wiktionary:International_Phonetic_Alphabet)([key](https:\/\/en.wiktionary.org\/wiki\/Appendix:English_pronunciation)): \/w\u025dd\/","Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https:\/\/huggingface.co\/datasets\/bigscience-catalogue-lm-data\/lm_en_wiktionary_filtered\/commit\/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data\/file-01.jsonl.gz.lock`, `data\/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). 
Once public, they will be accessible for all the NLP community.","Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!","All datasets are private now. \r\n\r\nRe: that bug, I think we're currently avoiding it by avoiding verifications (i.e. `ignore_verifications=True`).","Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ","Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https:\/\/github.com\/huggingface\/transformers\/issues\/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your\/HG's roadmap). Thanks!","Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)","@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https:\/\/dumps.wikimedia.org\/other\/cirrussearch\/current\/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https:\/\/dumps.wikimedia.org\/other\/cirrussearch\/current\/enwiktionary-20220425-cirrussearch-content.json.gz) file","thanks @cakiki ! I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? Never mind, got it!","thanks @patrickvonplaten. 
will do - getting my observations together."],"created_at":1651523665000,"updated_at":1651852410000,"closed_at":1651577028000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nError generated when attempting to download dataset\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"bigscience-catalogue-lm-data\/lm_en_wiktionary_filtered\")\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\n```\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"bigscience-catalogue-lm-data\/lm_en_wiktionary_filtered\")\r\n\r\n3 frames\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/info_utils.py](https:\/\/localhost:8080\/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'\/home\/leandro\/catalogue_data\/datasets\/lm_en_wiktionary_filtered\/data\/file-01.jsonl.gz', '\/home\/leandro\/catalogue_data\/datasets\/lm_en_wiktionary_filtered\/data\/file-01.jsonl.gz.lock'}\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4268\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4267","id":1223214275,"node_id":"PR_kwDODunzps43LzOR","number":4267,"title":"Replace data URL in SAMSum dataset within the same 
repository","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651516688000,"updated_at":1651826293000,"closed_at":1651518229000,"author_association":"MEMBER","active_lock_reason":null,"body":"Replace data URL with one in the same repository.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4267\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4267","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4267","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4267.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4267.patch","merged_at":1651518229000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4266","id":1223116436,"node_id":"PR_kwDODunzps43LeXK","number":4266,"title":"Add HF Speech Bench to Librispeech Dataset 
Card","user":{"login":"sanchit-gandhi","id":93869735,"node_id":"U_kgDOBZhWpw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/93869735?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sanchit-gandhi","html_url":"https:\/\/github.com\/sanchit-gandhi","followers_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sanchit-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651510771000,"updated_at":1651740440000,"closed_at":1651740009000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_7_0) and [8](https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_8_0) through someone with permissions? 
\r\n\r\ncc @patrickvonplaten: more leaderboard promotion!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4266\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4266","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4266","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4266.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4266.patch","merged_at":1651740009000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4263","id":1222723083,"node_id":"PR_kwDODunzps43KLnD","number":4263,"title":"Rename imagenet2012 -> imagenet-1k","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?","> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. 
Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nEDIT: actually not all `imagenet` tags refer to ImageNet 21k - we will need to correct some of them","_The documentation is not available anymore as the PR was closed or merged._","should we remove the repo mirror on the hub side or will you do it?"],"created_at":1651487181000,"updated_at":1651513846000,"closed_at":1651509177000,"author_association":"MEMBER","active_lock_reason":null,"body":"On the Hugging Face Hub, users refer to imagenet2012 (from #4178) as imagenet-1k in their model tags.\r\n\r\nTo correctly link models to imagenet, we should rename this dataset `imagenet-1k`.\r\n\r\nLater we can add `imagenet-21k` as a new dataset if we want.\r\n\r\nOnce this one is merged we can delete the `imagenet2012` dataset repository on the Hub.\r\n\r\nEDIT: to complete the rationale on why we should name it `imagenet-1k`:\r\nIf users specifically added the tag `imagenet-1k`, then it could be for two reasons (not sure which one is predominant), either they\r\n- wanted to make it explicit that it\u2019s not 21k -> the distinction is important for the community\r\n- or they have been following this convention from other models -> the convention implicitly exists already","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263\/reactions","total_count":4,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":1,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4263\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4263","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4263","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4263.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4263.patch","merged_at":1651509177000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4262","id":1222130749,"node_id":"PR_kwDODunzps43IOye","number":4262,"title":"Add YAML tags to Dataset Card rotten 
tomatoes","user":{"login":"mo6zes","id":10004251,"node_id":"MDQ6VXNlcjEwMDA0MjUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10004251?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mo6zes","html_url":"https:\/\/github.com\/mo6zes","followers_url":"https:\/\/api.github.com\/users\/mo6zes\/followers","following_url":"https:\/\/api.github.com\/users\/mo6zes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mo6zes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mo6zes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mo6zes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mo6zes\/orgs","repos_url":"https:\/\/api.github.com\/users\/mo6zes\/repos","events_url":"https:\/\/api.github.com\/users\/mo6zes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mo6zes\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651406348000,"updated_at":1651588053000,"closed_at":1651587635000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The dataset card for the rotten tomatoes \/ MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to eachother.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4262\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4262","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4262","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4262.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4262.patch","merged_at":1651587635000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4261","id":1221883779,"node_id":"I_kwDODunzps5I1HeD","number":4261,"title":"data leakage in `webis\/conclugen` 
dataset","user":{"login":"xflashxx","id":54585776,"node_id":"MDQ6VXNlcjU0NTg1Nzc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54585776?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xflashxx","html_url":"https:\/\/github.com\/xflashxx","followers_url":"https:\/\/api.github.com\/users\/xflashxx\/followers","following_url":"https:\/\/api.github.com\/users\/xflashxx\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xflashxx\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xflashxx\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xflashxx\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xflashxx\/orgs","repos_url":"https:\/\/api.github.com\/users\/xflashxx\/repos","events_url":"https:\/\/api.github.com\/users\/xflashxx\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xflashxx\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis 
Group: https:\/\/huggingface.co\/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated on their reply.","I'd suggest just pinging the authors here in the issue if possible?","Thanks for reporting this @xflashxx. I'll have a look and get back to you on this.","Hi @xflashxx and @albertvillanova,\r\n\r\nI have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updates, which led to \"new\" items. The length of `ids_validation` and `ids_testing` is zero.\r\n\r\nRegarding impact on scores:\r\n1. We employed automatic evaluation (on a separate set of 1000 examples) only to justify the exclusion of the smaller models for manual evaluation (due to budget constraints). I am confident the ranking still stands (unsurprisingly, the bigger models did better than those trained on the smaller splits). We also highlight this in the paper. \r\n\r\n2. The examples used for manual evaluation have no overlap with any splits (also because they do not have any ground truth as we applied the trained models on an unlabeled sample to test its practical usage). I've added these two files to the dataset repository.\r\n\r\nHope this helps!","Thanks @shahbazsyed for your fast fix.\r\n\r\nAs a side note:\r\n- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de\r\n- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link, and the email address used as your commit author does not match the email address set in your Hub account settings."],"created_at":1651340617000,"updated_at":1651557866000,"closed_at":1651557866000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nSome samples (argument-conclusion pairs) in the *training* split of the `webis\/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.\r\nFurthermore, all splits contain duplicate samples.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntraining = load_dataset(\"webis\/conclugen\", \"base\", split=\"train\")\r\nvalidation = load_dataset(\"webis\/conclugen\", \"base\", split=\"validation\")\r\ntesting = load_dataset(\"webis\/conclugen\", \"base\", split=\"test\")\r\n\r\n# collect the ids of validation and test samples that also appear in the training split\r\nids_validation = list()\r\nids_testing = list()\r\n\r\nfor train_sample in training:\r\n train_argument = train_sample[\"argument\"]\r\n train_conclusion = train_sample[\"conclusion\"]\r\n train_id = train_sample[\"id\"]\r\n \r\n # test if current sample is in validation split\r\n if train_argument in validation[\"argument\"]:\r\n for validation_sample in validation:\r\n validation_argument = validation_sample[\"argument\"]\r\n validation_conclusion = validation_sample[\"conclusion\"]\r\n validation_id = validation_sample[\"id\"]\r\n if train_argument == validation_argument and train_conclusion == validation_conclusion:\r\n ids_validation.append(validation_id)\r\n \r\n # test if current sample is in test split\r\n if train_argument in testing[\"argument\"]:\r\n for testing_sample in testing:\r\n testing_argument = testing_sample[\"argument\"]\r\n testing_conclusion = testing_sample[\"conclusion\"]\r\n testing_id = 
testing_sample[\"id\"]\r\n if train_argument == testing_argument and train_conclusion == testing_conclusion:\r\n ids_testing.append(testing_id)\r\n```\r\n\r\n## Expected results\r\nLength of both lists `ids_validation` and `ids_testing` should be zero.\r\n\r\n## Actual results\r\nLength of `ids_validation` = `2556`\r\nLength of `ids_testing` = `287`\r\n\r\nFurthermore, there seems to be duplicate samples in (at least) the *training* split, since:\r\n`print(len(set(ids_validation)))` = `950`\r\n`print(len(set(ids_testing)))` = `101`\r\n\r\nAll in all, around 7% of the samples of each the *validation* and *test* split seems to be present in the *training* split.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.18.4\r\n- Platform: macOS-12.3.1-arm64-arm-64bit\r\n- Python version: 3.9.10\r\n- PyArrow version: 7.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4261\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4260","id":1221830292,"node_id":"PR_kwDODunzps43HSfs","number":4260,"title":"Add mr_polarity movie review sentiment classification","user":{"login":"mo6zes","id":10004251,"node_id":"MDQ6VXNlcjEwMDA0MjUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10004251?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mo6zes","html_url":"https:\/\/github.com\/mo6zes","followers_url":"https:\/\/api.github.com\/users\/mo6zes\/followers","following_url":"https:\/\/api.github.com\/users\/mo6zes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mo6zes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mo6zes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mo6zes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mo6zes\/orgs","repos_url":"https:\/\/api.github.com\/users\/mo6zes\/repos","events_url":"https:\/\/api.github.com\/users\/mo6zes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mo6zes\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["whoops just found https:\/\/huggingface.co\/datasets\/rotten_tomatoes"],"created_at":1651324773000,"updated_at":1651328185000,"closed_at":1651328185000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either \"positive\" or \"negative\". 
\r\n\r\nHomepage: [https:\/\/www.cs.cornell.edu\/people\/pabo\/movie-review-data\/](https:\/\/www.cs.cornell.edu\/people\/pabo\/movie-review-data\/)\r\npaperswithcode: [https:\/\/paperswithcode.com\/dataset\/mr](https:\/\/paperswithcode.com\/dataset\/mr)\r\n\r\n- [ ] I was not able to generate dummy data, the original dataset files have \".pos\" and \".neg\" as file extensions so the auto-generator does not work. Is it fine like this or should dummy data be added?\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4260\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4260","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4260","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4260.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4260.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4259","id":1221768025,"node_id":"PR_kwDODunzps43HHGc","number":4259,"title":"Fix bug in choices labels in openbookqa dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651304499000,"updated_at":1651645891000,"closed_at":1651590861000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550.\r\n\r\nFix #3550.\r\n\r\ncc. 
@lhoestq @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4259\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4259","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4259","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4259.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4259.patch","merged_at":1651590861000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4258","id":1221637727,"node_id":"PR_kwDODunzps43Gstg","number":4258,"title":"Fix\/start token mask issue and update documentation","user":{"login":"TristanThrush","id":20826878,"node_id":"MDQ6VXNlcjIwODI2ODc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20826878?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TristanThrush","html_url":"https:\/\/github.com\/TristanThrush","followers_url":"https:\/\/api.github.com\/users\/TristanThrush\/followers","following_url":"https:\/\/api.github.com\/users\/TristanThrush\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TristanThrush\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TristanThrush\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TristanThrush\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TristanThrush\/orgs","repos_url":"https:\/\/api.github.com\/users\/TristanThrush\/repos","events_url":"https:\/\/api.github.com\/users\/TristanThrush\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TristanThrush\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. 
I updated the pull request description, and will keep this in mind for future PRs."],"created_at":1651272164000,"updated_at":1651509200000,"closed_at":1651508772000,"author_association":"MEMBER","active_lock_reason":null,"body":"This pr fixes a couple bugs:\r\n\r\n1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing high perplexity scores that were not correct\r\n2) the documentation was not updated","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4258\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4258","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4258","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4258.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4258.patch","merged_at":1651508772000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4257","id":1221393137,"node_id":"PR_kwDODunzps43GATC","number":4257,"title":"Create metric card for Mahalanobis Distance","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651257447000,"updated_at":1651503018000,"closed_at":1651502604000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"proposing a metric card to better explain how Mahalanobis distance works (last one for now 
:sweat_smile:","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4257\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4257","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4257","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4257.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4257.patch","merged_at":1651502604000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4256","id":1221379625,"node_id":"PR_kwDODunzps43F9Zw","number":4256,"title":"Create metric card for MSE","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651256482000,"updated_at":1651503342000,"closed_at":1651502927000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing a metric card for Mean Squared Error","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4256\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4256","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4256","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4256.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4256.patch","merged_at":1651502927000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4255","id":1221142899,"node_id":"PR_kwDODunzps43FHgR","number":4255,"title":"No google drive URL for pubmed_qa","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","CI is failing because some sections are missing in the dataset card, but this is unrelated to this PR - Merging !"],"created_at":1651247746000,"updated_at":1651249495000,"closed_at":1651249136000,"author_association":"MEMBER","active_lock_reason":null,"body":"I hosted the data files in https:\/\/huggingface.co\/datasets\/pubmed_qa. 
This is allowed because the data is under the MIT license.\r\n\r\ncc @stas00 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4255\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4255","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4255","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4255.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4255.patch","merged_at":1651249136000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4254","id":1220204395,"node_id":"PR_kwDODunzps43Bwnj","number":4254,"title":"Replace data URL in SAMSum dataset and support streaming","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651220503000,"updated_at":1651826296000,"closed_at":1651249569000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR replaces data URL in SAMSum dataset:\r\n- original host (arxiv.org) does not allow HTTP Range requests\r\n- we have hosted the data on the Hub (license: CC BY-NC-ND 4.0)\r\n\r\nMoreover, it implements support for streaming.\r\n\r\nFix #4146.\r\nRelated to: #4236.\r\n\r\nCC: @severo 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4254\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4254","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4254","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4254.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4254.patch","merged_at":1651249568000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4253","id":1219286408,"node_id":"PR_kwDODunzps42-c8Q","number":4253,"title":"Create metric cards for mean IOU","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651179507000,"updated_at":1651254287000,"closed_at":1651253886000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing a metric card for mIoU :rocket:\r\n\r\nsorry for spamming you with review requests, @albertvillanova ! 
:hugs: ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4253\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4253","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4253","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4253.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4253.patch","merged_at":1651253886000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4252","id":1219151100,"node_id":"PR_kwDODunzps429--I","number":4252,"title":"Creating metric card for MAE","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651172673000,"updated_at":1651251551000,"closed_at":1651251150000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Initial proposal for MAE metric card","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4252\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4252","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4252","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4252.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4252.patch","merged_at":1651251150000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4251","id":1219116354,"node_id":"PR_kwDODunzps4293dB","number":4251,"title":"Metric card for the XTREME-S dataset","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651170739000,"updated_at":1651250771000,"closed_at":1651250326000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing a metric card for the XTREME-S dataset :hugs:","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4251\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4251","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4251","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4251.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4251.patch","merged_at":1651250326000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4250","id":1219093830,"node_id":"PR_kwDODunzps429yjN","number":4250,"title":"Bump PyArrow Version to 
6","user":{"login":"dnaveenr","id":17746528,"node_id":"MDQ6VXNlcjE3NzQ2NTI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17746528?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dnaveenr","html_url":"https:\/\/github.com\/dnaveenr","followers_url":"https:\/\/api.github.com\/users\/dnaveenr\/followers","following_url":"https:\/\/api.github.com\/users\/dnaveenr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dnaveenr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dnaveenr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dnaveenr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dnaveenr\/orgs","repos_url":"https:\/\/api.github.com\/users\/dnaveenr\/repos","events_url":"https:\/\/api.github.com\/users\/dnaveenr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dnaveenr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Updated meta.yaml as well. Thanks.","I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.","> but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.\r\n\r\nI agree, not much attention has been payed to extension arrays in the latest developments of Arrow anyway.\r\n\r\nLet's not use them more that what we do right now, and try to remove them at one point"],"created_at":1651169450000,"updated_at":1651657012000,"closed_at":1651656586000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixes #4152 \r\n\r\nThis PR updates the PyArrow version to 6 in setup.py, CI job files .circleci\/config.yaml and .github\/workflows\/benchmarks.yaml files.\r\nThis will fix ArrayND error which exists in pyarrow 5.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4250\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4250","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4250","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4250.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4250.patch","merged_at":1651656586000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4249","id":1218524424,"node_id":"PR_kwDODunzps42742y","number":4249,"title":"Support streaming XGLUE 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651141643000,"updated_at":1651826301000,"closed_at":1651162083000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming XGLUE dataset.\r\n\r\nFix #4247.\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4249\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4249","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4249","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4249.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4249.patch","merged_at":1651162083000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4248","id":1218460444,"node_id":"I_kwDODunzps5IoDsc","number":4248,"title":"conll2003 dataset loads original 
data.","user":{"login":"sue991","id":26458611,"node_id":"MDQ6VXNlcjI2NDU4NjEx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26458611?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sue991","html_url":"https:\/\/github.com\/sue991","followers_url":"https:\/\/api.github.com\/users\/sue991\/followers","following_url":"https:\/\/api.github.com\/users\/sue991\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sue991\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sue991\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sue991\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sue991\/orgs","repos_url":"https:\/\/api.github.com\/users\/sue991\/repos","events_url":"https:\/\/api.github.com\/users\/sue991\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sue991\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @sue99.\r\n\r\nUnfortunately. 
I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n\r\nIn [3]: dataset[\"train\"][0]\r\nOut[3]: \r\n{'id': '0',\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.'],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0]}\r\n```\r\n\r\nJust guessing: might be the case that you are calling `load_dataset` from a working directory that contains a local folder named `conll2003` (containing the raw data files)? If that is the case, `datasets` library gives precedence to the local folder over the dataset on the Hub. "],"created_at":1651138411000,"updated_at":1658128548000,"closed_at":1658128548000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI load `conll2003` dataset to use refined data like [this](https:\/\/huggingface.co\/datasets\/conll2003\/viewer\/conll2003\/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text.\r\n\r\nIs this a bug or should I use another dataset_name like `lhoestq\/conll2003` ?\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"conll2003\")\r\n```\r\n\r\n## Expected results\r\n{\r\n \"chunk_tags\": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],\r\n \"id\": \"0\",\r\n \"ner_tags\": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n \"pos_tags\": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],\r\n \"tokens\": [\"The\", \"European\", \"Commission\", \"said\", \"on\", \"Thursday\", \"it\", \"disagreed\", \"with\", \"German\", \"advice\", \"to\", \"consumers\", \"to\", \"shun\", \"British\", \"lamb\", \"until\", \"scientists\", \"determine\", \"whether\", \"mad\", \"cow\", \"disease\", \"can\", \"be\", \"transmitted\", \"to\", \"sheep\", \".\"]\r\n}\r\n\r\n## Actual results\r\n```python\r\nprint(dataset)\r\n\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 219554\r\n })\r\n test: Dataset({\r\n features: ['text'],\r\n num_rows: 50350\r\n })\r\n validation: Dataset({\r\n features: ['text'],\r\n num_rows: 55044\r\n })\r\n})\r\n```\r\n\r\n```python\r\nfor i in range(20):\r\n print(dataset['train'][i])\r\n\r\n{'text': '-DOCSTART- -X- -X- O'}\r\n{'text': ''}\r\n{'text': 'EU NNP B-NP B-ORG'}\r\n{'text': 'rejects VBZ B-VP O'}\r\n{'text': 'German JJ B-NP B-MISC'}\r\n{'text': 'call NN I-NP O'}\r\n{'text': 'to TO B-VP O'}\r\n{'text': 'boycott VB I-VP O'}\r\n{'text': 'British JJ B-NP B-MISC'}\r\n{'text': 'lamb NN I-NP O'}\r\n{'text': '. . 
O O'}\r\n{'text': ''}\r\n{'text': 'Peter NNP B-NP B-PER'}\r\n{'text': 'Blackburn NNP I-NP I-PER'}\r\n{'text': ''}\r\n{'text': 'BRUSSELS NNP B-NP B-LOC'}\r\n{'text': '1996-08-22 CD I-NP O'}\r\n{'text': ''}\r\n{'text': 'The DT B-NP O'}\r\n{'text': 'European NNP I-NP B-ORG'}\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4248\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4247","id":1218320882,"node_id":"I_kwDODunzps5Inhny","number":4247,"title":"The data preview of XGLUE","user":{"login":"czq1999","id":49108847,"node_id":"MDQ6VXNlcjQ5MTA4ODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49108847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czq1999","html_url":"https:\/\/github.com\/czq1999","followers_url":"https:\/\/api.github.com\/users\/czq1999\/followers","following_url":"https:\/\/api.github.com\/users\/czq1999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czq1999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czq1999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czq1999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czq1999\/orgs","repos_url":"https:\/\/api.github.com\/users\/czq1999\/repos","events_url":"https:\/\/api.github.com\/users\/czq1999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czq1999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"ht
tps:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["![image](https:\/\/user-images.githubusercontent.com\/49108847\/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n","Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for the XGLUE dataset (as the error message points out): this must be refactored to support streaming. ","Fixed, thanks @albertvillanova!\r\n\r\nhttps:\/\/huggingface.co\/datasets\/xglue\r\n"],"created_at":1651131050000,"updated_at":1651220608000,"closed_at":1651162083000,"author_association":"NONE","active_lock_reason":null,"body":"It seems that something is wrong with the data preview of XGLUE","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4247\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4246","id":1218320293,"node_id":"PR_kwDODunzps427NiD","number":4246,"title":"Support to load dataset with TSV files by passing only dataset 
name","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651131015000,"updated_at":1651826308000,"closed_at":1651824847000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR implements support to load a dataset (w\/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\\t'`):\r\n```python\r\nds = load_dataset(\"dataset\/name\")\r\n```\r\n\r\nThe refactoring allows for future builder kwargs customizations based on file extension.\r\n\r\nRelated to #4238.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4246\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4246","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4246","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4246.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4246.patch","merged_at":1651824847000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4245","id":1217959400,"node_id":"PR_kwDODunzps426AUR","number":4245,"title":"Add code examples for 
DatasetDict","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651099942000,"updated_at":1651256374000,"closed_at":1651255983000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds code examples for `DatasetDict` in the API reference :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4245\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4245","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4245","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4245.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4245.patch","merged_at":1651255983000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4244","id":1217732221,"node_id":"PR_kwDODunzps425Po6","number":4244,"title":"task id 
update","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Reverted the multi-input-text-classification tag from task_categories and added it as task_ids @lhoestq ","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651084094000,"updated_at":1651661033000,"closed_at":1651660597000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"changed multi input text classification as task id instead of category","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4244\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4244","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4244","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4244.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4244.patch","merged_at":1651660597000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4243","id":1217689909,"node_id":"PR_kwDODunzps425Gkn","number":4243,"title":"WIP: Initial shades loading script and 
readme","user":{"login":"shayne-longpre","id":69018523,"node_id":"MDQ6VXNlcjY5MDE4NTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69018523?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shayne-longpre","html_url":"https:\/\/github.com\/shayne-longpre","followers_url":"https:\/\/api.github.com\/users\/shayne-longpre\/followers","following_url":"https:\/\/api.github.com\/users\/shayne-longpre\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shayne-longpre\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shayne-longpre\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shayne-longpre\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shayne-longpre\/orgs","repos_url":"https:\/\/api.github.com\/users\/shayne-longpre\/repos","events_url":"https:\/\/api.github.com\/users\/shayne-longpre\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shayne-longpre\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1651081543000,"updated_at":1657120792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4243\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4243","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4243","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4243.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4243.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4242","id":1217665960,"node_id":"PR_kwDODunzps425BYf","number":4242,"title":"Update auth when mirroring datasets on the 
hub","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651080151000,"updated_at":1651081024000,"closed_at":1651080642000,"author_association":"MEMBER","active_lock_reason":null,"body":"We don't need to use extraHeaders anymore for rate limits anymore. Anyway extraHeaders was not working with git LFS because it was passing the wrong auth to S3.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4242\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4242","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4242","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4242.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4242.patch","merged_at":1651080642000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4241","id":1217423686,"node_id":"I_kwDODunzps5IkGlG","number":4241,"title":"NonMatchingChecksumError when attempting to download 
GLUE","user":{"login":"drussellmrichie","id":9650729,"node_id":"MDQ6VXNlcjk2NTA3Mjk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9650729?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/drussellmrichie","html_url":"https:\/\/github.com\/drussellmrichie","followers_url":"https:\/\/api.github.com\/users\/drussellmrichie\/followers","following_url":"https:\/\/api.github.com\/users\/drussellmrichie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/drussellmrichie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/drussellmrichie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/drussellmrichie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/drussellmrichie\/orgs","repos_url":"https:\/\/api.github.com\/users\/drussellmrichie\/repos","events_url":"https:\/\/api.github.com\/users\/drussellmrichie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/drussellmrichie\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"glue\", \"rte\")\r\n```","This appears to work. Thank you!\n\nOn Wed, Apr 27, 2022, 1:18 PM Steven Liu ***@***.***> wrote:\n\n> Hi :)\n>\n> I think your issue may be related to the older nlp library. I was able to\n> download glue with the latest version of datasets. 
Can you try updating\n> with:\n>\n> pip install -U datasets\n>\n> Then you can download:\n>\n> from datasets import load_datasetds = load_dataset(\"glue\", \"rte\")\n>\n> \u2014\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"],"created_at":1651068861000,"updated_at":1651131927000,"closed_at":1651131927000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to download the GLUE dataset from the NLP module but get an error (see below).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport nlp\r\nnlp.__version__ # '0.2.0'\r\nnlp.load_dataset('glue', name=\"rte\", download_mode=\"force_redownload\")\r\n```\r\n\r\n## Expected results\r\nI expect the dataset to download without an error.\r\n\r\n## Actual results\r\n```\r\nINFO:nlp.load:Checking \/home\/richier\/.cache\/huggingface\/datasets\/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports.\r\nINFO:nlp.load:Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/glue\/glue.py at \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\r\nINFO:nlp.load:Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/glue\/glue.py at \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4\r\nINFO:nlp.load:Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/glue\/glue.py to \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4\/glue.py\r\nINFO:nlp.load:Found dataset infos file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/glue\/dataset_infos.json to \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4\/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/glue\/glue.py at \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4\/glue.json\r\nINFO:nlp.info:Loading Dataset Infos from \/home\/richier\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/datasets\/glue\/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4\r\nINFO:nlp.builder:Generating dataset glue (\/home\/richier\/.cache\/huggingface\/datasets\/glue\/rte\/1.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nINFO:nlp.utils.file_utils:Couldn't get ETag version for url https:\/\/firebasestorage.googleapis.com\/v0\/b\/mtl-sentence-representations.appspot.com\/o\/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb\r\nINFO:nlp.utils.file_utils:https:\/\/firebasestorage.googleapis.com\/v0\/b\/mtl-sentence-representations.appspot.com\/o\/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to \/home\/richier\/.cache\/huggingface\/datasets\/downloads\/tmpldt3n805\r\nDownloading and preparing dataset glue\/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to \/home\/richier\/.cache\/huggingface\/datasets\/glue\/rte\/1.0.0...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 73.0\/73.0 [00:00<00:00, 73.9kB\/s]\r\nINFO:nlp.utils.file_utils:storing https:\/\/firebasestorage.googleapis.com\/v0\/b\/mtl-sentence-representations.appspot.com\/o\/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at \/home\/richier\/.cache\/huggingface\/datasets\/downloads\/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64\r\nINFO:nlp.utils.file_utils:creating metadata file for \/home\/richier\/.cache\/huggingface\/datasets\/downloads\/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 nlp.load_dataset('glue', name=\"rte\", download_mode=\"force_redownload\")\r\n\r\n~\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 518 download_mode=download_mode,\r\n 519 ignore_verifications=ignore_verifications,\r\n--> 520 save_infos=save_infos,\r\n 521 )\r\n 522 \r\n\r\n~\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 418 verify_infos = not save_infos and not ignore_verifications\r\n 419 self._download_and_prepare(\r\n--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 421 )\r\n 422 # Sync info\r\n\r\n~\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 458 # Checksums verification\r\n 459 if verify_infos:\r\n--> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n 461 for split_generator in split_generators:\r\n 462 if str(split_generator.split_info.name).lower() == \"all\":\r\n\r\n~\/anaconda3\/envs\/py36_bert_ee_torch1_11\/lib\/python3.6\/site-packages\/nlp\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)\r\n 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]\r\n 35 if len(bad_urls) > 0:\r\n---> 36 raise NonMatchingChecksumError(str(bad_urls))\r\n 37 logger.info(\"All the checksums matched successfully.\")\r\n 38 \r\n\r\nNonMatchingChecksumError: 
['https:\/\/firebasestorage.googleapis.com\/v0\/b\/mtl-sentence-representations.appspot.com\/o\/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb']\r\n```\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa\r\n- Python version: 3.6.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.1.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4241\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4240","id":1217287594,"node_id":"PR_kwDODunzps423xRl","number":4240,"title":"Fix yield for crd3","user":{"login":"shanyas10","id":21066979,"node_id":"MDQ6VXNlcjIxMDY2OTc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21066979?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shanyas10","html_url":"https:\/\/github.com\/shanyas10","followers_url":"https:\/\/api.github.com\/users\/shanyas10\/followers","following_url":"https:\/\/api.github.com\/users\/shanyas10\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shanyas10\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shanyas10\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shanyas10\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shanyas10\/orgs","repos_url":"https:\/\/api.github.com\/users\/shanyas10\/repos","events_url":"https:\/\/api.github.com\/users\/shanyas10\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shanyas10\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str\r\n```\r\n\r\nDo you know what could cause this ? If I understand correctly, `turn` is supposed to be a list of dictionaries right ?","> ``` \r\n> \r\n> Do you know what could cause this ? If I understand correctly, turn is supposed to be a list of dictionaries right ?\r\n> ```\r\n\r\nThis is strange. Let me look into this. As per https:\/\/github.com\/RevanthRameshkumar\/CRD3\/blob\/master\/data\/aligned%20data\/c%3D2\/C1E001_2_0.json TURNS is a list of dictionaries. So when we iterate over `row[\"TURNS]\"` each `turn` is essentially a dictionary. 
Not sure why it's being considered a tuple here."],"created_at":1651062696000,"updated_at":1651236101000,"closed_at":1651236101000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example.\r\nModified the features accordingly:\r\n\r\n```\r\n\"turns\": [\r\n {\r\n \"names\": datasets.features.Sequence(datasets.Value(\"string\")),\r\n \"utterances\": datasets.features.Sequence(datasets.Value(\"string\")),\r\n \"number\": datasets.Value(\"int32\"),\r\n }\r\n ],\r\n }\r\n```\r\n\r\nI wasn't able to run the `datasets-cli dummy_data datasets` command. Is there a workaround for this? ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4240\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4240","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4240","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4240.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4240.patch","merged_at":1651236101000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4239","id":1217269689,"node_id":"PR_kwDODunzps423tZr","number":4239,"title":"Small fixes in ROC AUC docs","user":{"login":"wschella","id":9478856,"node_id":"MDQ6VXNlcjk0Nzg4NTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9478856?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wschella","html_url":"https:\/\/github.com\/wschella","followers_url":"https:\/\/api.github.com\/users\/wschella\/followers","following_url":"https:\/\/api.github.com\/users\/wschella\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wschella\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wschella\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wschella\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wschella\/orgs","repos_url":"https:\/\/api.github.com\/users\/wschella\/repos","events_url":"https:\/\/api.github.com\/users\/wschella\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wschella\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651061750000,"updated_at":1651498137000,"closed_at":1651497723000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The list of use cases did not render on GitHub with the prepended spacing.\r\nAdditionally, some typos were 
fixed.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4239\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4239","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4239","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4239.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4239.patch","merged_at":1651497723000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4238","id":1217168123,"node_id":"I_kwDODunzps5IjIL7","number":4238,"title":"Dataset caching policy","user":{"login":"loretoparisi","id":163333,"node_id":"MDQ6VXNlcjE2MzMzMw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/163333?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/loretoparisi","html_url":"https:\/\/github.com\/loretoparisi","followers_url":"https:\/\/api.github.com\/users\/loretoparisi\/followers","following_url":"https:\/\/api.github.com\/users\/loretoparisi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/loretoparisi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/loretoparisi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/loretoparisi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/loretoparisi\/orgs","repos_url":"https:\/\/api.github.com\/users\/loretoparisi\/repos","events_url":"https:\/\/api.github.com\/users\/loretoparisi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/loretoparisi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi\/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ","@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = 
load_dataset(\r\n \"loretoparisi\/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https:\/\/colab.research.google.com\/drive\/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n![Schermata 2022-04-27 alle 18 09 41](https:\/\/user-images.githubusercontent.com\/163333\/165563507-0be53eb6-8f61-49b0-b959-306e59281de3.png)\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l \/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-efeff8965c730a2c\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l \/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-efeff8965c730a2c\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\/csv-test.arrow\r\n!head \/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-efeff8965c730a2c\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\/dataset_info.json\r\n```\r\n","SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!"],"created_at":1651056131000,"updated_at":1651076965000,"closed_at":1651076930000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nI cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https:\/\/huggingface.co\/datasets\/loretoparisi\/tatoeba-sentences). The original file had a line with bad characters, causing the following error\r\n\r\n```\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features\/features.py](https:\/\/localhost:8080\/#) in str2int(self, values)\r\n 852 if value not in self._str2int:\r\n 853 value = str(value).strip()\r\n--> 854 output.append(self._str2int[str(value)])\r\n 855 else:\r\n 856 # No names provided, try to integerize\r\n\r\nKeyError: '\\\\N'\r\n```\r\n\r\nThe file now is cleanup up, but I still get the error. This happens even if I inspect the local cached contents, and cleanup the files locally:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_builder = load_dataset_builder(\"loretoparisi\/tatoeba-sentences\")\r\nprint(dataset_builder.cache_dir)\r\nprint(dataset_builder.info.features)\r\nprint(dataset_builder.info.splits)\r\n```\r\n\r\n```\r\nUsing custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd\r\n\/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\nNone\r\nNone\r\n```\r\n\r\nand removing files located at `\/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-*`.\r\n Is there any remote file caching policy in place? If so, is it possibile to programmatically disable it? \r\nCurrently it seems that the file `test.csv` on the repo [here](https:\/\/huggingface.co\/datasets\/loretoparisi\/tatoeba-sentences\/blob\/main\/test.csv) is cached remotely. 
In fact I download locally the file from raw link, the file is up-to-date; but If I use it within `datasets` as shown above, it gives to me always the first revision of the file, not the last.\r\n\r\nThank you.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\n\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",
\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi\/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n)\r\n# You can make this part faster with num_proc=\r\nsentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\nsentences = sentences.shuffle()\r\n```\r\n\r\n## Expected results\r\nProperly tokenize dataset file `test.csv` without issues.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n```\r\nDownloading data files: 100%\r\n2\/2 [00:16<00:00, 7.34s\/it]\r\nDownloading data: 100%\r\n391M\/391M [00:12<00:00, 36.6MB\/s]\r\nDownloading data: 100%\r\n92.4M\/92.4M [00:02<00:00, 40.0MB\/s]\r\nExtracting data files: 100%\r\n2\/2 [00:00<00:00, 47.66it\/s]\r\nDataset csv downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-efeff8965c730a2c\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.\r\n100%\r\n2\/2 [00:00<00:00, 25.94it\/s]\r\n11%\r\n942339\/8256449 [01:55<13:11, 9245.85ex\/s]\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 12 )\r\n 13 # You can make this part faster with num_proc=\r\n---> 14 sentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n 15 sentences = sentences.shuffle()\r\n\r\n10 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features\/features.py](https:\/\/localhost:8080\/#) in str2int(self, values)\r\n 852 if value not in self._str2int:\r\n 853 value = str(value).strip()\r\n--> 854 output.append(self._str2int[str(value)])\r\n 855 else:\r\n 856 # No names provided, try to integerize\r\n\r\nKeyError: '\\\\N'\r\n```\r\n\r\n## Environment info\r\n```\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n- ```\r\n\r\n\r\n```\r\n- `transformers` version: 4.18.0\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- Huggingface_hub version: 0.5.1\r\n- PyTorch version (GPU?): 1.11.0+cu113 (True)\r\n- Tensorflow version (GPU?): 2.8.0 (True)\r\n- Flax version (CPU?\/GPU?\/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n- ```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4238\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4237","id":1217121044,"node_id":"I_kwDODunzps5Ii8sU","number":4237,"title":"Common Voice 8 doesn't show datasets viewer","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting. I understand it's an error in the dataset script. 
To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation\/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10.9k\/10.9k [00:00<00:00, 10.9MB\/s]\r\nDownloading extra modules: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.98k\/2.98k [00:00<00:00, 3.36MB\/s]\r\nDownloading extra modules: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 53.1k\/53.1k [00:00<00:00, 650kB\/s]\r\nNo config specified, defaulting to: common_voice\/en\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/libs\/libmodels\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_8_0\/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3\/common_voice_8_0.py\", line 153, in _split_generators\r\n self._log_download(self.config.name, bundle_version, hf_auth_token)\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/mozilla-foundation--common_voice_8_0\/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3\/common_voice_8_0.py\", line 139, in _log_download\r\n email = HfApi().whoami(auth_token)[\"email\"]\r\nKeyError: 'email'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/libs\/libmodels\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/libs\/libmodels\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```","Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.\r\n\r\nUnfortunately I'm not able to reproduce the error.\r\n\r\nI think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_8_0\/blob\/main\/common_voice_8_0.py#L137-L139\r\n```python\r\nfrom huggingface_hub import HfApi, HfFolder\r\n\r\nif isinstance(auth_token, bool):\r\n    auth_token = HfFolder.get_token()\r\nemail = HfApi().whoami(auth_token)[\"email\"]\r\n```\r\n\r\nCould you please verify the previous code with the `auth_token` you pass to `load_dataset(..., 
use_auth_token=auth_token,...`?","OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!\r\n\r\n```python\r\n>>> from huggingface_hub import HfApi, HfFolder\r\n>>> auth_token = \"hf_app_******\"\r\n>>> t = HfApi().whoami(auth_token)\r\n>>> t\r\n{'type': 'app', 'name': 'dataset-preview-backend'}\r\n>>> t[\"email\"]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nKeyError: 'email'\r\n```\r\n\r\nNote also that the doc (https:\/\/huggingface.co\/docs\/huggingface_hub\/package_reference\/hf_api#huggingface_hub.HfApi.whoami) does not state that `whoami` should return an `email` key.\r\n\r\n@SBrandeis @julien-c: do you think the app token should have an email associated with it, like users do?","We can work around this with\r\n```python\r\nemail = HfApi().whoami(auth_token).get(\"email\", \"system@huggingface.co\")\r\n```\r\nin the common voice scripts.","Hmmm, does this mean that any person who downloads the common voice dataset will be logged as \"system@huggingface.co\"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right?","I agree with @severo: we cannot set our system email as the default, allowing anybody not authenticated to bypass the Common Voice usage policy.\r\n\r\nAdditionally, looking at the code, I think we should implement a more robust way to send the user's email to Common Voice: currently anybody can tweak the script and send somebody else's email instead.\r\n\r\nCC: @patrickvonplaten @lhoestq @SBrandeis @julien-c ","Hmm I don't agree here. \r\n\r\nAnybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script, it's trivial to tweak the code to send not the \"correct\" email but just whatever, and it would work.\r\n\r\nNote that someone only has visibility on the code after having \"signed\" the access mechanism, so I think we can expect the users to have agreed not to do anything malicious. \r\n\r\nI'm fine with either @lhoestq's solution or finding a way that forces the user to be logged in while still being able to load the data for the datasets viewer. Wdyt @lhoestq @severo @albertvillanova ?","> Additionally, looking at the code, I think we should implement a more robust way to send the user's email to Common Voice: currently anybody can tweak the script and send somebody else's email instead.\r\n\r\nYes, I agree we can forget about this @patrickvonplaten. After having had a look at the Common Voice website, I've seen they only require sending an email (no auth is in place on their side, contrary to what I had previously thought). 
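For illustration only, a check that keeps the login requirement but tolerates tokens without an email could look roughly like this (a hypothetical sketch, not the code currently in the script):\r\n```python\r\nfrom huggingface_hub import HfApi\r\n\r\ndef get_user_email(auth_token):\r\n    # whoami() raises if the token is missing or invalid,\r\n    # so callers still have to be authenticated\r\n    info = HfApi().whoami(auth_token)\r\n    if info.get(\"type\") == \"app\":\r\n        # app tokens (e.g. the dataset viewer) carry no email\r\n        return None\r\n    return info[\"email\"]\r\n```\r\n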
In any case, currently we impose stronger requirements than they do: we require the user to be logged in and to have accepted the access mechanism.\r\n\r\nCurrently, the script as it is already requires the user to be logged in:\r\n```python\r\nHfApi().whoami(auth_token)\r\n```\r\nthrows an exception if a None\/invalid auth_token is passed.\r\n\r\nOn the other hand, we should agree on the way to allow the viewer to stream the data.","The preview is back now, thanks!"],"created_at":1651053920000,"updated_at":1652185025000,"closed_at":1652185024000,"author_association":"MEMBER","active_lock_reason":null,"body":"https:\/\/huggingface.co\/datasets\/mozilla-foundation\/common_voice_8_0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4237\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4236","id":1217115691,"node_id":"PR_kwDODunzps423MOc","number":4236,"title":"Replace data URL in big_patent dataset and support streaming","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover, people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred to upload it to the \"data\" org namespace, but it is already taken (although not used): might it be possible to take it?\r\n\r\nAs an alternative (and to be consistent with previous datasets), I also uploaded the data files to our AWS bucket.\r\n\r\nWe should decide which to use (now and for future datasets) and set it here before merging. 
We should remove the data files for the non-chosen option.\r\n\r\nCC: @lhoestq @mariosasko @polinaeterna ","Would it make sense to make the dataset a community one (so, create an organization for it) and store the script and the data in a single repository? Just as it is for most of the datasets. That way we can also access the data using a relative path inside the repo (that's not the point though). The point is that to me it seems a bit more straightforward to store everything in one place. \r\n\r\nI guess the strong argument against this logic is that in this case the canonical version won't work... But maybe there is some redirecting mechanism I don't know about? :)\r\n\r\nAnyway, I'm in favor of hosting data on the Hub instead of AWS :) ","I also think storing everything in one place\/single repository is the best option.\r\n\r\n@polinaeterna Canonical datasets also support data files (see the [`red_caps` repo](https:\/\/huggingface.co\/datasets\/red_caps\/tree\/main) for instance) ","Thanks @polinaeterna and @mariosasko for your comments.\r\n\r\nYes, definitely it is much better to have everything in the same repo. \r\n\r\nI'm transferring their data files to the Hub under \"big_patent\" and deleting them from the other repo and AWS."],"created_at":1651053673000,"updated_at":1654848655000,"closed_at":1651515675000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub.\r\n\r\nMoreover, this PR makes the dataset streamable.\r\n\r\nFix #4217.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4236\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4236","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4236","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4236.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4236.patch","merged_at":1651515675000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4235","id":1216952640,"node_id":"I_kwDODunzps5IiTlA","number":4235,"title":"How to load VERY LARGE 
dataset?","user":{"login":"CaoYiqingT","id":45160643,"node_id":"MDQ6VXNlcjQ1MTYwNjQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45160643?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/CaoYiqingT","html_url":"https:\/\/github.com\/CaoYiqingT","followers_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/followers","following_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/orgs","repos_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/repos","events_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/CaoYiqingT\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The `Trainer` support `IterableDataset`, not just datasets."],"created_at":1651045813000,"updated_at":1651059857000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"### System Info\n\n```shell\nI am using transformer trainer while meeting the issue.\r\nThe trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterDataset, which loads samples of data seperately, and results in low efficiency. 
\r\nI wonder if there are any tricks like sharding in the huggingface Trainer.\r\nLooking forward to your reply.\n\n### Who can help?\n\nTrainer: @sgugger\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE\/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nNone\n\n### Expected behavior\n\nI wonder if there are any tricks like fairseq's \"Sharding very large datasets\" (https:\/\/fairseq.readthedocs.io\/en\/latest\/getting_started.html).\r\nThanks a lot!\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4235\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4234","id":1216818846,"node_id":"PR_kwDODunzps422Mwn","number":4234,"title":"Autoeval config","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Related to: https:\/\/github.com\/huggingface\/autonlp-backend\/issues\/414 and https:\/\/github.com\/huggingface\/autonlp-backend\/issues\/424","The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argument 'train-eval-index'\r\n```\r\n\r\nI think you can fix this by updating the `DatasetMetadata` class and implementing an appropriate `validate_train_eval_index()` function.\r\n\r\n@lhoestq we are working with an arbitrary set of tags for the `autoeval config`. See https:\/\/github.com\/huggingface\/autonlp-backend\/issues\/414\r\nI need to add a validator function though for the tests to pass. 
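Even something minimal like this sketch might be enough for the tests (values stay open-ended since our tag set is arbitrary; the exact signature here is an assumption):\r\n```python\r\ndef validate_train_eval_index(train_eval_index):\r\n    # minimal check: the block is optional, but if present\r\n    # it must parse as a list of configs\r\n    if train_eval_index is not None and not isinstance(train_eval_index, list):\r\n        raise TypeError(\"train-eval-index must be a list when provided\")\r\n```\r\n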
Our set is not well-defined like the others in https:\/\/github.com\/huggingface\/datasets\/tree\/master\/src\/datasets\/utils\/resources. What's a workaround for this?","On the question of validating the `train-eval-index` metadata, I think the simplest approach would be to validate that the required fields exist and not worry about their values (which are open-ended).\r\n\r\nFor me, the required fields include:\r\n\r\n* `config`\r\n* `task`\r\n* `task_id`\r\n* `splits` (train \/ validation \/ eval)\r\n* `col_mapping`\r\n* `metrics` (checking that each one has `type`, `name`) \r\n\r\nHere I'm using the spec defined in https:\/\/github.com\/huggingface\/autonlp-backend\/issues\/414 as a guide.\r\n\r\nWDYT @lhoestq ?","Makes sense! Currently the metadata type validator doesn't support subfields - let me open a PR to add it.","I ended up improving the metadata validation in this PR x)\r\n\r\nIn particular:\r\n- I added support for YAML keys with dashes instead of underscores for `train-eval-index`\r\n- I added `train-eval-index` validation with `validate_train_eval_index`. It does nothing fancy, it just checks that it is a list if it exists in the YAML, but feel free to improve it if you want\r\n\r\nLet me know if it sounds good to you! I think we can improve `validate_train_eval_index` in another PR.","Come on windows... I didn't do anything advanced...\r\n\r\nAnyway, will try to fix this when I get back home x)","> Come on windows... I didn't do anything advanced...\r\n> \r\n> Anyway, will try to fix this when I get back home x)\r\n\r\nHehe, thanks!","Thanks, @lhoestq, this is great! ","Did I just fix it for windows and now it fails on linux ? xD","> Did I just fix it for windows and now it fails on linux ? xD\r\n\r\nLooks like the Heisenberg uncertainty principle is at play here - you cannot simultaneously have unit tests passing on both Linux and Windows \ud83d\ude05 ","The worst part is that the tests pass locally both on my Windows and my Linux x)","Ok, fixed it: the issue came from Python 3.6, which doesn't return the right `__origin__` for Dict and List types.","> Alright thanks for adding the first Autoeval config ! :D\r\n\r\nWoohoo! 
Thank you so much \ud83e\udd17 ","This is cool!"],"created_at":1651037530000,"updated_at":1651843231000,"closed_at":1651774858000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Added autoeval config to imdb as pilot","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4234\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4234","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4234","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4234.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4234.patch","merged_at":1651774858000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4233","id":1216665044,"node_id":"PR_kwDODunzps421r-6","number":4233,"title":"Autoeval","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4233). 
All of your documentation changes will be reflected on that endpoint."],"created_at":1651023129000,"updated_at":1651037370000,"closed_at":1651023143000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4233\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4233","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4233","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4233.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4233.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4232","id":1216659444,"node_id":"PR_kwDODunzps421qz4","number":4232,"title":"adding new tag to tasks.json and modified for existing datasets","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","closing in favor of 
https:\/\/github.com\/huggingface\/datasets\/pull\/4244"],"created_at":1651022469000,"updated_at":1651587836000,"closed_at":1651587399000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4232\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4232","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4232","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4232.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4232.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4231","id":1216651960,"node_id":"PR_kwDODunzps421pUX","number":4231,"title":"Fix invalid url to CC-Aligned dataset","user":{"login":"juntang-zhuang","id":44451229,"node_id":"MDQ6VXNlcjQ0NDUxMjI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44451229?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/juntang-zhuang","html_url":"https:\/\/github.com\/juntang-zhuang","followers_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/followers","following_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/orgs","repos_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/repos","events_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/juntang-zhuang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651021621000,"updated_at":1652720473000,"closed_at":1652719992000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"The CC-Aligned dataset url has changed to https:\/\/data.statmt.org\/cc-aligned\/, the old address http:\/\/www.statmt.org\/cc-aligned\/ is no longer 
valid","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4231\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4231","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4231","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4231.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4231.patch","merged_at":1652719992000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4230","id":1216643661,"node_id":"I_kwDODunzps5IhIJN","number":4230,"title":"Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?","user":{"login":"beyondguo","id":37113676,"node_id":"MDQ6VXNlcjM3MTEzNjc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37113676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/beyondguo","html_url":"https:\/\/github.com\/beyondguo","followers_url":"https:\/\/api.github.com\/users\/beyondguo\/followers","following_url":"https:\/\/api.github.com\/users\/beyondguo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/beyondguo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/beyondguo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/beyondguo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/beyondguo\/orgs","repos_url":"https:\/\/api.github.com\/users\/beyondguo\/repos","events_url":"https:\/\/api.github.com\/users\/beyondguo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/beyondguo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https:\/\/data.deepai.org\/conll2003.zip\r\nAnd that URL only contains the English version.","The German data requires payment\r\n\r\nThe [original task page](https:\/\/www.clips.uantwerpen.be\/conll2003\/ner\/) states \"The German data is a collection of articles from the Frankfurter Rundschau. The named entities have been annotated by people of the University of Antwerp. Only the annotations are available here. In order to build these data sets you need access to the ECI Multilingual Text Corpus. 
It can be ordered from the Linguistic Data Consortium (2003 non-member price: US$ 35.00).\"\r\n\r\nInflation since 2003 has also affected LDC's prices, and today the dataset [LDC94T5](https:\/\/catalog.ldc.upenn.edu\/LDC94T5) is available under license for $75 a copy. The [license](https:\/\/catalog.ldc.upenn.edu\/license\/eci-slash-mci-user-agreement.pdf) includes a non-distribution condition, which is probably why the data has not turned up openly.\r\n\r\nThe ACL hold copyright of this data; I'll mail them and anyone I can find at ECI to see if they'll open this up now. After all, it worked with Microsoft 3DMM, why not here too, after 28 years? :)\r\n"],"created_at":1651020832000,"updated_at":1652358222000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/37113676\/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png)\r\n\r\nBut on huggingface datasets:\r\n![image](https:\/\/user-images.githubusercontent.com\/37113676\/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png)\r\n\r\nWhere is the German data?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4230\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4229","id":1216638968,"node_id":"PR_kwDODunzps421mjM","number":4229,"title":"new task tag","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1651020428000,"updated_at":1651020508000,"closed_at":1651020497000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"multi-input-text-classification tag for classification datasets that take more than one 
input","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4229\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4229","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4229","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4229.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4229.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4228","id":1216523043,"node_id":"PR_kwDODunzps421NKL","number":4228,"title":"new task tag","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1651010433000,"updated_at":1651020511000,"closed_at":1651020391000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"multi-input-text-classification tag for classification datasets that take more than one input","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4228\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4228","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4228","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4228.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4228.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4227","id":1216455316,"node_id":"PR_kwDODunzps420-mc","number":4227,"title":"Add f1 metric card, update docstring in py file","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1651005663000,"updated_at":1651582223000,"closed_at":1651581813000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4227\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4227","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4227","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4227.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4227.patch","merged_at":1651581813000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4226","id":1216331073,"node_id":"PR_kwDODunzps420kAv","number":4226,"title":"Add pearsonr mc, update functionality to match the original 
docs","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","thank you @lhoestq!! :hugs: "],"created_at":1650997846000,"updated_at":1651597764000,"closed_at":1651597348000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"- adds pearsonr metric card\r\n- adds ability to return p-value\r\n - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4226\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4226","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4226","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4226.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4226.patch","merged_at":1651597348000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4225","id":1216213464,"node_id":"PR_kwDODunzps420LNM","number":4225,"title":"autoeval 
config","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650991114000,"updated_at":1651020511000,"closed_at":1651010426000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"add train eval index for autoeval","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4225\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4225","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4225","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4225.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4225.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4224","id":1216209667,"node_id":"PR_kwDODunzps420KX2","number":4224,"title":"autoeval 
config","user":{"login":"nazneenrajani","id":3278583,"node_id":"MDQ6VXNlcjMyNzg1ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3278583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nazneenrajani","html_url":"https:\/\/github.com\/nazneenrajani","followers_url":"https:\/\/api.github.com\/users\/nazneenrajani\/followers","following_url":"https:\/\/api.github.com\/users\/nazneenrajani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nazneenrajani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nazneenrajani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nazneenrajani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nazneenrajani\/orgs","repos_url":"https:\/\/api.github.com\/users\/nazneenrajani\/repos","events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nazneenrajani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650990919000,"updated_at":1650991005000,"closed_at":1650991005000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"add train eval index for autoeval","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4224\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4224","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4224","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4224.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4224.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4223","id":1216107082,"node_id":"PR_kwDODunzps42z0YV","number":4223,"title":"Add Accuracy Metric 
Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650985846000,"updated_at":1651588065000,"closed_at":1651587647000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"- adds accuracy metric card\r\n- updates docstring in accuracy.py\r\n- adds .json file with metric card and docstring information","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4223\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4223","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4223","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4223.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4223.patch","merged_at":1651587647000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4222","id":1216056439,"node_id":"PR_kwDODunzps42zpcd","number":4222,"title":"Fix description links in dataset 
cards","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Non passing tests are due to other pre-existing errors in dataset cards: not related to this PR."],"created_at":1650983785000,"updated_at":1651826318000,"closed_at":1650991949000,"author_association":"MEMBER","active_lock_reason":null,"body":"I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https:\/\/huggingface.co\/datasets\/big_patent\r\n\r\nThis PR fixes all description links in dataset cards.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4222\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4222","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4222","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4222.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4222.patch","merged_at":1650991949000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4221","id":1215911182,"node_id":"I_kwDODunzps5IeVUO","number":4221,"title":"Dictionary 
Feature","user":{"login":"jordiae","id":2944532,"node_id":"MDQ6VXNlcjI5NDQ1MzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2944532?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jordiae","html_url":"https:\/\/github.com\/jordiae","followers_url":"https:\/\/api.github.com\/users\/jordiae\/followers","following_url":"https:\/\/api.github.com\/users\/jordiae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jordiae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jordiae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jordiae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jordiae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jordiae\/repos","events_url":"https:\/\/api.github.com\/users\/jordiae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jordiae\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and 
`]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n],\r\n```\r\n\r\nFeel free to re-open this issue if that does not work for your use case.","> Hi @jordiae,\r\n> \r\n> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n> \r\n> ```python\r\n> \"list_of_dict_feature\": [\r\n> {\r\n> \"key1_in_dict\": datasets.Value(\"string\"),\r\n> \"key2_in_dict\": datasets.Value(\"int32\"),\r\n> ...\r\n> }\r\n> ],\r\n> ```\r\n> \r\n> Feel free to re-open this issue if that does not work for your use case.\r\n\r\nThank you"],"created_at":1650977418000,"updated_at":1651243939000,"closed_at":1651165498000,"author_association":"NONE","active_lock_reason":null,"body":"Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?\r\n\r\nThank you in advance.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4221\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4220","id":1215225802,"node_id":"PR_kwDODunzps42w5YO","number":4220,"title":"Altered faiss installation comment","user":{"login":"vishalsrao","id":36671559,"node_id":"MDQ6VXNlcjM2NjcxNTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36671559?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vishalsrao","html_url":"https:\/\/github.com\/vishalsrao","followers_url":"https:\/\/api.github.com\/users\/vishalsrao\/followers","following_url":"https:\/\/api.github.com\/users\/vishalsrao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vishalsrao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vishalsrao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vishalsrao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vishalsrao\/orgs","repos_url":"https:\/\/api.github.com\/users\/vishalsrao\/repos","events_url":"https:\/\/api.github.com\/users\/vishalsrao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vishalsrao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi ! Can you explain why this change is needed ?","Facebook recommends installing FAISS using conda (https:\/\/github.com\/facebookresearch\/faiss\/blob\/main\/INSTALL.md). 
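Stepping back to the list-of-dict feature discussed in #4221 above: the suggested pattern can be sketched end to end. This is a minimal, self-contained example under made-up feature names ("question", "answers"); it shows that wrapping a plain dict in `[` `]` inside `Features` declares a list of dicts without going through `Sequence`:

```python
from datasets import Dataset, Features, Value

# Hypothetical schema: "answers" is a list of dicts, declared by wrapping the
# dict in [ ... ] instead of using the Sequence feature.
features = Features(
    {
        "question": Value("string"),
        "answers": [
            {
                "text": Value("string"),
                "score": Value("int32"),
            }
        ],
    }
)

ds = Dataset.from_dict(
    {
        "question": ["What is 2 + 2?"],
        "answers": [[{"text": "4", "score": 10}, {"text": "four", "score": 3}]],
    },
    features=features,
)

# Rows come back as a list of dicts (Sequence would instead flatten them
# into a dict of lists).
print(ds[0]["answers"])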
pip does not seem to have the latest version of FAISS. The latest version of faiss is 1.7.2 (https:\/\/anaconda.org\/conda-forge\/faiss), but the latest one available through pip is 1.5.3 (https:\/\/pypi.org\/project\/faiss\/). "],"created_at":1650936043000,"updated_at":1652117374000,"closed_at":1652116929000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4220\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4220","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4220","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4220.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4220.patch","merged_at":1652116929000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4219","id":1214934025,"node_id":"PR_kwDODunzps42v6rE","number":4219,"title":"Add F1 Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1650914096000,"updated_at":1651005858000,"closed_at":1651005466000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4219\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4219","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4219","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4219.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4219.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4218","id":1214748226,"node_id":"PR_kwDODunzps42vTA0","number":4218,"title":"Make code for image downloading from image urls cacheable","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650903479000,"updated_at":1650992424000,"closed_at":1650980306000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4199 
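As an aside on the FAISS packaging comment in #4220 above: once `faiss` is installed (e.g. from conda-forge, as that comment recommends), `datasets` uses it through `Dataset.add_faiss_index`. A minimal sketch with a toy dataset and made-up random 8-dimensional embeddings:

```python
import numpy as np
from datasets import Dataset

# Toy dataset with a float32 "embeddings" column; the values are random and
# purely illustrative.
rng = np.random.default_rng(0)
ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.map(lambda _: {"embeddings": rng.random(8, dtype=np.float32)})

# Build a FAISS index over the column, then query it for nearest neighbors.
ds.add_faiss_index(column="embeddings")
query = rng.random(8, dtype=np.float32)
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=1)
print(retrieved["text"])
```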
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4218\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4218","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4218","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4218.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4218.patch","merged_at":1650980306000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4217","id":1214688141,"node_id":"I_kwDODunzps5IZquN","number":4217,"title":"Big_Patent dataset broken","user":{"login":"Matthew-Larsen","id":54189843,"node_id":"MDQ6VXNlcjU0MTg5ODQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54189843?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Matthew-Larsen","html_url":"https:\/\/github.com\/Matthew-Larsen","followers_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/followers","following_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/repos","events_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Matthew-Larsen\/received_events","type":"User","site_admin":false},"labels":[{"id":4069435429,"node_id":"LA_kwDODunzps7yjqgl","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/hosted-on-google-drive","name":"hosted-on-google-drive","color":"8B51EF","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/alber
tvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https:\/\/github.com\/huggingface\/datasets\/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https:\/\/github.com\/huggingface\/datasets\/issues\/4075#issuecomment-1087362551):\r\n\r\n> PS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.\r\n\r\n","We should find out if the dataset license allows redistribution and contact the data owners to propose them to host their data on our Hub.","The data owners have agreed on hosting their data on the Hub."],"created_at":1650900705000,"updated_at":1653546583000,"closed_at":1651515675000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*big_patent*'\r\n\r\n**Link:** *[link to the dataset viewer page](https:\/\/huggingface.co\/datasets\/big_patent\/viewer\/all\/train)*\r\n\r\n*Unable to view because it says FileNotFound, also cannot download it through the python API*\r\n\r\nAm I the one who added this dataset ? 
No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4217\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4216","id":1214614029,"node_id":"PR_kwDODunzps42u1_w","number":4216,"title":"Avoid recursion error in map if example is returned as dict value","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650897632000,"updated_at":1651684806000,"closed_at":1651684372000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I noticed this bug while answering [this question](https:\/\/discuss.huggingface.co\/t\/correct-way-to-create-a-dataset-from-a-csv-file\/15686\/11?u=mariosasko). \r\n\r\nThis code replicates the bug:\r\n```python\r\nfrom datasets import Dataset\r\ndset = Dataset.from_dict({\"en\": [\"aa\", \"bb\"], \"fr\": [\"cc\", \"dd\"]})\r\ndset.map(lambda ex: {\"translation\": ex})\r\n```\r\nand this is the fix for it (before this PR):\r\n```python\r\nfrom datasets import Dataset\r\ndset = Dataset.from_dict({\"en\": [\"aa\", \"bb\"], \"fr\": [\"cc\", \"dd\"]})\r\ndset.map(lambda ex: {\"translation\": dict(ex)})\r\n```\r\n\r\nInternally, this can be fixed by merging two dicts via dict unpacking (instead of `dict.update) `in `Dataset.map`, which avoids creating recursive dictionaries.\r\n\r\nP.S. 
`{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4216\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4216","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4216","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4216.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4216.patch","merged_at":1651684372000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4215","id":1214579162,"node_id":"PR_kwDODunzps42uuhY","number":4215,"title":"Add `drop_last_batch` to `IterableDataset.map`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650896119000,"updated_at":1651593367000,"closed_at":1651592934000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Addresses this comment: 
https:\/\/github.com\/huggingface\/datasets\/pull\/3801#pullrequestreview-901736921","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4215\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4215","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4215","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4215.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4215.patch","merged_at":1651592934000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4214","id":1214572430,"node_id":"PR_kwDODunzps42utC5","number":4214,"title":"Skip checksum computation in Imagefolder by default","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650895841000,"updated_at":1651591712000,"closed_at":1651591289000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Avoids having to set `ignore_verifications=True` in `load_dataset(\"imagefolder\", ...)` to skip checksum verification and speed up loading.\r\n\r\nThe user can still pass `DownloadConfig(record_checksums=True)` to not skip this part. 
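In code, the behaviour change described in #4214 looks roughly like this. A sketch only: the `data_dir` path is hypothetical, and `record_checksums` is the `DownloadConfig` flag named in the PR body above:

```python
from datasets import DownloadConfig, load_dataset

# After this PR, checksum computation is skipped by default for imagefolder,
# so a plain call no longer needs ignore_verifications=True to be fast:
ds = load_dataset("imagefolder", data_dir="path/to/images")

# Opting back in to checksum recording, as the PR body describes:
ds_verified = load_dataset(
    "imagefolder",
    data_dir="path/to/images",
    download_config=DownloadConfig(record_checksums=True),
)
```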
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4214\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4214","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4214","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4214.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4214.patch","merged_at":1651591289000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4213","id":1214510010,"node_id":"PR_kwDODunzps42uft_","number":4213,"title":"ETT time series dataset","user":{"login":"kashif","id":8100,"node_id":"MDQ6VXNlcjgxMDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8100?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kashif","html_url":"https:\/\/github.com\/kashif","followers_url":"https:\/\/api.github.com\/users\/kashif\/followers","following_url":"https:\/\/api.github.com\/users\/kashif\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kashif\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kashif\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kashif\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kashif\/orgs","repos_url":"https:\/\/api.github.com\/users\/kashif\/repos","events_url":"https:\/\/api.github.com\/users\/kashif\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kashif\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","thank you!\r\n"],"created_at":1650893178000,"updated_at":1651753161000,"closed_at":1651752635000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Ready for review.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4213\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4213","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4213","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4213.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4213.patch","merged_at":1651752635000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4212","id":1214498582,"node_id":"PR_kwDODunzps42udRf","number":4212,"title":"[Common Voice] Make sure bytes are correctly deleted if `path` exists","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","cool that you noticed that we store unnecessary bytes again :D "],"created_at":1650892706000,"updated_at":1651013668000,"closed_at":1651013307000,"author_association":"MEMBER","active_lock_reason":null,"body":"`path` should be set to local path inside audio feature if exist so that bytes can correctly be deleted.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4212\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4212","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4212","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4212.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4212.patch","merged_at":1651013307000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4211","id":1214361837,"node_id":"I_kwDODunzps5IYbDt","number":4211,"title":"DatasetDict containing 
Datasets with different features when pushed to hub gets remapped features","user":{"login":"pietrolesci","id":61748653,"node_id":"MDQ6VXNlcjYxNzQ4NjUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/61748653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pietrolesci","html_url":"https:\/\/github.com\/pietrolesci","followers_url":"https:\/\/api.github.com\/users\/pietrolesci\/followers","following_url":"https:\/\/api.github.com\/users\/pietrolesci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pietrolesci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pietrolesci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pietrolesci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pietrolesci\/orgs","repos_url":"https:\/\/api.github.com\/users\/pietrolesci\/repos","events_url":"https:\/\/api.github.com\/users\/pietrolesci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pietrolesci\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. 
Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nHowever, for the moment `push_to_hub` does not support specifying different configurations. IMHO, we should implement this.","Hi @albertvillanova,\r\n\r\nThanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality allowed me to iterate rather quickly on dataset curation.\r\n\r\nAgain, thanks for your time @albertvillanova!\r\n\r\nBest,\r\nPietro","Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update\/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDict.data` vs. `UserDict.data`. This makes me think we should rename the `data` attribute of `DatasetDict`\/`Dataset` for easier dict subclassing (would also simplify https:\/\/github.com\/huggingface\/datasets\/pull\/3997) and to follow good Python practices. Another option is to have a custom `UserDict` class in `py_utils`, but it can be hard to keep this class consistent with the built-in `UserDict`. \r\n\r\n@albertvillanova @lhoestq wdyt?","I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train\/test) and have the same features.\r\n\r\nNote that later you will be able to push datasets with different features as different dataset **configurations** (similarly to the [GLUE subsets](https:\/\/huggingface.co\/datasets\/glue) for example). We will work on this soon"],"created_at":1650885774000,"updated_at":1653059730000,"closed_at":1653059730000,"author_association":"NONE","active_lock_reason":null,"body":"Hi there,\r\n\r\nI am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. 
Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.\r\n\r\nDataset and code to reproduce available [here](https:\/\/huggingface.co\/datasets\/pietrolesci\/robust_nli).\r\n\r\nIn short:\r\n\r\nI have 3 feature mapping\r\n```python\r\nTri_features = Features(\r\n {\r\n \"idx\": Value(dtype=\"int64\"),\r\n \"premise\": Value(dtype=\"string\"),\r\n \"hypothesis\": Value(dtype=\"string\"),\r\n \"label\": ClassLabel(num_classes=3, names=[\"entailment\", \"neutral\", \"contradiction\"]),\r\n }\r\n)\r\n\r\nEnt_features = Features(\r\n {\r\n \"idx\": Value(dtype=\"int64\"),\r\n \"premise\": Value(dtype=\"string\"),\r\n \"hypothesis\": Value(dtype=\"string\"),\r\n \"label\": ClassLabel(num_classes=2, names=[\"non-entailment\", \"entailment\"]),\r\n }\r\n)\r\n\r\nCon_features = Features(\r\n {\r\n \"idx\": Value(dtype=\"int64\"),\r\n \"premise\": Value(dtype=\"string\"),\r\n \"hypothesis\": Value(dtype=\"string\"),\r\n \"label\": ClassLabel(num_classes=2, names=[\"non-contradiction\", \"contradiction\"]),\r\n }\r\n)\r\n```\r\n\r\nThen I create different datasets\r\n\r\n```python\r\ndataset_splits = {}\r\n\r\nfor split in df[\"split\"].unique():\r\n print(split)\r\n df_split = df.loc[df[\"split\"] == split].copy()\r\n \r\n if split in Tri_dataset:\r\n df_split[\"label\"] = df_split[\"label\"].map({\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2})\r\n ds = Dataset.from_pandas(df_split, features=Tri_features)\r\n \r\n elif split in Ent_bin_dataset:\r\n df_split[\"label\"] = df_split[\"label\"].map({\"non-entailment\": 0, \"entailment\": 1})\r\n ds = Dataset.from_pandas(df_split, features=Ent_features)\r\n \r\n elif split in Con_bin_dataset:\r\n df_split[\"label\"] = df_split[\"label\"].map({\"non-contradiction\": 0, \"contradiction\": 1})\r\n ds = Dataset.from_pandas(df_split, features=Con_features)\r\n\r\n else:\r\n print(\"ERROR:\", split)\r\n dataset_splits[split] = ds\r\ndatasets = DatasetDict(dataset_splits)\r\n```\r\n\r\nI then push to hub\r\n\r\n```python\r\ndatasets.push_to_hub(\"pietrolesci\/robust_nli\", token=\"\")\r\n```\r\n\r\nFinally, I load it from the hub\r\n\r\n```python\r\ndatasets_loaded_from_hub = load_dataset(\"pietrolesci\/robust_nli\")\r\n```\r\n\r\nAnd I get that\r\n\r\n```python\r\ndatasets[\"LI_TS\"].features != datasets_loaded_from_hub[\"LI_TS\"].features\r\n```\r\n\r\nsince \r\n\r\n```python\r\n\"label\": ClassLabel(num_classes=2, names=[\"non-contradiction\", \"contradiction\"])\r\n```\r\n\r\ngets remapped to \r\n\r\n```python\r\n \"label\": ClassLabel(num_classes=3, names=[\"entailment\", \"neutral\", \"contradiction\"])\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4211\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
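The situation in #4211 can be reproduced in a few lines. A minimal sketch (split names and label sets are made up), plus the kind of pre-push features check discussed in the thread:

```python
from datasets import ClassLabel, Dataset, DatasetDict, Features, Value

ent = Dataset.from_dict(
    {"text": ["a"], "label": [1]},
    features=Features({"text": Value("string"),
                       "label": ClassLabel(names=["non-entailment", "entailment"])}),
)
con = Dataset.from_dict(
    {"text": ["b"], "label": [1]},
    features=Features({"text": Value("string"),
                       "label": ClassLabel(names=["non-contradiction", "contradiction"])}),
)
dd = DatasetDict({"ent": ent, "con": con})

# Locally, the per-split features are preserved:
print(dd["ent"].features["label"].names)  # ['non-entailment', 'entailment']
print(dd["con"].features["label"].names)  # ['non-contradiction', 'contradiction']

# A defensive check before push_to_hub, since splits are expected to share one
# features dict (different schemas belong in different configurations):
first = next(iter(dd.values())).features
if any(ds.features != first for ds in dd.values()):
    print("Splits have diverging features; push_to_hub would remap them.")
```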
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4210","id":1214089130,"node_id":"I_kwDODunzps5IXYeq","number":4210,"title":"TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'","user":{"login":"loretoparisi","id":163333,"node_id":"MDQ6VXNlcjE2MzMzMw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/163333?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/loretoparisi","html_url":"https:\/\/github.com\/loretoparisi","followers_url":"https:\/\/api.github.com\/users\/loretoparisi\/followers","following_url":"https:\/\/api.github.com\/users\/loretoparisi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/loretoparisi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/loretoparisi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/loretoparisi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/loretoparisi\/orgs","repos_url":"https:\/\/api.github.com\/users\/loretoparisi\/repos","events_url":"https:\/\/api.github.com\/users\/loretoparisi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/loretoparisi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': 
Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi\/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n)\r\n# You can make this part faster with num_proc=\r\nsentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n```\r\n\r\n@lhoestq IIRC, I suggested adding `cast_to_storage` to `ClassLabel` + `table_cast` to the packaged loaders if the `ClassLabel`\/`Image`\/`Audio` type is present in `features` to avoid this kind of error, but your concern was speed. IMO shouldn't be a problem if we do `table_cast` only when these features are present.","I agree packaged loaders should support `ClassLabel` feature without throwing an error.","@albertvillanova @mariosasko thank you, with that change now I get\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 11 )\r\n 12 # You can make this part faster with num_proc=\r\n---> 13 sentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n 14 sentences = sentences.shuffle()\r\n\r\n8 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in validate_function_output(processed_inputs, indices)\r\n 2193 if processed_inputs is not None and not isinstance(processed_inputs, (Mapping, pa.Table)):\r\n 2194 raise TypeError(\r\n-> 2195 f\"Provided `function` which is applied to all elements of table returns a variable of type {type(processed_inputs)}. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\"\r\n 2196 )\r\n 2197 elif isinstance(indices, list) and isinstance(processed_inputs, Mapping):\r\n\r\nTypeError: Provided `function` which is applied to all elements of table returns a variable of type . Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\r\n```\r\n\r\nthe error is raised by [this](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_dataset.py#L2221)\r\n\r\n```\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in validate_function_output(processed_inputs, indices)\r\n```","@mariosasko changed it like\r\n\r\n```python\r\nsentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n```\r\n\r\nto avoid the above errorr.","Any update on this? 
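Pulling the corrected pattern from this thread together: the function passed to `.map()` must return a dict, so the converted label is wrapped in `{"label": ...}`. A compact sketch, where the three-name label list stands in for the full 400+ language codes above and `train.csv` is a hypothetical tab-separated `label<TAB>text` file:

```python
from datasets import ClassLabel, Features, Value, load_dataset

features = Features({"label": ClassLabel(names=["eng", "deu", "fra"]),
                     "text": Value("string")})

sentences = load_dataset(
    "csv",
    data_files={"train": "train.csv"},
    delimiter="\t",
    column_names=["label", "text"],
)

# Cast the string labels to class ids after loading, since the CSV loader does
# not cast to ClassLabel directly; map must return a dict, hence {"label": ...}.
sentences = sentences.map(
    lambda ex: {"label": features["label"].str2int(ex["label"])
                if ex["label"] is not None else None},
    features=features,
)
```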
Is this correct ?\r\n> @mariosasko changed it like\r\n> \r\n> ```python\r\n> sentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n> ```\r\n> \r\n> to avoid the above error.\r\n\r\n"],"created_at":1650871722000,"updated_at":1653999391000,"closed_at":1653999391000,"author_association":"NONE","active_lock_reason":null,"body":"### System Info\r\n\r\n```shell\r\n- `transformers` version: 4.18.0\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- Huggingface_hub version: 0.5.1\r\n- PyTorch version (GPU?): 1.10.0+cu111 (True)\r\n- Tensorflow version (GPU?): 2.8.0 (True)\r\n- Flax version (CPU?\/GPU?\/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@LysandreJik \r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE\/SQuAD, ...)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\n\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\
"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\"loretoparisi\/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n features = features\r\n)\r\n```\r\n\r\nERROR:\r\n```\r\nClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 
'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None)\r\nValue(dtype='string', id=None)\r\nUsing custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39\r\nDownloading and preparing dataset csv\/loretoparisi--tatoeba-sentences to \/root\/.cache\/huggingface\/datasets\/csv\/loretoparisi--tatoeba-sentences-7b2c5e991f398f39\/0.0.0\/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...\r\nDownloading data files: 100%\r\n2\/2 [00:18<00:00, 8.06s\/it]\r\nDownloading data: 100%\r\n391M\/391M [00:13<00:00, 35.3MB\/s]\r\nDownloading data: 100%\r\n92.4M\/92.4M [00:02<00:00, 36.5MB\/s]\r\nFailed to read file '\/root\/.cache\/huggingface\/datasets\/downloads\/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error : invalid literal for int() with base 10: 'cmn'\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pandas\/_libs\/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()\r\n\r\nTypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nValueError Traceback (most recent call last)\r\n15 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pandas\/_libs\/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()\r\n\r\nValueError: invalid literal for int() with base 10: 'cmn'\r\n```\r\n\r\nwhile loading without `features` it loads without errors\r\n\r\n```\r\nsentences = load_dataset(\"loretoparisi\/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text']\r\n )\r\n```\r\n\r\nbut the `label` col seems to be wrong (without the `ClassLabel` object):\r\n\r\n```\r\nsentences['train'].features\r\n{'label': Value(dtype='string', id=None),\r\n 'text': Value(dtype='string', id=None)}\r\n```\r\n\r\nThe dataset was https:\/\/huggingface.co\/datasets\/loretoparisi\/tatoeba-sentences\r\n\r\n\r\nDataset format is:\r\n\r\n```\r\nces\tNechci v\u011bd\u011bt, co je tam uvnit\u0159.\r\nces\tKdo o tom chce sly\u0161et?\r\ndeu\tTom sagte, er f\u00fchle sich nicht wohl.\r\nber\tMel-iyi-d anida-t tura ?\r\nhun\tGondom lesz r\u00e1 r\u00f6gt\u00f6n.\r\nber\tMel-iyi-d anida-tt tura ?\r\ndeu\tIch will dich nicht reden h\u00f6ren.\r\n```\r\n\r\n### Expected behavior\r\n\r\n```shell\r\ncorrectly load train and test 
files.\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4210\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4208","id":1213716426,"node_id":"PR_kwDODunzps42r7bW","number":4208,"title":"Add CMU MoCap Dataset","user":{"login":"dnaveenr","id":17746528,"node_id":"MDQ6VXNlcjE3NzQ2NTI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17746528?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dnaveenr","html_url":"https:\/\/github.com\/dnaveenr","followers_url":"https:\/\/api.github.com\/users\/dnaveenr\/followers","following_url":"https:\/\/api.github.com\/users\/dnaveenr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dnaveenr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dnaveenr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dnaveenr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dnaveenr\/orgs","repos_url":"https:\/\/api.github.com\/users\/dnaveenr\/repos","events_url":"https:\/\/api.github.com\/users\/dnaveenr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dnaveenr\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4208). All of your documentation changes will be reflected on that endpoint.","- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf\/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple download links, can we combine and host these links on the Hub since the dataset is free to use ?","\"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\nCan we combine and host these links on the Hub since the dataset is free to use ?","> \"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? 
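For reference, the `str2int` workaround quoted at the top of the thread for issue 4210 can be spelled out end to end. This is a minimal sketch assuming the same `train.csv`/`test.csv` layout as the report; the three-name class list is a placeholder for the full 403-name list:

```python
from datasets import load_dataset, Features, Value, ClassLabel

class_names = ["cmn", "deu", "rus"]  # placeholder; use the full 403-name list here
features = Features({"label": ClassLabel(names=class_names), "text": Value("string")})

# Load WITHOUT `features`, so the CSV reader keeps the labels as plain strings
# instead of trying to cast "cmn" etc. to int64 (the error shown above).
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
)

# Then convert each string label to its integer id and attach the ClassLabel feature.
sentences = sentences.map(
    lambda ex: {"label": features["label"].str2int(ex["label"]) if ex["label"] is not None else None},
    features=features,
)
```

Loading without `features` avoids the int64 cast entirely; the `.map(..., features=features)` call is what finally gives the `label` column its `ClassLabel` type.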
Any other way to go about this ?\r\n\r\nWe store downloaded data under `~\/.cache\/huggingface\/datasets\/downloads` (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".","> We store downloaded data under ~\/.cache\/huggingface\/datasets\/downloads (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".\r\n\r\nYes, the filesystem won't be cluttered, but the problem is that processing the dataset becomes cumbersome. For example, the c3d format has 5 part-downloads, so the folders will be as follows: \r\n```\r\n['~\/.cache\/huggingface\/datasets\/downloads\/extracted\/0e6bf028f490bf18c23ce572d1437c4ef32a74f630e33c26a806250d35cfcdd1', '~\/.cache\/huggingface\/datasets\/downloads\/extracted\/1b44fc5c7a6e031c904545422d449fd964f8ee795b9d1dcb0b6a76d03b50ebe6', '~\/.cache\/huggingface\/datasets\/downloads\/extracted\/137595188e96187c24ce1aa5c78200c7f78816fbd9d6c62354c01b3e6ec550c7', '~\/.cache\/huggingface\/datasets\/downloads\/extracted\/6c0c893e435f36fd79aa0f199f58fe16f01985f039644a7cb094a8c43a15ffd4', '~\/.cache\/huggingface\/datasets\/downloads\/extracted\/45e4703354cbc975e6add66f1b17b716c882b56f44575b033c5926aa5fcfb17f']\r\n```\r\nEach of these folders has a given set of subjects, so we'll need to write extra code to fetch data from each of these folders, and the mpg format has 12 part-downloads, which will lead to 12 folders each holding a certain set of subjects, so it is cumbersome to process them.","I have added all the changes that were suggested. We just need to handle the multi-part download for the c3d and mpg formats. The easiest way would be to have just one zip for these formats.","But we can handle this with a simple mapping that stores the id ranges (for each config), no? And an actual file path is not important during processing.","I have added code to handle the c3d and mpg formats as well. The data for the mpg format seems incomplete as it contains only 53 rows. I have added a note regarding this in the Data Splits section.","The real data test and the dummy_data test work fine. There were a few missing files that were causing issues; I have fixed that now.\r\n","- Reduced the dummy_data size.\r\n- Added sample dataset preprocessing code, it is not complete though.\r\n- Added all changes suggested.\r\n\r\nLet me know if anything else is required. Thank you. :)"],"created_at":1650821468000,"updated_at":1657120792000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Resolves #3457 \r\n\r\nDataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https:\/\/github.com\/huggingface\/datasets\/issues\/3457)\r\nThis PR adds the CMU MoCap Dataset.\r\n\r\nThe authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get the categories, subcategories and description information. Some of the subjects do not have a category\/subcategory\/description either. I am using a subject-to-categories, subcategories and description map (metadata file).\r\n\r\nCurrently the loading of the dataset works for the \"asf\/amc\" and \"avi\" formats since they have a single download link. But \"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths; is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\nAny suggestions\/inputs on this would be helpful. 
Thank you.\r\n\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4208\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4208","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4208","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4208.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4208.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4207","id":1213604615,"node_id":"PR_kwDODunzps42rmbK","number":4207,"title":"[Minor edit] Fix typo in class name","user":{"login":"cakiki","id":3664563,"node_id":"MDQ6VXNlcjM2NjQ1NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3664563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cakiki","html_url":"https:\/\/github.com\/cakiki","followers_url":"https:\/\/api.github.com\/users\/cakiki\/followers","following_url":"https:\/\/api.github.com\/users\/cakiki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cakiki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cakiki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cakiki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cakiki\/orgs","repos_url":"https:\/\/api.github.com\/users\/cakiki\/repos","events_url":"https:\/\/api.github.com\/users\/cakiki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cakiki\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650793777000,"updated_at":1651756667000,"closed_at":1651756667000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4207\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4207","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4207","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4207.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4207.patch","merged_at":1651756667000},"is_pull_request":true} 
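For the multi-part "c3d"/"mpg" archives discussed in the CMU MoCap PR above, the suggested "simple mapping that stores the id ranges" could look roughly like the sketch below. The URLs and id ranges are illustrative placeholders, not the dataset's real values:

```python
# Hypothetical part archives for the "c3d" config.
_C3D_URLS = [
    "https://example.org/allasfamc-c3d-part1.zip",
    "https://example.org/allasfamc-c3d-part2.zip",
]
# Hypothetical id ranges: subjects 1-60 live in part 1, subjects 61-144 in part 2.
_C3D_ID_RANGES = [range(1, 61), range(61, 145)]


def resolve_subject_dirs(extracted_dirs):
    """Map each subject id to the extracted folder of the archive that contains it."""
    return {
        subject_id: extracted_dir
        for extracted_dir, id_range in zip(extracted_dirs, _C3D_ID_RANGES)
        for subject_id in id_range
    }

# Inside _split_generators one would then do something like:
#   extracted_dirs = dl_manager.download_and_extract(_C3D_URLS)  # returns one path per URL
#   subject_to_dir = resolve_subject_dirs(extracted_dirs)
# and look up subject_to_dir[subject_id] instead of assuming a single folder.
```

With such a table, the actual extraction paths never need to be merged into one folder; each subject is simply read from whichever extracted directory holds it.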
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4206","id":1212715581,"node_id":"PR_kwDODunzps42pJQW","number":4206,"title":"Add Nerval Metric","user":{"login":"mdadda","id":49372461,"node_id":"MDQ6VXNlcjQ5MzcyNDYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49372461?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mdadda","html_url":"https:\/\/github.com\/mdadda","followers_url":"https:\/\/api.github.com\/users\/mdadda\/followers","following_url":"https:\/\/api.github.com\/users\/mdadda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mdadda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mdadda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mdadda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mdadda\/orgs","repos_url":"https:\/\/api.github.com\/users\/mdadda\/repos","events_url":"https:\/\/api.github.com\/users\/mdadda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mdadda\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650656700000,"updated_at":1657120792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"This PR adds readme.md and ner_val.py to metrics.\r\nNerval is a python package that helps evaluate NER models. 
It creates classification report and confusion matrix at entity level.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4206\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4206","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4206","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4206.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4206.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4205","id":1212466138,"node_id":"PR_kwDODunzps42oVFE","number":4205,"title":"Fix `convert_file_size_to_int` for kilobits and megabits","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650639381000,"updated_at":1651591722000,"closed_at":1651591308000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Minor change to fully align this function with the recent change in Transformers (https:\/\/github.com\/huggingface\/transformers\/pull\/16891) 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4205\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4205","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4205","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4205.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4205.patch","merged_at":1651591308000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4204","id":1212431764,"node_id":"PR_kwDODunzps42oN0j","number":4204,"title":"Add Recall Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","This looks good to me! "],"created_at":1650637466000,"updated_at":1651584203000,"closed_at":1651583784000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"What this PR mainly does:\r\n- add metric card for recall metric\r\n- update docs in recall python file\r\n\r\nNote: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. 
I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4204\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4204","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4204","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4204.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4204.patch","merged_at":1651583784000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4203","id":1212431067,"node_id":"PR_kwDODunzps42oNrS","number":4203,"title":"Add Precision Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650637428000,"updated_at":1651587820000,"closed_at":1651587406000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"What this PR mainly does:\r\n- add metric card for precision metric\r\n- update docs in precision python file\r\n\r\nNote: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. 
I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4203\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4203","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4203","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4203.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4203.patch","merged_at":1651587405000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4202","id":1212326288,"node_id":"PR_kwDODunzps42n278","number":4202,"title":"Fix some type annotation in doc","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650632011000,"updated_at":1650639780000,"closed_at":1650639403000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4202\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4202","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4202","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4202.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4202.patch","merged_at":1650639403000},"is_pull_request":true} 
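The metric card workflow described in the two PRs above (compile the card fields into a .json file, then generate the formatted card with a script) could be sketched as follows. The field names and template are assumptions, since the actual schema is not shown here:

```python
import json

# Hypothetical card template; the real metric cards have more sections.
CARD_TEMPLATE = """# Metric Card for {name}

## Metric Description
{description}

## How to Use
{how_to_use}

## Limitations and Bias
{limitations}

## Citation
{citation}
"""


def render_metric_card(json_path: str, out_path: str) -> None:
    """Read the card fields from a .json file and write a formatted markdown card."""
    with open(json_path) as f:
        card = json.load(f)
    with open(out_path, "w") as f:
        f.write(CARD_TEMPLATE.format(**card))

# Usage (hypothetical paths): render_metric_card("recall.json", "metrics/recall/README.md")
```

Keeping the fields in JSON makes it easy to regenerate both the README and the docstrings in the .py file from a single source of truth, which is the benefit the PR author alludes to.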
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4201","id":1212086420,"node_id":"PR_kwDODunzps42nIRm","number":4201,"title":"Update GH template for dataset viewer issues","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","You can see rendering at: https:\/\/github.com\/huggingface\/datasets\/blob\/6b48fedbdafe12a42c7b6edcecc32820af1a4822\/.github\/ISSUE_TEMPLATE\/dataset-viewer.yml"],"created_at":1650620084000,"updated_at":1651826323000,"closed_at":1650962755000,"author_association":"MEMBER","active_lock_reason":null,"body":"Update template to use new issue forms instead.\r\n\r\nWith this PR we can check if this new feature is useful for us.\r\n\r\nOnce validated, we can update the other templates.\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4201\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4201","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4201","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4201.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4201.patch","merged_at":1650962755000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4200","id":1211980110,"node_id":"PR_kwDODunzps42mz0w","number":4200,"title":"Add to docs how to load from local script","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650614905000,"updated_at":1651826365000,"closed_at":1650692845000,"author_association":"MEMBER","active_lock_reason":null,"body":"This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). 
Although this is an infrequent use case, there might be some users interested in it.\r\n\r\nRelated to #4192\r\n\r\nCC: @stevhliu ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4200\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4200","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4200","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4200.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4200.patch","merged_at":1650692844000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4199","id":1211953308,"node_id":"I_kwDODunzps5IPPCc","number":4199,"title":"Cache miss during reload for datasets using image fetch utilities through map ","user":{"login":"apsdehal","id":3616806,"node_id":"MDQ6VXNlcjM2MTY4MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apsdehal","html_url":"https:\/\/github.com\/apsdehal","followers_url":"https:\/\/api.github.com\/users\/apsdehal\/followers","following_url":"https:\/\/api.github.com\/users\/apsdehal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apsdehal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apsdehal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apsdehal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apsdehal\/orgs","repos_url":"https:\/\/api.github.com\/users\/apsdehal\/repos","events_url":"https:\/\/api.github.com\/users\/apsdehal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apsdehal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https:\/\/huggingface.co\/docs\/datasets\/about_cache","Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?","Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? 
If they already exist, that is also great; please point me to those. I saw that `http_get` does exist.","You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https:\/\/github.com\/huggingface\/datasets\/pull\/4100#issuecomment-1097994003.","Makes sense. But I think as the number of image datasets grows, more people will copy-paste the original code from the docs as-is while we make fixes to it later. I think we do need a central place for these, to avoid that confusion as well as to give easier access to image datasets. Should we restart that discussion, possibly on Slack?"],"created_at":1650613628000,"updated_at":1650992432000,"closed_at":1650980306000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nIt looks like datasets resulting from a `.map` operation miss the cache when you reload the script, and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But when you exit the interpreter and reload it, the downloading starts from scratch.\r\n\r\n## Steps to reproduce the bug\r\n\r\nUsing the example provided in the `red_caps` dataset.\r\n```python\r\nfrom concurrent.futures import ThreadPoolExecutor\r\nfrom functools import partial\r\nimport io\r\nimport os\r\nimport re\r\nimport urllib\r\n\r\nimport PIL.Image\r\n\r\nimport datasets\r\nfrom datasets import load_dataset\r\nfrom datasets.utils.file_utils import get_datasets_user_agent\r\n\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n\r\n\r\ndef fetch_images(batch, num_threads, timeout=None, retries=0):\r\n fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)\r\n with ThreadPoolExecutor(max_workers=num_threads) as executor:\r\n batch[\"image\"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch[\"image_url\"]))\r\n return batch\r\n\r\n\r\ndef process_image_urls(batch):\r\n processed_batch_image_urls = []\r\n for image_url in batch[\"image_url\"]:\r\n processed_example_image_urls = []\r\n image_url_splits = re.findall(r\"http\\S+\", image_url)\r\n for image_url_split in image_url_splits:\r\n if \"imgur\" in image_url_split and \",\" in image_url_split:\r\n for image_url_part in image_url_split.split(\",\"):\r\n if not image_url_part:\r\n continue\r\n image_url_part = image_url_part.strip()\r\n root, ext = os.path.splitext(image_url_part)\r\n if not root.startswith(\"http\"):\r\n root = \"http:\/\/i.imgur.com\/\" + root\r\n root = root.split(\"#\")[0]\r\n if not ext:\r\n ext = \".jpg\"\r\n ext = re.split(r\"[?%]\", ext)[0]\r\n image_url_part = root + ext\r\n processed_example_image_urls.append(image_url_part)\r\n else:\r\n processed_example_image_urls.append(image_url_split)\r\n processed_batch_image_urls.append(processed_example_image_urls)\r\n batch[\"image_url\"] = processed_batch_image_urls\r\n return batch\r\n\r\n\r\ndset = load_dataset(\"red_caps\", \"jellyfish\")\r\ndset = dset.map(process_image_urls, batched=True, num_proc=4)\r\nfeatures = dset[\"train\"].features.copy()\r\nfeatures[\"image\"] = 
datasets.Sequence(datasets.Image())\r\nnum_threads = 5\r\ndset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={\"num_threads\": num_threads})\r\n```\r\n\r\nRun this in an interpreter or as a script twice and see that the cache is missed the second time.\r\n\r\n## Expected results\r\nOn reload there should not be any cache miss\r\n\r\n## Actual results\r\nEvery time the script is run, the cache is missed and the dataset is built from scratch.\r\n\r\n## Environment info\r\n- `datasets` version: 2.1.1.dev0\r\n- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10\r\n- Python version: 3.8.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4199\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4198","id":1211456559,"node_id":"I_kwDODunzps5INVwv","number":4198,"title":"There is no dataset","user":{"login":"wilfoderek","id":1625647,"node_id":"MDQ6VXNlcjE2MjU2NDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1625647?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wilfoderek","html_url":"https:\/\/github.com\/wilfoderek","followers_url":"https:\/\/api.github.com\/users\/wilfoderek\/followers","following_url":"https:\/\/api.github.com\/users\/wilfoderek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wilfoderek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wilfoderek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wilfoderek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wilfoderek\/orgs","repos_url":"https:\/\/api.github.com\/users\/wilfoderek\/repos","events_url":"https:\/\/api.github.com\/users\/wilfoderek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wilfoderek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650568766000,"updated_at":1651577345000,"closed_at":1650607945000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*name of the dataset*'\r\n\r\n**Link:** *link to the dataset viewer page*\r\n\r\n*short description of the issue*\r\n\r\nAm I the one who added this dataset ? 
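To see why the `USER_AGENT` fix in issue 4199 above works, note that `.map` caching is keyed on a hash (fingerprint) of the mapped function. A small sketch, assuming `datasets.fingerprint.Hasher` (the hasher the linked caching docs point to) is used to compare hashes across interpreter sessions:

```python
from datasets.fingerprint import Hasher
from datasets.utils.file_utils import get_datasets_user_agent

# Resolved once at module level: the constant's value is baked into the hash.
USER_AGENT = get_datasets_user_agent()

def fetch_with_constant(url):
    return {"user-agent": USER_AGENT}

def fetch_with_call(url):
    # Re-resolved inside the closure on every call.
    return {"user-agent": get_datasets_user_agent()}

# Print these in two separate interpreter sessions. If a function's hash changes
# between sessions, .map() cannot find the cached result and recomputes.
print(Hasher.hash(fetch_with_constant))
print(Hasher.hash(fetch_with_call))
```

If the second hash differs across sessions while the first stays stable, that pinpoints the non-deterministic object the maintainer's comment mentions.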
Yes-No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4198\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4197","id":1211342558,"node_id":"PR_kwDODunzps42kyXD","number":4197,"title":"Add remove_columns=True","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Any reason why we can't just do `[inputs.copy()]` in this line so that in-place operations no longer have effects:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/bf432011ff9155a5bc16c03956bc63e514baf80d\/src\/datasets\/arrow_dataset.py#L2232.\r\n\r\n(in the `batched` case, we can also copy the inputs' values (list objects) to ignore in-place modifications to the inputs' columns)\r\n\r\nI think `remove_columns=True` has no meaning, so I'm not a fan of this change.","@mariosasko copy does have a cost associated with it ... plus you'll have to consider `deepcopy`. Imagine columns that are lists of lists of lists of lists .... Though I have to agree that `remove_columns=True` doesn't make sense (but, IMO, neither does it in its current use-case as it should refer to `input_columns`) ","Okay closing this PR for the following reasons:\r\n - `remove_columns=True` was expected to keep the `.update`-like operator for `.map`. I initially thought it would be a good way to ignore function side effects and only keep the output of that function (cf. 
PR description).\r\n - expected `remove_columns=True` is a bad API according to @mariosasko and introduces unnecessary changes for little gain (strictly equivalent to `remove_columns=dset.column_names`)"],"created_at":1650562093000,"updated_at":1650639101000,"closed_at":1650638730000,"author_association":"MEMBER","active_lock_reason":null,"body":"This should fix all the issues we have with in-place operations in mapping functions. This is crucial in cases where we do some weird things like:\r\n```\r\ndef apply(batch):\r\n batch_size = len(batch[\"id\"])\r\n batch[\"text\"] = [\"potato\" for _ in range(batch_size)]\r\n return {}\r\n\r\n# Columns are: {\"id\": int}\r\ndset.map(apply, batched=True, remove_columns=\"text\") # crashes because `text` is not in the original columns\r\ndset.map(apply, batched=True) # the mapped dataset has a `text` column\r\n```\r\n\r\nIn this PR we suggest having `remove_columns=True` so that we ignore the input completely and just use the output to generate the mapped dataset. This means that in-place operations won't have any effects anymore.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4197\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4197","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4197","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4197.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4197.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4196","id":1211271261,"node_id":"I_kwDODunzps5IMohd","number":4196,"title":"Embed image and audio files in 
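As the closing comment of the `remove_columns=True` PR (#4197) notes, the proposal is strictly equivalent to passing all current column names. A minimal sketch of that accepted spelling, with a toy dataset:

```python
from datasets import Dataset

dset = Dataset.from_dict({"id": [0, 1, 2]})

def apply(batch):
    batch_size = len(batch["id"])
    batch["text"] = ["potato" for _ in range(batch_size)]  # in-place side effect on the input batch
    return {"text": batch["text"]}

# Drop every input column so only the function's output defines the result;
# in-place modifications to `batch` then no longer leak extra columns through.
mapped = dset.map(apply, batched=True, remove_columns=dset.column_names)
print(mapped.column_names)  # ['text']
```

This achieves the "ignore the input completely" behavior the PR wanted without any new flag.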
`save_to_disk`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650558318000,"updated_at":1650558318000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/datasets\/pull\/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. \r\n\r\nAdding `embed_external_files` and set it to True by default to save_to_disk would be kind of a breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice:\r\n\r\n- the resulting dataset is self contained, in case you want to delete your cache for example or share it with someone else\r\n- users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset\r\n- consistency with push_to_hub\r\n\r\nThis can be implemented at the same time as sharding for `save_to_disk` for efficiency, and reuse the helpers from `push_to_hub` to embed the external files.\r\n\r\ncc @mariosasko ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196\/reactions","total_count":5,"+1":5,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4196\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4194","id":1210958602,"node_id":"PR_kwDODunzps42jjD3","number":4194,"title":"Support lists of multi-dimensional numpy 
arrays","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650543746000,"updated_at":1652368594000,"closed_at":1652368120000,"author_association":"MEMBER","active_lock_reason":null,"body":"Fix #4191.\r\n\r\nCC: @SaulLu ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4194\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4194","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4194","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4194.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4194.patch","merged_at":1652368120000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4193","id":1210734701,"node_id":"PR_kwDODunzps42izQG","number":4193,"title":"Document save_to_disk and push_to_hub on images and audio 
files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Good catch, I updated the docstrings"],"created_at":1650531876000,"updated_at":1650621355000,"closed_at":1650620971000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/datasets\/pull\/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4193\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4193","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4193","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4193.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4193.patch","merged_at":1650620971000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4192","id":1210692554,"node_id":"I_kwDODunzps5IKbPK","number":4192,"title":"load_dataset can't load local dataset,Unable to find 
...","user":{"login":"ahf876828330","id":33253979,"node_id":"MDQ6VXNlcjMzMjUzOTc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33253979?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ahf876828330","html_url":"https:\/\/github.com\/ahf876828330","followers_url":"https:\/\/api.github.com\/users\/ahf876828330\/followers","following_url":"https:\/\/api.github.com\/users\/ahf876828330\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ahf876828330\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ahf876828330\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ahf876828330\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ahf876828330\/orgs","repos_url":"https:\/\/api.github.com\/users\/ahf876828330\/repos","events_url":"https:\/\/api.github.com\/users\/ahf876828330\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ahf876828330\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?","Hi @ahf876828330, \r\n\r\nAs @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.\r\n\r\nIn your case, as the dataset script is local, you should better point to your local loading script:\r\n```python\r\ndataset = load_dataset(\"dataset\/opus_books.py\")\r\n```\r\n\r\nPlease, feel free to re-open this issue if the previous code snippet does not work for you.","> Hi! :)\r\n> \r\n> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?\r\n\r\nYes\uff0cyou are right!So if I have a metadata dataset local,How can I turn it to a dataset that can be used by the load_dataset() function\uff1fAre there some examples?","The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https:\/\/huggingface.co\/docs\/datasets\/master\/en\/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:\r\n\r\n1. Download the actual dataset. \r\n2. 
Once the dataset is downloaded, `load_dataset` will load it for you."],"created_at":1650529738000,"updated_at":1650905517000,"closed_at":1650613193000,"author_association":"NONE","active_lock_reason":null,"body":"\r\nTraceback (most recent call last):\r\n File \"\/home\/gs603\/ahf\/pretrained\/model.py\", line 48, in \r\n dataset = load_dataset(\"json\",data_files=\"dataset\/dataset_infos.json\")\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 1675, in load_dataset\r\n **config_kwargs,\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 1496, in load_dataset_builder\r\n data_files=data_files,\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 1155, in dataset_module_factory\r\n download_mode=download_mode,\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 800, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/data_files.py\", line 582, in from_local_or_remote\r\n if not isinstance(patterns_for_key, DataFilesList)\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/data_files.py\", line 544, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/data_files.py\", line 194, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"\/home\/gs603\/miniconda3\/envs\/coderepair\/lib\/python3.7\/site-packages\/datasets\/data_files.py\", line 144, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '\/home\/gs603\/ahf\/pretrained\/dataset\/dataset_infos.json' at \/home\/gs603\/ahf\/pretrained\r\n\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/33253979\/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png)\r\n![image](https:\/\/user-images.githubusercontent.com\/33253979\/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png)\r\n\r\nthe code is in the model.py,why I can't use the load_dataset function to load my local dataset?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4192\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4191","id":1210028090,"node_id":"I_kwDODunzps5IH5A6","number":4191,"title":"feat: create an `Array3D` column 
from a list of arrays of dimension 2","user":{"login":"SaulLu","id":55560583,"node_id":"MDQ6VXNlcjU1NTYwNTgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55560583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SaulLu","html_url":"https:\/\/github.com\/SaulLu","followers_url":"https:\/\/api.github.com\/users\/SaulLu\/followers","following_url":"https:\/\/api.github.com\/users\/SaulLu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SaulLu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SaulLu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SaulLu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SaulLu\/orgs","repos_url":"https:\/\/api.github.com\/users\/SaulLu\/repos","events_url":"https:\/\/api.github.com\/users\/SaulLu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SaulLu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it 
is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `data_map` are arrays of dimension 2\r\n - the outer list in `prepare_dataset_2D` should not be taken into account in the dimension counting, as it is used because in `map` you pass `batched=True`\r\n\r\nNote that for the 3D alternatives you mention:\r\n- In `prepare_dataset_3D_ter`, you create an `Array3D` from arrays of dimension 3:\r\n - the array `data_map[index][np.newaxis, :, :]` has dimension 3\r\n - the outer list in `prepare_dataset_3D_ter` is the one used by `batched=True`\r\n- In `prepare_dataset_3D_bis`, you create an `Array3D` from a list of list of lists:\r\n - the value of `data_map[index].tolist()` is a list of lists\r\n - it is enclosed by another list `[data_map[index].tolist()]`, thus giving a list of list of lists\r\n - the outer list is the one used by `batched=True`\r\n\r\nTherefore, if I understand correctly, your request would be to be able to create an `Array3D` from a list of an array of dimension 2:\r\n- In `prepare_dataset_3D`, `data_map[index]` is an array of dimension 2\r\n- it is enclosed by a list `[data_map[index]]`, thus giving a list of an array of dimension 2\r\n- the outer list is the one used by `batched=True`\r\n\r\nPlease, feel free to tell me if I did not understand you correctly.","Hi @albertvillanova ,\r\n\r\nIndeed my message was confusing and you guessed right :smile: : I think it would be interesting to be able to create an Array3D from a list of an array of dimension 2. \r\n\r\nFor the 2D case I should have given the following as a \"similar\" example:\r\n```python\r\n\r\ndata_map_1D = {\r\n 1: np.array([0.2, 0.4]),\r\n 2: np.array([0.1, 0.4]),\r\n}\r\n\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [[data_map_1D[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(1, 2), dtype=\"float32\")})\r\n)\r\n```"],"created_at":1650477872000,"updated_at":1652368120000,"closed_at":1652368120000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nIt is possible to create an `Array2D` column from a list of arrays of dimension 1. 
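For instance, a minimal sketch of that 2D case (the values and names here are toy ones, for illustration only):\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, features\r\n\r\n# one example whose cell is a list of two 1-D arrays -> an Array2D of shape (2, 3)\r\nds_2d_sketch = Dataset.from_dict(\r\n {\"pixel_values\": [[np.zeros(3), np.ones(3)]]},\r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(2, 3), dtype=\"float32\")}),\r\n)\r\n```\r\n\r\n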
Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1.\r\n\r\nTo illustrate my proposal, let's take the following toy dataset:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, features\r\n\r\ndata_map = {\r\n 1: np.array([[0.2, 0,4],[0.19, 0,3]]),\r\n 2: np.array([[0.1, 0,4],[0.19, 0,3]]),\r\n}\r\n\r\ndef create_toy_ds():\r\n my_dict = {\"id\":[1, 2]}\r\n return Dataset.from_dict(my_dict)\r\n\r\nds = create_toy_ds()\r\n```\r\n\r\nThe following 2D processing works without any errors raised:\r\n```python\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [data_map[index] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(2, 3), dtype=\"float32\")})\r\n)\r\n```\r\n\r\nThe following 3D processing doesn't work:\r\n```python\r\ndef prepare_dataset_3D(batch):\r\n batch[\"pixel_values\"] = [[data_map[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_3D = ds.map(\r\n prepare_dataset_3D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array3D(shape=(1, 2, 3), dtype=\"float32\")})\r\n)\r\n```\r\nThe error raised is:\r\n```\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n[](https:\/\/localhost:8080\/#) in ()\r\n 3 batched=True,\r\n 4 remove_columns=ds.column_names,\r\n----> 5 features=features.Features({\"pixel_values\": features.Array3D(shape=(1, 2, 3), dtype=\"float32\")})\r\n 6 )\r\n\r\n12 frames\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1971 new_fingerprint=new_fingerprint,\r\n 1972 disable_tqdm=disable_tqdm,\r\n-> 1973 desc=desc,\r\n 1974 )\r\n 1975 else:\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in wrapper(*args, **kwargs)\r\n 518 self: \"Dataset\" = kwargs.pop(\"self\")\r\n 519 # apply actual function\r\n--> 520 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 521 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 522 for dataset in datasets:\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in wrapper(*args, **kwargs)\r\n 485 }\r\n 486 # apply actual function\r\n--> 487 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 488 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 489 # re-apply format to the output\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/fingerprint.py](https:\/\/localhost:8080\/#) in wrapper(*args, **kwargs)\r\n 456 # Call actual function\r\n 457 \r\n--> 458 out = func(self, *args, **kwargs)\r\n 459 \r\n 460 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py](https:\/\/localhost:8080\/#) in _map_single(self, function, 
with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)\r\n 2354 writer.write_table(batch)\r\n 2355 else:\r\n-> 2356 writer.write_batch(batch)\r\n 2357 if update_data and writer is not None:\r\n 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py](https:\/\/localhost:8080\/#) in write_batch(self, batch_examples, writer_batch_size)\r\n 505 col_try_type = try_features[col] if try_features is not None and col in try_features else None\r\n 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)\r\n--> 507 arrays.append(pa.array(typed_sequence))\r\n 508 inferred_features[col] = typed_sequence.get_inferred_type()\r\n 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n[\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py](https:\/\/localhost:8080\/#) in __arrow_array__(self, type)\r\n 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type)\r\n 176 else:\r\n--> 177 storage = pa.array(data, pa_type.storage_dtype)\r\n 178 return pa.ExtensionArray.from_storage(pa_type, storage)\r\n 179 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Can only convert 1-dimensional array values\r\n```\r\n\r\n**Describe the solution you'd like**\r\nNo error in the second scenario and an identical result to the following snippets.\r\n\r\n**Describe alternatives you've considered**\r\nThere are other alternatives that work such as:\r\n```python\r\n\r\ndef prepare_dataset_3D_bis(batch):\r\n batch[\"pixel_values\"] = [[data_map[index].tolist()] for index in batch[\"id\"]]\r\n return batch\r\n\r\nds_3D_bis = ds.map(\r\n prepare_dataset_3D_bis, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array3D(shape=(1, 2, 3), dtype=\"float32\")})\r\n)\r\n```\r\nor\r\n```python\r\ndef prepare_dataset_3D_ter(batch):\r\n batch[\"pixel_values\"] = [data_map[index][np.newaxis, :, :] for index in batch[\"id\"]]\r\n return batch\r\n\r\nds_3D_ter = ds.map(\r\n prepare_dataset_3D_ter, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array3D(shape=(1, 2, 3), dtype=\"float32\")})\r\n)\r\n```\r\nBut both solutions require the user to be aware that `data_map[index]` is an `np.array` type.\r\n\r\ncc @lhoestq as we discuss this offline :smile: 
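\r\n\r\nFor completeness, a quick sketch to check that the two workarounds above produce the identical result requested (`Array3D` cells come back as nested lists by default):\r\n```python\r\nimport numpy as np\r\n\r\nassert np.array_equal(\r\n np.array(ds_3D_bis[0][\"pixel_values\"]),\r\n np.array(ds_3D_ter[0][\"pixel_values\"]),\r\n)\r\n```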
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4191\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4190","id":1209901677,"node_id":"PR_kwDODunzps42gK3y","number":4190,"title":"Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650470881000,"updated_at":1650635905000,"closed_at":1650635520000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https:\/\/github.com\/huggingface\/transformers\/blob\/ff06b177917384137af2d9585697d2d76c40cdfc\/src\/transformers\/modeling_utils.py#L1350).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4190\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4190","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4190","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4190.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4190.patch","merged_at":1650635520000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4189","id":1209881351,"node_id":"PR_kwDODunzps42gGv5","number":4189,"title":"Document how to use FAISS index for special operations","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650469916000,"updated_at":1651826590000,"closed_at":1651826152000,"author_association":"MEMBER","active_lock_reason":null,"body":"Document how to use FAISS index for special operations, by accessing the index itself.\r\n\r\nClose #4029.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4189\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4189","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4189","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4189.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4189.patch","merged_at":1651826152000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4188","id":1209740957,"node_id":"PR_kwDODunzps42fpMv","number":4188,"title":"Support streaming cnn_dailymail 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?"],"created_at":1650463476000,"updated_at":1652276346000,"closed_at":1650469969000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming cnn_dailymail dataset.\r\n\r\nFix #3969.\r\n\r\nCC: @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4188\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4188","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4188","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4188.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4188.patch","merged_at":1650469969000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4187","id":1209721532,"node_id":"PR_kwDODunzps42flGp","number":4187,"title":"Don't duplicate data when encoding audio or 
image","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for audio\" - why would that be?\r\n\r\nThe `path` would always point to some location in the `cache_dir`? I think this can be problematic. I would have expected that after I did `dataset.save_to_disk(...)` that I can remove the cache dir. But maybe just because I'm not familiar with HF. Or maybe the docs can be improved to clarify this.\r\n","We could always load every data file into `bytes` and save it this way the audio as bytes in `arrow` format, but the problem then would be that it makes the `file` column useless, *i.e.* people cannot inspect the audio file locally anymore or else they would need to first save bytes as a file which is not evident. This either breaks backwards compatibility or forces the user to stored 2x the required size locally. There was a longer discussion here: https:\/\/github.com\/huggingface\/datasets\/issues\/3663\r\n\r\nIt's a good argument though that `dataset.save_to_disk(...)` should save everything that is needed to the disk and should be independent of other folders, but I do think the arguments of #3663 to not break backwards compatibility and to allow people to inspect the downloaded audio files locally are a bit more important here. \r\n\r\nBut maybe, we could add a flag, `save_files_as_bytes` or `make_independent`, `make_self_contained` or a better name to `save_to_disk(...)` and `push_to_hub(...)` that would allow to make the resulting folder completely independent. ","What do you think @mariosasko @lhoestq @polinaeterna @anton-l ?\r\n","For context: you can either store the path to local images or audio files, or the bytes of those files.\r\n\r\nIf your images and audio files are local files, then the arrow file from `save_to_disk` will store paths to these files.\r\nIf you want to include the bytes or your images or audio files instead, you must `read()` those files first.\r\nThis can be done by storing the \"bytes\" instead of the \"path\" of the images or audio files.\r\n\r\nOn the other hand, the resulting Parquet files from `push_to_hub` are self-contained, so that anyone can reload the dataset from the Hub. 
If your dataset contains image or audio data, the Parquet files will store the bytes of your images or audio files.\r\n\r\nFor now I just updated the documentation: https:\/\/github.com\/huggingface\/datasets\/pull\/4193. Maybe we can also embed the image and audio bytes in `save_to_disk` when we implement sharding, so that it can be done as efficiently as `push_to_hub`.\r\n\r\nAnyway, merging this one :)"],"created_at":1650462637000,"updated_at":1650532620000,"closed_at":1650532247000,"author_association":"MEMBER","active_lock_reason":null,"body":"Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`.\r\n\r\nThis PR discards the `bytes` when the audio or image file exists locally.\r\n\r\nIn particular it's common for audio dataset builders to provide both the bytes and the local path in order to work for both streaming (using the bytes) and non-streaming mode (using a local file - which is often required for audio).\r\n\r\ncc @patrickvonplaten ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4187\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4187","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4187","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4187.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4187.patch","merged_at":1650532247000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4186","id":1209463599,"node_id":"PR_kwDODunzps42evF5","number":4186,"title":"Fix outdated docstring about default dataset config","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available 
anymore as the PR was closed or merged._"],"created_at":1650449091000,"updated_at":1650632084000,"closed_at":1650631711000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4186\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4186","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4186","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4186.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4186.patch","merged_at":1650631711000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4185","id":1209429743,"node_id":"I_kwDODunzps5IFm7v","number":4185,"title":"Librispeech documentation, clarification on format","user":{"login":"albertz","id":59132,"node_id":"MDQ6VXNlcjU5MTMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59132?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertz","html_url":"https:\/\/github.com\/albertz","followers_url":"https:\/\/api.github.com\/users\/albertz\/followers","following_url":"https:\/\/api.github.com\/users\/albertz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertz\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertz\/repos","events_url":"https:\/\/api.github.com\/users\/albertz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["(@patrickvonplaten )","Also cc @lhoestq here","The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https:\/\/github.com\/huggingface\/datasets\/pull\/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https:\/\/huggingface.co\/docs\/datasets\/audio_process#audio-datasets\r\n\r\n","So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n","Hey, \r\n\r\nSorry yeah I was just about to look into this! 
We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to byte strings, which were then decoded on the fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https:\/\/github.com\/huggingface\/datasets\/pull\/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of conversion function for audio datasets cc @lhoestq ? ","> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of conversion function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n","A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https:\/\/github.com\/huggingface\/datasets\/pull\/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`\/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are Arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n","`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I\/O when pushing\/downloading data from the Hugging Face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download."],"created_at":1650447355000,"updated_at":1650538853000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/cd3ce34ab1604118351e1978d26402de57188901\/datasets\/librispeech_asr\/librispeech_asr.py#L53\r\n\r\n> Note that in order to limit the required storage for preparing this dataset, the audio\r\n> is stored in the .flac format and is not converted to a float32 array. 
To convert, the audio\r\n> file to a float32 array, please make use of the `.map()` function as follows:\r\n> \r\n> ```python\r\n> import soundfile as sf\r\n> def map_to_array(batch):\r\n> speech_array, _ = sf.read(batch[\"file\"])\r\n> batch[\"speech\"] = speech_array\r\n> return batch\r\n> dataset = dataset.map(map_to_array, remove_columns=[\"file\"])\r\n> ```\r\n\r\nIs this still true?\r\n\r\nIn my case, `ds[\"train.100\"]` returns:\r\n```\r\nDataset({\r\n features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],\r\n num_rows: 28539\r\n})\r\n```\r\nand taking the first instance yields:\r\n```\r\n{'file': '374-180298-0000.flac',\r\n 'audio': {'path': '374-180298-0000.flac',\r\n 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,\r\n -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),\r\n 'sampling_rate': 16000},\r\n 'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED',\r\n 'speaker_id': 374,\r\n 'chapter_id': 180298,\r\n 'id': '374-180298-0000'}\r\n```\r\n\r\nThe `audio` `array` seems to be already decoded. So such convert\/decode code as mentioned in the doc is wrong?\r\n\r\nBut I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk?\r\n\r\nNote that I also used `datasets.load_dataset(\"librispeech_asr\", \"clean\").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk?\r\n\r\nA small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4185\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4184","id":1208592669,"node_id":"PR_kwDODunzps42cB2j","number":4184,"title":"[Librispeech] Add 'all' 
config","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fix https:\/\/github.com\/huggingface\/datasets\/issues\/4179","_The documentation is not available anymore as the PR was closed or merged._","Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do sth like:\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```\r\n?\r\n","> Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n> \r\n> And to get the subsets, I do sth like:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"librispeech_asr\")\r\n> train_ds = ds[\"train\"]\r\n> dev_clean_ds = ds[\"dev-clean\"]\r\n> dev_other_ds = ds[\"dev-other\"]\r\n> test_clean_ds = ds[\"test-clean\"]\r\n> test_other_ds = ds[\"test-other\"]\r\n> ```\r\n> \r\n> ?\r\n\r\nYou could do:\r\n\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\", \"all\") # <- note that we have to pass a config\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```","So, `load_dataset(\"librispeech_asr\")` is not possible, it must be `load_dataset(\"librispeech_asr\", \"all\")`?\r\n\r\nWhy is that?\r\n\r\nThe docs say:\r\n```\r\nname: `str` name, optional configuration for the dataset that affects the data generated on disk. Different\r\n `builder_config`s will have their own subdirectories and versions.\r\n If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n```\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/cd3ce34ab1604118351e1978d26402de57188901\/src\/datasets\/builder.py#L228\r\n\r\nOr maybe you could just define `DEFAULT_CONFIG_NAME`?\r\n","> If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n\r\nOh crap this is outdated documentation. 
No it doesn't take the first config by default.\r\n\r\nEDIT: opened a PR to fix this: https:\/\/github.com\/huggingface\/datasets\/pull\/4186","> No it doesn't take the first config by default.\r\n\r\nBut defining `DEFAULT_CONFIG_NAME` would work?\r\n\r\nSo should we define `DEFAULT_CONFIG_NAME = \"all\"` here as well? I think this is a reasonable default config.\r\n\r\nDon't most datasets have some default config?\r\n","> But defining DEFAULT_CONFIG_NAME would work?\r\n>\r\n> So should we define DEFAULT_CONFIG_NAME = \"all\" here as well? I think this is a reasonable default config.\r\n\r\nYes that would work, and I also find it reasonable to do it :)\r\n\r\n> Don't most datasets have some default config?\r\n\r\nMost datasets only have one configuration, so the single configuration is the default one. Then other datasets gave several configurations, and whether they have a default one is decided case-by-case.\r\n\r\ne.g. `glue` is a benchmark and doesn't have a default task, one must choose which task of `glue` they want to use explicitely.","Thanks a lot for the feedback! \r\n\r\nUsing `\"all\"` now as the default config. I changed the layout a bit so that there is not a single \"train\", but instead we have multiple \"train.clean.100\", \"train.clean.360\", \"train.other.500\". This way we don't even need to do filtering and it's also cleaner IMO.\r\n\r\n@albertz - you should now be able to do the following:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\") # <- run this once to download, prepare dataset and cache everything\r\n\r\n# The following operations will be very fast since all the downloading and processing is already cached\r\ntrain_1 = load_dataset(\"librispeech_asr\", split=\"train.clean.100\")\r\nprint(train_1)\r\ntrain_2 = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360\")\r\nprint(train_2)\r\ntrain_full = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360+train.other.500\")\r\nprint(train_full)\r\ndev_clean_ds = load_dataset(\"librispeech_asr\", split=\"validation.clean\")\r\nprint(dev_clean_ds)\r\ndev_other_ds = load_dataset(\"librispeech_asr\", split=\"validation.other\")\r\nprint(dev_other_ds)\r\ntest_clean_ds = load_dataset(\"librispeech_asr\", split=\"test.clean\")\r\nprint(test_clean_ds)\r\ntest_other_ds = load_dataset(\"librispeech_asr\", split=\"test.other\")\r\nprint(test_other_ds)\r\n```\r\n\r\n\r\n","Think this way we have the best of both worlds. Also @lhoestq, I think we could highlight better in the docs that it's possible to combine different splits. We do this actually quite a lot for speech. For Common Voice many people include \"validation\" in the training if the data is too small, e.g.: https:\/\/github.com\/huggingface\/transformers\/blob\/ff06b177917384137af2d9585697d2d76c40cdfc\/examples\/pytorch\/speech-recognition\/run_speech_recognition_ctc.py#L147\r\n\r\nShould we maybe add a short section to the loading tutorial here: https:\/\/huggingface.co\/docs\/datasets\/v2.1.0\/en\/loading#hugging-face-hub ? (Happy to do it)","Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n\r\nNote in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. 
(See here: https:\/\/github.com\/rwth-i6\/i6_core\/pull\/253)\r\n\r\nSo with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain = ds[\"train\"]\r\n```\r\nOr with your latest proposal, it would look like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain_ds = datasets.concatenate_datasets(\r\n [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n```\r\nright?\r\n","> Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n> \r\n> Note in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: [rwth-i6\/i6_core#253](https:\/\/github.com\/rwth-i6\/i6_core\/pull\/253))\r\n> \r\n> So with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train = ds[\"train\"]\r\n> ```\r\n> \r\n> Or with your latest proposal, it would look like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train_ds = datasets.concatenate_datasets(\r\n> [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n> ```\r\n> \r\n> right?\r\n\r\nI see the use case! The only advantage of calling `load_dataset` multiple times is that one can easily \"merge\" splits with `\"+\"`, but yeah you can do the exact same with `concatenate`.\r\n\r\n@lhoestq what do you think is the best approach with `load_from_disk`? \r\n\r\n@albertz, you could also define the `cache_dir` when doing `load_dataset(...)`, which will then put all the relevant `arrow` files in the cache dir that you defined, e.g.:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\", cache_dir=\"\/easy\/to\/access\/directory\")\r\n```","@albertz, I took a read through https:\/\/github.com\/rwth-i6\/i6_core\/pull\/253 . \r\n\r\nI think the best would be the following:\r\n\r\n1. 
Do `ds = load_dataset(..., cache_dir=\"\/dir\/that\/is\/easy\/to\/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n> 2. Do `ds.save_to_disk(\"local\/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after [[Librispeech] Add 'all' config\u00a0#4184 (comment)](https:\/\/github.com\/huggingface\/datasets\/pull\/4184#discussion_r854132740) is fixed and can be done for each person individually.\r\n> 3. `ds = datasets.load_from_disk(\"local\/path\")` can then be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3`. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, loads them and then saves them on disk afterward.\r\n\r\nOh, so you say that our current implementation in https:\/\/github.com\/rwth-i6\/i6_core\/pull\/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of `save_to_disk`. I think it would be good to clarify that in the doc of `save_to_disk`, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nSo, you say we anyway need to share the cache dir among users? But we would want to make sure that after the initial download and preparation of the data, this is set to readonly, because we want to make sure that other people will not modify the data in any way. Right?\r\n\r\nBut then, we don't really need the `save_to_disk` and `load_from_disk` at all, right?\r\n","@albertz \r\n\r\n> Oh, so you say that our current implementation in https:\/\/github.com\/rwth-i6\/i6_core\/pull\/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of save_to_disk. I think it would be good to clarify that in the doc of save_to_disk, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nOh, I wasn't aware that audio files are handled this way. Then we should have the cache directory as an additional job output, so that we keep the audio files. \r\n\r\n> So, you say we anyway need to share the cache dir among users?\r\n\r\nNo, the cache dir can still be a directory in the job output folder. Then the audio paths in the corresponding dataset column correspond to the flac files in that directory. This way the \"output\" of the job is contained in the job directory and we don't write files to a global cache directory that is independent of the sisyphus graph.\r\n\r\nIf we want to share the audio data between different users, we can just link to a central instance of the job (similar to how we do it with the `DownloadLibriSpeechCorpusJob`).","@dthulke - that's a good point actually! So you can do both things:\r\n\r\n1. Convert all audio files to bytes. Bytes can be saved by `arrow` so in this case you can do `save_to_disk(...)`, but then you cannot really inspect the audio files locally as they'll just be saved within a large arrow file (this actually used to be the default case but we're changing this now). The problem with this is summarized here a bit: https:\/\/github.com\/huggingface\/datasets\/issues\/3663 . You can still do this if you'd like, e.g. 
you could do:\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\n\r\ndef read_file(batch):\r\n    # read the audio file as raw bytes so that they end up inside the arrow file\r\n    with open(batch[\"file\"], \"rb\") as f:\r\n        batch[\"bytes\"] = f.read()\r\n    return batch\r\n\r\nds = ds.map(read_file)\r\nds.save_to_disk(\"\/path\")  # <- the saved arrow object will now contain everything you need\r\n```\r\n\r\nhowever this is not recommended - it should be much easier to just save the path to the downloaded audio files.\r\n\r\n2. Not convert audio files to bytes, but just leave them in their original file format. Then only the path to the original files will be saved in arrow. This will be the default case. This means that when you do `load_dataset(...)` both the original audio data and the arrow file will be saved in the `cache_dir` (which can be saved locally for every user or in a shared cache - we actually use a shared cache quite a bit at Hugging Face). When you do `save_to_disk(...)`, now only the `path` will be saved in `arrow` format (after this PR is merged, you'll see that the arrow files should be very lightweight), meaning that `save_to_disk(...)` can be done for every user, but has a dependency on the `cache_dir` (because the audio files live there).\r\n\r\n=> Now what you could do as well would be to simply move all the audio files to the folder you want (the `save_to_disk(...)` folder) and then change the path of every sample to this folder (maybe with `map(...)`) and then this folder would be self-contained. I do however think it's better to just specify a `cache_dir` and re-use `load_dataset(...)` every time instead of `load_from_disk` or `save_to_disk(...)`. Note that you can even pass the relevant cache files to `load_dataset(...)` here: https:\/\/huggingface.co\/docs\/datasets\/v2.1.0\/en\/package_reference\/loading_methods#datasets.load_dataset.data_files in which case you can be 100% sure that nothing is redownloaded. \r\n\r\nWe discussed storing audio files quite a bit, e.g. see: https:\/\/github.com\/huggingface\/datasets\/issues\/3663 and had (too many) changes around this topic recently, but we've come to the conclusion that the best is to leave the audio in the format it was originally (`.flac` for Librispeech) so that the user can easily inspect it \/ understand the data. Arrow cannot save data as `.flac`, so we'll just save a path to the original data. Curious to hear you guys' opinion on this as well.","So what I would suggest here is to do the following:\r\n\r\n1. Do `load_dataset(..., cache_dir=\/a\/read-only\/folder)`\r\n2. \r\n- Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading \r\n\r\nor \r\n\r\n- If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then do `ds.save_to_disk(\/some\/path)` which will save the correct paths to the MP3 files in the read-only folder and then you can easily re-use the small arrow dataset that is saved in `\/some\/path`","> So what I would suggest here is to do the following:\r\n> \r\n> 1. 
Do `load_dataset(..., cache_dir=\/a\/read-only\/folder)`\r\n> \r\n> * Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading\r\n> \r\n> or\r\n> \r\n> * If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then do `ds.save_to_disk(\/some\/path)` which will save the correct paths to the MP3 files in the read-only folder and then you can easily re-use the small arrow dataset that is saved in `\/some\/path`\r\n\r\nAlso relevant here: https:\/\/github.com\/huggingface\/datasets\/issues\/3663","I also added some documentation about how `save_to_disk` handles audio files here: https:\/\/github.com\/huggingface\/datasets\/pull\/4193","> > So, you say we anyway need to share the cache dir among users?\r\n> \r\n> No, the cache dir can still be a directory in the job output folder.\r\n\r\n@dthulke But this is what I mean. When we share the job output folder, it means we share the cache dir among users.\r\n\r\nI wonder if `load_dataset(..., cache_dir=job_output_cache_dir)` is always safe to do then, i.e. that it really would not modify the `job_output_cache_dir`.\r\n\r\nWe could enforce that by making the `job_output_cache_dir` read-only afterwards. We currently don't do this.\r\n\r\n@patrickvonplaten @dthulke But in any case, we actually prefer the data content to be inside the dataset (the arrow files). Lots of small files would be very problematic for our cache manager. We have one main copy of the data on NFS, but accessing the NFS directly by all computing nodes is not feasible, so the cache manager will have copies of the files on the nodes. So it means, whenever we access some file, we query the cache manager DB whether the file is already cached somewhere (some other computing node) and if so, it copies it from the other computing node and not from NFS. This works very well when there are not too many files (but the files can be big). So, we want to have only a few but big files. Even for NFS access this is much better.\r\n\r\nI also commented in #3663.\r\n","Hey @albertz @dthulke,\r\n\r\nThanks a lot for your input! \r\n\r\nWe've discussed quite a bit with @lhoestq and we think the best approach is the following:\r\n\r\na)\r\n`load_dataset(...)` will not store both bytes and the files because this would mean that 3x the size of the dataset would often be needed (1. the compressed `tar.gz` file, 2. the extracted files, 3. the raw bytes in arrow format). \r\n\r\nFor canonical datasets like librispeech and common voice I think we want to keep the dataset filenames because of i) no breaking changes and ii) reasons explained in #3663\r\n\r\nHowever it's also trivial to write your own dataset downloading script for librispeech and just not extract the folder, e.g. this line: https:\/\/huggingface.co\/datasets\/common_voice\/blob\/main\/common_voice.py#L671\r\n\r\nAnd then it'll be allowed to save the bytes and the dataset will be self-contained out-of-the-box when using `load_dataset(...)`\r\n\r\nb) Now, one major problem that you guys uncovered is that `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap. 
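A rough sketch of the "move the audio files and fix the paths with `map(...)`" idea mentioned above; the `"file"` column name matches the Librispeech loading script, while the target folder and helper function are made up for illustration.

```python
import os
import shutil

from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation")

target_dir = "/some/self_contained_folder"  # hypothetical destination
os.makedirs(target_dir, exist_ok=True)

def relocate_audio(example):
    # Copy the audio file into the target folder and point the example at the copy.
    new_path = os.path.join(target_dir, os.path.basename(example["file"]))
    shutil.copy(example["file"], new_path)
    example["file"] = new_path
    return example

ds = ds.map(relocate_audio)
ds.save_to_disk(os.path.join(target_dir, "arrow"))
```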
This means that after we've corrected this, when you download the canonical librispeech dataset the following will work:\r\n\r\n```python\r\nds = load_dataset(\"....\") # <- here we have a dependency on the filepaths\r\nds[0][\"audio\"][\"bytes\"] # <- will not work\r\n\r\nds.save_to_disk(\"\/local\/path\") # <- now we want to have a self-contained dataset in arrow format, so we load the files into bytes and save it in arrow format\r\n\r\n# now you can delete everything besides \"\/local\/path\"\r\n\r\nds = load_from_disk(\"\/local\/path\") # <- this will work\r\n```\r\n\r\nSo either option a), where you define your own librispeech data downloading script (you guys could just sign up here: https:\/\/huggingface.co\/join) and upload a dataset loading script in private mode so that no one can see it, and always store the audio as bytes, or option b), where you first load, then save to disk, then delete the cache, would work. \r\n\r\nHope that fits in your vision :-)\r\n\r\ncc @lhoestq @mariosasko ","@patrickvonplaten sounds like a good approach to me. For b) this could even be configurable with a parameter like `embed_external_files` as you have for `push_to_hub` (if people prefer to keep separate audio files).\r\n","> However it's also trivial to write your own dataset downloading script for librispeech and just not extract the folder\r\n\r\nI don't exactly understand. In all cases, we need to extract it to prepare the dataset, or not? No matter if we want to store the raw bytes inside the dataset or leave them as local files. Just in the first case, we can safely delete the extracted files after the dataset preparation.\r\n\r\n> `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap.\r\n\r\nFor us, this sounds exactly like what we want.\r\n\r\nBut regarding not introducing breaking changes, wouldn't this maybe also break some setups for users who don't expect this new behavior?\r\n","@albertz I would suggest moving the discussion on implementation details on our side to the following issue: rwth-i6\/i6_core\/issues\/257","I like the idea of adding `embed_external_files` and setting it to True by default in `save_to_disk`.\r\nIt's indeed a kind of breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice:\r\n1. I like the idea of having it self-contained, in case you want to delete your cache\r\n2. users also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset\r\n3. 
consistency with `push_to_hub`\r\n\r\nIf it sounds good to you I'll open an issue to discuss this and track the advancements","Closed #4179."],"created_at":1650385676000,"updated_at":1661754957000,"closed_at":1650620717000,"author_association":"MEMBER","active_lock_reason":null,"body":"Add `\"all\"` config to Librispeech\r\n\r\nClosed #4179","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4184\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4184","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4184","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4184.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4184.patch","merged_at":1650620717000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4183","id":1208449335,"node_id":"PR_kwDODunzps42bjXn","number":4183,"title":"Document librispeech configs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https:\/\/github.com\/huggingface\/datasets\/issues\/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: https:\/\/huggingface.co\/datasets\/librispeech_asr#data-splits ? ","This doc extension does not explain why I can't simply load the whole dataset. 
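For reference, the workaround from #4179 that the comments above suggest documenting looks roughly like this (a sketch; the split names are taken from the dataset card linked above):

```python
from datasets import concatenate_datasets, load_dataset

clean = load_dataset("librispeech_asr", "clean")
other = load_dataset("librispeech_asr", "other")

# concatenate_datasets expects Dataset objects rather than DatasetDicts,
# so combine the individual splits.
train = concatenate_datasets([clean["train.100"], clean["train.360"], other["train.500"]])
test = concatenate_datasets([clean["test"], other["test"]])
```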
Or what workaround I need to get the whole dataset, which is what people usually want for Librispeech.","_The documentation is not available anymore as the PR was closed or merged._","@lhoestq, I can add a `\"all\"` config to Librispeech have the datasets already cached somewhere ","I'm closing this PR then, feel free to continue the discussion in https:\/\/github.com\/huggingface\/datasets\/issues\/4179\r\n"],"created_at":1650378419000,"updated_at":1650381696000,"closed_at":1650381320000,"author_association":"MEMBER","active_lock_reason":null,"body":"Added an example of how to load one config or the other","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4183\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4183","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4183","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4183.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4183.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4182","id":1208285235,"node_id":"I_kwDODunzps5IBPgz","number":4182,"title":"Zenodo.org download is not responding","user":{"login":"dkajtoch","id":32985207,"node_id":"MDQ6VXNlcjMyOTg1MjA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32985207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dkajtoch","html_url":"https:\/\/github.com\/dkajtoch","followers_url":"https:\/\/api.github.com\/users\/dkajtoch\/followers","following_url":"https:\/\/api.github.com\/users\/dkajtoch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dkajtoch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dkajtoch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dkajtoch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dkajtoch\/orgs","repos_url":"https:\/\/api.github.com\/users\/dkajtoch\/repos","events_url":"https:\/\/api.github.com\/users\/dkajtoch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dkajtoch\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]","Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at 
Zenodo. You can see this on their website: https:\/\/marcobaroni.org\/composes\/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems from it.\r\n\r\nOn the other hand, we could contact the data owners and propose them to host their data at our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n","Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kind of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in Hugging Face Hub would be a great solution. ","Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.","Ahhh good point! License is the problem :("],"created_at":1650371217000,"updated_at":1650438665000,"closed_at":1650438665000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nSource download_url from zenodo.org does not respond. \r\n`_DOWNLOAD_URL = \"https:\/\/zenodo.org\/record\/2787612\/files\/SICK.zip?download=1\"`\r\nOther datasets also use zenodo.org to store data and they cannot be downloaded as well.\r\n\r\nIt would be better to actually use more reliable way to store original data like s3 bucket.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nload_dataset(\"sick\")\r\n```\r\n\r\n## Expected results\r\nDataset should be downloaded.\r\n\r\n## Actual results\r\nConnectionError: Couldn't reach https:\/\/zenodo.org\/record\/2787612\/files\/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError(\"HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. 
(read timeout=100)\")))\r\n\r\n## Environment info\r\n- `datasets` version: 2.1.0\r\n- Platform: Darwin-21.4.0-x86_64-i386-64bit\r\n- Python version: 3.7.11\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.5\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4182\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4181","id":1208194805,"node_id":"I_kwDODunzps5IA5b1","number":4181,"title":"Support streaming FLEURS dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.","Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \r\nhttps:\/\/huggingface.co\/datasets\/google\/fleurs\/commit\/dcf80160cd77977490a8d32b370c027107f2407b \r\n\r\nreal quick. \r\n\r\nI think the problem is that we cannot ensure that the metadata file is found before the audio. Or is this possible somehow @lhoestq ? 
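A rough sketch of what the `iter_archive`-based fix can look like inside a dataset script; the class name, file extensions, and metadata handling are assumptions for illustration rather than the actual FLEURS code (`_info` is omitted, and the URL is the one from the issue's error message).

```python
import datasets

_DL_URL = "https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz"

class Fleurs(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # download() keeps the TAR archive as-is, which also works in streaming mode
        archive = dl_manager.download(_DL_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path-inside-archive, file-object) pairs in archive
        # order, so the metadata file must precede the audio files for a single
        # pass to work - the exact concern raised in the comments above.
        for key, (path, f) in enumerate(files):
            if path.endswith(".tsv"):
                ...  # parse the transcription metadata
            elif path.endswith(".wav"):
                yield key, {"audio": {"path": path, "bytes": f.read()}}
```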
","@patrickvonplaten I think the metadata file should be found first because the audio files are contained in a folder next to the metadata files (just as in common voice), so the metadata files should be \"on top of the list\" as they are closer to the root in the directories hierarchy ","@patrickvonplaten but apparently it doesn't... I don't really know why.","Yeah! Any ideas what could be the reason here? cc @lhoestq ?","The order of the files is determined when the TAR archive is created, depending on the commands the creator ran.\r\nIf the metadata file is not at the beginning of the file, that makes streaming completely inefficient. In this case the TAR archive needs to be recreated in an appropriate order.","Actually we could maybe just host the metadata file ourselves and then stream the audio data only. Don't think that this would be a problem for the FLEURS authors (I can ask them :-)) ","I made a PR to their repo to support streaming (by uploading the metadata file to the Hub). See:\r\n- https:\/\/huggingface.co\/datasets\/google\/fleurs\/discussions\/4","I'm closing this issue as the PR above has been merged."],"created_at":1650366596000,"updated_at":1658749442000,"closed_at":1658749442000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Dataset viewer issue for '*name of the dataset*'\r\n\r\nhttps:\/\/huggingface.co\/datasets\/google\/fleurs\r\n\r\n```\r\nStatus code: 400\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https:\/\/storage.googleapis.com\/xtreme_translations\/FLEURS\/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```\r\n\r\nAm I the one who added this dataset ? Yes\r\n\r\nCan I fix this somehow in the script? @lhoestq @severo \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4181\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4180","id":1208042320,"node_id":"I_kwDODunzps5IAUNQ","number":4180,"title":"Add some iteration method on a dataset column (specific for 
inference)","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio\"]` ? Currently it returns a list with all the decoded audio data in memory.\r\n\r\nIt would be a breaking change though, since `isinstance(dataset[\"audio\"], list)` wouldn't work anymore, but we could implement a `Sequence` so that `dataset[\"audio\"][0]` still works and only loads one item in memory.\r\n\r\nYour alternative suggestion with `iterate` is also sensible, though maybe less satisfactory in terms of experience IMO","I agree that current behavior (decoding all audio file sin the dataset when accessing `dataset[\"audio\"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.\r\n\r\nTherefore I upvote for a \"useful\" behavior of `dataset[\"audio\"]`. I don't think the breaking change is important in this case, as I guess no many people use it with its current behavior. Therefore, for me it seems reasonable to return a generator (instead of an in-memeory list) for \"special\" features, like Audio\/Image.\r\n\r\n@lhoestq on the other hand I don't understand your proposal about Pandas-like... ","I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial.","@lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember since I worked with pandas for more than 6 years, there was no lazy in-memory feature; it was everything in-memory; that was the reason why other frameworks were created, like Vaex or Dask, e.g. ","Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;)"],"created_at":1650359745000,"updated_at":1650537058000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\nCurrently, `dataset[\"audio\"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.\r\nHaving an iterator (or sequence) type of object would make inference with `transformers`' `pipeline` easier to use and not so memory hungry.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\nFor a non-breaking change:\r\n\r\n```python\r\nfor audio in dataset.iterate(\"audio\"):\r\n    # {\"array\": np.array(...), \"sampling_rate\":...}\r\n```\r\n\r\nFor a breaking change solution (not necessary), changing the type of `dataset[\"audio\"]` to a sequence type so that\r\n\r\n```python\r\npipe = pipeline(model=\"...\")\r\nfor out in pipe(dataset[\"audio\"]):\r\n    # {\"text\":....}\r\n```\r\ncould work\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n```python\r\ndef iterate(dataset, key):\r\n    for item in dataset:\r\n        yield item[key]\r\n\r\nfor out in pipeline(iterate(dataset, \"audio\")):\r\n    # {\"array\": ...}\r\n```\r\n\r\nThis works but requires the helper function, which feels slightly clunky.\r\n\r\n**Additional context**\r\nAdd any other context about the feature request here.\r\n\r\nThe context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https:\/\/github.com\/huggingface\/transformers\/pull\/16723\/files\r\n\r\n@lhoestq \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4180\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4179","id":1208001118,"node_id":"I_kwDODunzps5IAKJe","number":4179,"title":"Dataset librispeech_asr fails to 
load","user":{"login":"albertz","id":59132,"node_id":"MDQ6VXNlcjU5MTMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59132?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertz","html_url":"https:\/\/github.com\/albertz","followers_url":"https:\/\/api.github.com\/users\/albertz\/followers","following_url":"https:\/\/api.github.com\/users\/albertz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertz\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertz\/repos","events_url":"https:\/\/api.github.com\/users\/albertz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@patrickvonplaten Hi! I saw that you prepared this? :)","Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https:\/\/www.openslr.org\/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https:\/\/github.com\/tensorflow\/datasets\/issues\/3885","Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ","Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n","If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https:\/\/huggingface.co\/docs\/datasets\/v2.1.0\/en\/process#concatenate","Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n","cc @lhoestq FYI maybe the docs can be improved here","Ah thanks. But wouldn't it be easier\/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?","Pinging @lhoestq here, think this could make sense! 
Not sure however what the dictionary would then look like","Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https:\/\/github.com\/huggingface\/datasets\/pull\/4183","> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n","+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs","Ok ! Adding the \"all\" configuration would do the job then, thanks ! In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)","I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n","Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https:\/\/github.com\/huggingface\/datasets\/pull\/4184\/files#r853272727 it should be easily possible (e.g. with the filter function https:\/\/huggingface.co\/docs\/datasets\/v2.1.0\/en\/package_reference\/main_classes#datasets.Dataset.filter )","But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\", \"all\", \"train\") \r\n```","Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?","Hmm, I don't really see how that's possible: https:\/\/github.com\/huggingface\/datasets\/blob\/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc\/datasets\/librispeech_asr\/librispeech_asr.py#L51\r\n\r\nNote that all data files related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything to avoid downloading data files that are not related to the \"split\" that one actually needs? E.g. why should the split `\"train.360\"` be downloaded if the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```","@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https:\/\/github.com\/huggingface\/datasets\/pull\/2249, but it wasn't flexible enough. 
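As a side note, the curriculum-learning scheme mentioned a few comments above maps directly onto the combined-split syntax once the "all" config exists; the split names here follow the layout described in the PR.

```python
from datasets import load_dataset

# Stage 1: 100h, stage 2: 460h, stage 3: the full 960h of training data.
stage_1 = load_dataset("librispeech_asr", "all", split="train.clean.100")
stage_2 = load_dataset("librispeech_asr", "all", split="train.clean.100+train.clean.360")
stage_3 = load_dataset("librispeech_asr", "all", split="train.clean.100+train.clean.360+train.other.500")
```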
Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.","> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https:\/\/huggingface.co\/datasets\/librispeech_asr\/blob\/main\/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https:\/\/huggingface.co\/datasets\/librispeech_asr\/blob\/main\/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https:\/\/huggingface.co\/datasets\/librispeech_asr\/blob\/main\/librispeech_asr.py#L121-L241).\r\n[Here ](https:\/\/huggingface.co\/datasets\/andreagasparini\/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}\/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https:\/\/huggingface.co\/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}\/librispeech_asr\")\r\n```","Fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/4184"],"created_at":1650357948000,"updated_at":1658938200000,"closed_at":1658938200000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nThe dataset librispeech_asr (standard Librispeech) fails to load.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndatasets.load_dataset(\"librispeech_asr\")\r\n```\r\n\r\n## Expected results\r\nIt should download and prepare the whole dataset (all subsets).\r\n\r\nIn [the doc](https:\/\/huggingface.co\/datasets\/librispeech_asr), it says it has two configurations (clean and other).\r\nHowever, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want.\r\n\r\nAlso, in case of this specific dataset, this is also the standard what the community uses. 
When you look at any publications with results on Librispeech, they always use the whole train dataset for training.\r\n\r\n## Actual results\r\n```\r\n...\r\n File \"\/home\/az\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/librispeech_asr\/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c\/librispeech_asr.py\", line 119, in LibrispeechASR._split_generators\r\n line: archive_path = dl_manager.download(_DL_URLS[self.config.name])\r\n locals:\r\n archive_path = \r\n dl_manager = \r\n dl_manager.download = >\r\n _DL_URLS = {'clean': {'dev': 'http:\/\/www.openslr.org\/resources\/12\/dev-clean.tar.gz', 'test': 'http:\/\/www.openslr.org\/resources\/12\/test-clean.tar.gz', 'train.100': 'http:\/\/www.openslr.org\/resources\/12\/train-clean-100.tar.gz', 'train.360': 'http:\/\/www.openslr.org\/resources\/12\/train-clean-360.tar.gz'}, 'other'...\r\n self = \r\n self.config = BuilderConfig(name='default', version=0.0.0, data_dir='\/home\/az\/i6\/setups\/2022-03-20--sis\/work\/i6_core\/datasets\/huggingface\/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF\/output\/data_dir', data_files=None, description=None)\r\n self.config.name = 'default', len = 7\r\nKeyError: 'default'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.1.0\r\n- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.9\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.4.2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4179\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4178","id":1207787073,"node_id":"PR_kwDODunzps42ZfFN","number":4178,"title":"[feat] Add ImageNet dataset","user":{"login":"apsdehal","id":3616806,"node_id":"MDQ6VXNlcjM2MTY4MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apsdehal","html_url":"https:\/\/github.com\/apsdehal","followers_url":"https:\/\/api.github.com\/users\/apsdehal\/followers","following_url":"https:\/\/api.github.com\/users\/apsdehal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apsdehal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apsdehal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apsdehal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apsdehal\/orgs","repos_url":"https:\/\/api.github.com\/users\/apsdehal\/repos","events_url":"https:\/\/api.github.com\/users\/apsdehal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apsdehal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not 
available anymore as the PR was closed or merged._","Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in config in case we add ImageNet 21k some time later. ","@lhoestq I have updated the PR to address all of the review comments."],"created_at":1650348095000,"updated_at":1651268639000,"closed_at":1651268228000,"author_association":"MEMBER","active_lock_reason":null,"body":"To use the dataset download the tar file\r\n[imagenet_object_localization_patched2019.tar.gz](https:\/\/www.kaggle.com\/competitions\/imagenet-object-localization-challenge\/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagenet\",\r\ndata_dir=\"\/path\/to\/imagenet_object_localization_patched2019.tar.gz\")\r\n```\r\n\r\nCurrently train and validation splits are supported.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4178\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4178","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4178","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4178.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4178.patch","merged_at":1651268228000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4177","id":1207535920,"node_id":"PR_kwDODunzps42Yxca","number":4177,"title":"Adding missing subsets to the `SemEval-2018 Task 1` 
dataset","user":{"login":"micahcarroll","id":11460267,"node_id":"MDQ6VXNlcjExNDYwMjY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11460267?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/micahcarroll","html_url":"https:\/\/github.com\/micahcarroll","followers_url":"https:\/\/api.github.com\/users\/micahcarroll\/followers","following_url":"https:\/\/api.github.com\/users\/micahcarroll\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/micahcarroll\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/micahcarroll\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/micahcarroll\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/micahcarroll\/orgs","repos_url":"https:\/\/api.github.com\/users\/micahcarroll\/repos","events_url":"https:\/\/api.github.com\/users\/micahcarroll\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/micahcarroll\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650322770000,"updated_at":1657120792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"This dataset for the [1st task of SemEval-2018](https:\/\/competitions.codalab.org\/competitions\/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, broken down by emotions (anger, fear, joy, sadness).\r\n\r\n## Remaining questions\r\n\r\nI wasn't able to find any documentation about how one should make PRs to modify datasets. Because of that, I just did my best to integrate the new data into the code, and tested locally that this worked. I'm sorry if I'm not respecting your contributing guidelines \u2013 if they are documented somewhere, I'd appreciate if you could send a pointer!\r\n\r\nNot sure how `dataset_infos.json` and `dummy` should be updated. 
My understanding is that they were automatically generated at the time of the original dataset creation?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4177\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4177","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4177","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4177.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4177.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4176","id":1206515563,"node_id":"I_kwDODunzps5H6fdr","number":4176,"title":"Very slow between two operations","user":{"login":"yananchen1989","id":26405281,"node_id":"MDQ6VXNlcjI2NDA1Mjgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26405281?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yananchen1989","html_url":"https:\/\/github.com\/yananchen1989","followers_url":"https:\/\/api.github.com\/users\/yananchen1989\/followers","following_url":"https:\/\/api.github.com\/users\/yananchen1989\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yananchen1989\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yananchen1989\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yananchen1989\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yananchen1989\/orgs","repos_url":"https:\/\/api.github.com\/users\/yananchen1989\/repos","events_url":"https:\/\/api.github.com\/users\/yananchen1989\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yananchen1989\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1650239549000,"updated_at":1650240180000,"closed_at":1650240180000,"author_association":"NONE","active_lock_reason":null,"body":"Hello, in the processing stage, I use two operations. The first one : map + filter, is very fast and it uses the full cores, while the socond step is very slow and did not use full cores. \r\n\r\nAlso, there is a significant lag between them. 
Am I missing something?\r\n\r\n\r\n\r\n ```\r\nraw_datasets = raw_datasets.map(split_func, \r\n batched=False,\r\n num_proc=args.preprocessing_num_workers,\r\n load_from_cache_file=not args.overwrite_cache, \r\n desc = \"running split para ==>\")\\\r\n .filter(lambda example: example['text1']!='' and example['text2']!='', \r\n num_proc=args.preprocessing_num_workers, desc=\"filtering ==>\")\r\n\r\n\r\n processed_datasets = raw_datasets.map(\r\n preprocess_function,\r\n batched=True, \r\n num_proc=args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset===>\",\r\n )\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4176\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4175","id":1205589842,"node_id":"PR_kwDODunzps42SqF-","number":4175,"title":"Add WIT Dataset","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hi! Coming in late with some context.\r\n\r\nThere are two versions of the WIT dataset:\r\n1. The original source dataset managed by Wikimedia. It has more information, raw image representations, and each row corresponds to an image linked to all of its captions wherever it appears in Wikipedia (in multiple languages)\r\n2. The Google version, corresponding to the data script in this PR, which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several times. 
@thomasw21 using our download manager instead of `urllib` could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nThe Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the data themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (I don't remember the model; we can keep it). That's the data script currently in https:\/\/github.com\/huggingface\/datasets\/pull\/2981 . It's nearly ready to go; the one thing we still have to do is move the raw data from our HF Google Cloud bucket to the Hub.\r\n\r\nHow do you want to move forward? IMO the best way would be to have a WIT dataset under the Wikimedia org with both configurations, but it depends on everyone's timelines","Okay, after offline discussion: we'll improve this version and push it to the Hub under the `google` namespace. \r\n\r\n> which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several times. @thomasw21 using our download manager instead of urllib could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nAh interesting, I wasn't aware of this duplication issue; concretely it'll just mean that our dataset is bigger than expected ... I think this should be handled after this loading script (though I have to figure out how to spawn a dl_manager).\r\n\r\n> The Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the data themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (I don't remember the model; we can keep it). That's the data script currently in https:\/\/github.com\/huggingface\/datasets\/pull\/2981 . It's nearly ready to go; the one thing we still have to do is move the raw data from our HF Google Cloud bucket to the Hub.\r\n\r\nSimilarly, a script will be written and pushed to the `wikimedia` organisation.","@mariosasko can you make one last review concerning the text description changes? Then I'll handle putting it under the `google` namespace and close this PR.","Looks all good now. Great job! 
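As a minimal sketch of the deduplication point discussed above (the URLs are placeholders): `datasets`' `DownloadManager` caches downloads by URL, so a duplicated image URL is only fetched once.

```py
from datasets import DownloadManager

# Placeholder image URLs with a duplicate, as in the Google WIT dumps:
urls = [
    "https://example.com/images/cat.jpg",
    "https://example.com/images/cat.jpg",  # duplicate row -> same image
    "https://example.com/images/dog.jpg",
]

# download() resolves each URL through the local cache, so the duplicate
# is fetched once and both list entries map to the same cached file.
local_paths = DownloadManager().download(urls)
```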
","Closing as this has been migrated to the hub under `google` namespace: https:\/\/huggingface.co\/datasets\/google\/wit"],"created_at":1650030152000,"updated_at":1651502041000,"closed_at":1651501601000,"author_association":"MEMBER","active_lock_reason":null,"body":"closes #2981 #2810\r\n\r\n@nateraw @hassiahk I've listed you guys as co-author as you've contributed previously to this dataset","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4175\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4175","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4175","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4175.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4175.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4174","id":1205575941,"node_id":"PR_kwDODunzps42SnJS","number":4174,"title":"Fix when map function modifies input in-place","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1650028995000,"updated_at":1650034327000,"closed_at":1650033958000,"author_association":"MEMBER","active_lock_reason":null,"body":"When `function` modifies input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. 
Therefore we need to relax the way we pop elements, by checking whether the column exists first.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4174\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4174","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4174","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4174.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4174.patch","merged_at":1650033958000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4173","id":1204657114,"node_id":"PR_kwDODunzps42Ppnd","number":4173,"title":"Stream private zipped images","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","oops looks like some tests are failing sorry, will fix them tomorrow\r\n\r\nEDIT: not today but asap hopefully","cc @mariosasko this is ready for review, let me know what you think !"],"created_at":1649949307000,"updated_at":1651759554000,"closed_at":1651759115000,"author_association":"MEMBER","active_lock_reason":null,"body":"As mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/4139 it's currently not possible to stream private\/gated zipped images from the Hub.\r\n\r\nThis is because `Image.decode_example` does not handle authentication. Indeed, decoding requires accessing and downloading the file from the private repository.\r\n\r\nIn this PR I added authentication to `Image.decode_example` via a `token_per_repo_id` optional argument. I first wanted to just pass `use_auth_token` but a single `Image` instance can be responsible for decoding images from a combination of several datasets together (from `interleave_datasets` for example). 
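A minimal sketch of that situation, with placeholder repo names: two private image datasets are interleaved into one stream, so a single token cannot cover both repositories.

```py
from datasets import interleave_datasets, load_dataset

# Hypothetical private image datasets streamed from the Hub:
ds_a = load_dataset("user/private-images-a", split="train", streaming=True)
ds_b = load_dataset("user/private-images-b", split="train", streaming=True)

# One mixed stream: the single Image feature decoding its examples must
# authenticate against whichever repository each file comes from.
mixed = interleave_datasets([ds_a, ds_b])
```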
Therefore I just used a dictionary `repo_id` -> `token` instead.\r\n\r\nI'm getting the `repo_id` from the dataset builder (I replaced the `namespace` attribute with `repo_id`).\r\n\r\nI did the same for `Audio.decode_example`.\r\n\r\ncc @SBrandeis @severo ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4173\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4173","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4173","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4173.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4173.patch","merged_at":1651759115000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4172","id":1204433160,"node_id":"PR_kwDODunzps42O7LW","number":4172,"title":"Update assin2 dataset_infos.json","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649937186000,"updated_at":1650034062000,"closed_at":1650033682000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following the comments in https:\/\/github.com\/huggingface\/datasets\/issues\/4003 we found that it was outdated and causing an error when loading the 
dataset","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4172\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4172","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4172","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4172.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4172.patch","merged_at":1650033682000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4170","id":1204413620,"node_id":"PR_kwDODunzps42O2-L","number":4170,"title":"to_tf_dataset rewrite","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","[Magic is now banned](https:\/\/www.youtube.com\/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!","@gante I renamed the default collator to `minimal_tf_collate_fn`!","@lhoestq @sgugger @gante \r\n\r\nI think this should now be ready, it looks good in testing! I'll try a few more notebooks today and tomorrow to be sure before I merge. 
Key changes are:\r\n\r\n- No column autodetection magic (will make a separate PR to add this as a `transformers` function)\r\n- Drops non-numerical features automatically (this is more of a 'DataLoader' method, we'll have a separate method to expose 'raw' datasets to `tf.data`)\r\n- Better autodetection of numerical features.\r\n- Shouldn't randomly crash mid-function :skull: \r\n\r\nWe definitely have some questions still to resolve about how to handle making a 'DataLoader' dataset versus a 'raw' dataset - see [the Notion doc](https:\/\/www.notion.so\/huggingface2\/Splitting-to_tf_dataset-c2e0773c4bec484384064b30ed634383) if you're interested. Still, since this PR is just fixes\/improvements to an existing method which never supported non-numerical features anyway, we can merge it before we've resolved those issues, and then think about how to name and split things afterwards.","P.S. I'll take out the region comments at the end before I merge, I promise! They're just helpful while I'm editing it","+1 for the tests\r\n\r\n> Drops non-numerical features automatically\r\n\r\nCan you give more details on how this works, and the rationale as well? This is not explained in the docs\r\n\r\nAlso, why are you adding `error_on_missing` and `auto_fix_label_names`? The rationale is not clear to me. In particular I think it is sensible enough to expect users not to ask for columns that don't exist, and to rename a label column when required.","@lhoestq I rewrote those parts - they were causing some other issues too! `error_on_missing` and `auto_fix_label_names` have been removed. The new logic is to simply drop (before batch collation) all columns the user doesn't ask for, but not to raise errors if the user asked for columns not in the dataset, as they may be added by the collator. Hopefully this cleans it up and matches the documentation better!","@lhoestq New tests are now in!","Seeing some other random tests failing that don't seem to be associated with this PR.","@lhoestq I can't figure out these test failures! They don't seem related to this PR at all, but I rebased to the latest version and they keep happening, even though they're not visible on master.","Thanks for the ping, will take a look tomorrow :)\r\n\r\nMaybe the rebase didn't go well for the code recently merged about label alignment from https:\/\/github.com\/huggingface\/datasets\/pull\/4277 ?","It's very strange! The rebase looks fine to me. I might try to move my changes to a new branch from `master` and see if I can figure out which change causes this problem to appear.","@lhoestq Got it! It was caused by a name collision - I was importing `typing.Sequence`, but the code also needed `features.Sequence`. The tests from that PR were expecting the latter but got the former, and then crashed.","@lhoestq Thanks! Also, when you're ready, don't merge it immediately! 
I'd like to do a quick round of manual testing with the very final build once you're happy to make sure it still works in our notebooks and examples.","@lhoestq Tests look good to me, merging now!"],"created_at":1649935858000,"updated_at":1654525872000,"closed_at":1654525329000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are:\r\n\r\n- Much better stability and no more dropping unexpected column names (Sorry @NielsRogge)\r\n- Doesn't clobber custom transforms on the data (Sorry @NielsRogge again)\r\n- Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset.\r\n- Better inference of shapes and data types\r\n- Lots of hacky special-casing code removed\r\n- Can return string columns (as `tf.String`)\r\n- Most arguments have default values, calling the method should be much simpler\r\n- ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~\r\n- Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also remove it from tests and the Overview notebook.\r\n\r\nI still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4170\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4170","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4170","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4170.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4170.patch","merged_at":1654525329000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4169","id":1203995869,"node_id":"I_kwDODunzps5Hw4Td","number":4169,"title":"Timit_asr dataset cannot be previewed 
recently","user":{"login":"YingLi001","id":75192317,"node_id":"MDQ6VXNlcjc1MTkyMzE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75192317?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YingLi001","html_url":"https:\/\/github.com\/YingLi001","followers_url":"https:\/\/api.github.com\/users\/YingLi001\/followers","following_url":"https:\/\/api.github.com\/users\/YingLi001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YingLi001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YingLi001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YingLi001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YingLi001\/orgs","repos_url":"https:\/\/api.github.com\/users\/YingLi001\/repos","events_url":"https:\/\/api.github.com\/users\/YingLi001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YingLi001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting. The bug has already been detected, and we hope to fix it soon.","TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it","> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quickly response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* need to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?","Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timir_asr\", data_dir=\"path\/to\/extracted\/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it."],"created_at":1649906911000,"updated_at":1651853211000,"closed_at":1651853211000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*timit_asr*'\r\n\r\n**Link:** *https:\/\/huggingface.co\/datasets\/timit_asr*\r\n\r\nIssue: The timit-asr dataset cannot be previewed recently.\r\n\r\nAm I the one who added this dataset ? 
Yes-No\r\nNo","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4169\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4168","id":1203867540,"node_id":"PR_kwDODunzps42NL6F","number":4168,"title":"Add code examples to API docs","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. 
Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)","Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!","Cool thanks ! I think you can also do `num_proc` for `map`"],"created_at":1649891018000,"updated_at":1651085617000,"closed_at":1651085314000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:\r\n\r\n- Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example:\r\n\r\n ```py\r\n >>> from datasets import load_dataset\r\n >>> ds = load_dataset(\"rotten_tomatoes\", split=\"validation\")\r\n >>> code example goes here\r\n ```\r\n\r\n- Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n- For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed.\r\n- Where possible, I try to show the input before and the output after using a function like `flatten` for example. 
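For instance, a fully reproducible before-and-after example of the kind described above might look like this sketch (the `rotten_tomatoes` dataset is real; the added `length` column is made up for illustration):

```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.column_names
['text', 'label']
>>> ds = ds.map(lambda batch: {"length": [len(t) for t in batch["text"]]}, batched=True)
>>> ds.column_names
['text', 'label', 'length']
>>> ds = ds.shuffle(seed=42)
>>> splits = ds.train_test_split(test_size=0.2, seed=42)
```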
Do you think this is too much and just showing the usage (ie, `>>> ds.flatten()`) will be sufficient?\r\n\r\nThanks :)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168\/reactions","total_count":2,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":2,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4168\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4168","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4168","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4168.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4168.patch","merged_at":1651085314000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4167","id":1203761614,"node_id":"PR_kwDODunzps42M1O5","number":4167,"title":"Avoid rate limit in update hub repositories","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones","_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649881937000,"updated_at":1649883401000,"closed_at":1649883032000,"author_association":"MEMBER","active_lock_reason":null,"body":"use http.extraHeader to avoid rate 
limit","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4167\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4167","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4167","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4167.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4167.patch","merged_at":1649883032000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4166","id":1203758004,"node_id":"PR_kwDODunzps42M0dS","number":4166,"title":"Fix exact match","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649881686000,"updated_at":1651580611000,"closed_at":1651580187000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Clarify docs and add clarifying example to the exact_match metric","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4166\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4166","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4166","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4166.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4166.patch","merged_at":1651580187000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4165","id":1203730187,"node_id":"PR_kwDODunzps42MubF","number":4165,"title":"Fix google bleu typos, examples","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649879994000,"updated_at":1651580632000,"closed_at":1651580204000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4165\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4165","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4165","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4165.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4165.patch","merged_at":1651580204000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4164","id":1203661346,"node_id":"PR_kwDODunzps42MfxX","number":4164,"title":"Fix duplicate key in 
multi_news","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649875704000,"updated_at":1649883856000,"closed_at":1649883482000,"author_association":"MEMBER","active_lock_reason":null,"body":"To merge after this job succeeded: https:\/\/github.com\/huggingface\/datasets\/runs\/6012207928","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4164\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4164","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4164","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4164.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4164.patch","merged_at":1649883482000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4163","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4163\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4163\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4163\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4163","id":1203539268,"node_id":"I_kwDODunzps5HvI1E","number":4163,"title":"Optional Content Warning for 
Datasets","user":{"login":"TristanThrush","id":20826878,"node_id":"MDQ6VXNlcjIwODI2ODc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20826878?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TristanThrush","html_url":"https:\/\/github.com\/TristanThrush","followers_url":"https:\/\/api.github.com\/users\/TristanThrush\/followers","following_url":"https:\/\/api.github.com\/users\/TristanThrush\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TristanThrush\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TristanThrush\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TristanThrush\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TristanThrush\/orgs","repos_url":"https:\/\/api.github.com\/users\/TristanThrush\/repos","events_url":"https:\/\/api.github.com\/users\/TristanThrush\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TristanThrush\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages\/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ","Hi @mariosasko, thanks for explaining how to add this feature. \r\n\r\nIf the current dataset yaml is:\r\n```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K\r\ndatasets\/conceptual_12m\/dummy\/default\/0.0.0\/dummy_data.zip"],"created_at":1649861843000,"updated_at":1650010381000,"closed_at":1650009985000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4162\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4162\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4162","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4162","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4162.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4162.patch","merged_at":1650009985000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4161","id":1203230485,"node_id":"PR_kwDODunzps42LEhi","number":4161,"title":"Add Visual 
Genome","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https:\/\/github.com\/huggingface\/datasets\/blame\/9f2ff14673cac1f1ad56d80221a793f5938b68c7\/src\/datasets\/utils\/resources\/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my `master` is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n \r\n cc @mariosasko @lhoestq ","> some tasks don't fit anything in tasks.json. Do I remove them in task_categories?\r\n\r\nYou can keep them, but add `other-` as a prefix to those tasks to make the CI ignore it\r\n\r\n> some tasks should exist, typically visual-question-answering (https:\/\/github.com\/huggingface\/datasets\/blame\/9f2ff14673cac1f1ad56d80221a793f5938b68c7\/src\/datasets\/utils\/resources\/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my master is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n\r\nFeel free to merge upstream\/master into your branch ;)\r\n\r\nEDIT: actually I just noticed you've already done this, thanks !","After offline discussions: will keep that image essentially it's necessary as I have a mapping that creates a mapping between url and local path (images are downloaded via a zip file) and dummy data needs to store that dummy image. The issue is when I read an annotation, I get a url, compute the local path, and basically I assume the local path exists since I've extracted all the images ... 
This isn't true if dummy data doesn't have all the images, so instead I've added a script that \"fixes\" the dummy data after using the CLI, it essentially adds the dummy image in the zip corresponding to the url."],"created_at":1649852724000,"updated_at":1650555769000,"closed_at":1650546532000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4161\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4161","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4161","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4161.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4161.patch","merged_at":1650546532000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4160","id":1202845874,"node_id":"I_kwDODunzps5Hsfiy","number":4160,"title":"RGBA images not showing","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"},{"id":4030246674,"node_id":"LA_kwDODunzps7wOK8S","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer-rgba-images","name":"dataset-viewer-rgba-images","color":"6C5FC0","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. It's a known issue, and we hope to fix it soon.","Fixed, thanks!"],"created_at":1649833163000,"updated_at":1655829791000,"closed_at":1655829791000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Dataset viewer issue for ceyda\/smithsonian_butterflies_transparent\r\n\r\n[**Link:** *link to the dataset viewer page*](https:\/\/huggingface.co\/datasets\/ceyda\/smithsonian_butterflies_transparent)\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/15624271\/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png)\r\n\r\nAm I the one who added this dataset ? 
Yes\r\n\r\n\ud83d\udc49 More of a general issue of 'RGBA' png images not being supported \r\n(the dataset itself is just for the huggan sprint and not that important, consider it just an example)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4160\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4159","id":1202522153,"node_id":"PR_kwDODunzps42Izmd","number":4159,"title":"Add `TruthfulQA` dataset","user":{"login":"jon-tow","id":41410219,"node_id":"MDQ6VXNlcjQxNDEwMjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41410219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jon-tow","html_url":"https:\/\/github.com\/jon-tow","followers_url":"https:\/\/api.github.com\/users\/jon-tow\/followers","following_url":"https:\/\/api.github.com\/users\/jon-tow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jon-tow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jon-tow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jon-tow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jon-tow\/orgs","repos_url":"https:\/\/api.github.com\/users\/jon-tow\/repos","events_url":"https:\/\/api.github.com\/users\/jon-tow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jon-tow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Bump. 
(I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful \ud83e\udd17 )"],"created_at":1649805544000,"updated_at":1654703493000,"closed_at":1654699414000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4159\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4159","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4159","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4159.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4159.patch","merged_at":1654699414000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4158","id":1202376843,"node_id":"PR_kwDODunzps42ITg3","number":4158,"title":"Add AUC ROC Metric","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1649796808000,"updated_at":1651002110000,"closed_at":1651001722000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4158\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4158","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4158","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4158.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4158.patch","merged_at":1651001722000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4157","id":1202239622,"node_id":"PR_kwDODunzps42H2Wf","number":4157,"title":"Fix formatting in BLEU metric card","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649788191000,"updated_at":1649860225000,"closed_at":1649859394000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4148 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4157\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4157","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4157","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4157.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4157.patch","merged_at":1649859394000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4156","id":1202220531,"node_id":"PR_kwDODunzps42HySw","number":4156,"title":"Adding STSb-TR dataset","user":{"login":"figenfikri","id":12762065,"node_id":"MDQ6VXNlcjEyNzYyMDY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12762065?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/figenfikri","html_url":"https:\/\/github.com\/figenfikri","followers_url":"https:\/\/api.github.com\/users\/figenfikri\/followers","following_url":"https:\/\/api.github.com\/users\/figenfikri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/figenfikri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/figenfikri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/figenfikri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/figenfikri\/orgs","repos_url":"https:\/\/api.github.com\/users\/figenfikri\/repos","events_url":"https:\/\/api.github.com\/users\/figenfikri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/figenfikri\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649787005000,"updated_at":1657120792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https:\/\/aclanthology.org\/2021.gem-1.3.pdf) added.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4156\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4156","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4156","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4156.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4156.patch","merged_at":null},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4155","id":1202183608,"node_id":"PR_kwDODunzps42Hqam","number":4155,"title":"Make HANS dataset streamable","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649784853000,"updated_at":1649851426000,"closed_at":1649851055000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fix #4133 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4155\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4155","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4155","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4155.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4155.patch","merged_at":1649851054000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4154","id":1202145721,"node_id":"PR_kwDODunzps42Hh14","number":4154,"title":"Generate tasks.json taxonomy from 
`huggingface_hub`","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Ok recomputed the json file, this should be ready to review now! @lhoestq ","Note: the generated JSON from `hf\/hub-docs` can be found in the output of a GitHub Action run on that repo, for instance in https:\/\/github.com\/huggingface\/hub-docs\/runs\/6006686983?check_suite_focus=true\r\n\r\n(click on \"Run export-tasks script\")","Should we not add the tasks with hideInDatasets?","yes, probably true \u2013 i'll change that in a PR in `hub-docs`","Yes that's good :) feel free to merge","thanks to the both of you!"],"created_at":1649783566000,"updated_at":1649932352000,"closed_at":1649931973000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4154\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4154","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4154","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4154.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4154.patch","merged_at":1649931973000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4153","id":1202040506,"node_id":"PR_kwDODunzps42HLA8","number":4153,"title":"Adding Text-based NP Enrichment (TNE) 
dataset","user":{"login":"yanaiela","id":8031035,"node_id":"MDQ6VXNlcjgwMzEwMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8031035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanaiela","html_url":"https:\/\/github.com\/yanaiela","followers_url":"https:\/\/api.github.com\/users\/yanaiela\/followers","following_url":"https:\/\/api.github.com\/users\/yanaiela\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanaiela\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanaiela\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanaiela\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanaiela\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanaiela\/repos","events_url":"https:\/\/api.github.com\/users\/yanaiela\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanaiela\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @lhoestq, can you please have a look? \ud83d\ude4f","Great, thanks again @lhoestq! I think we're good to go now","Done"],"created_at":1649778423000,"updated_at":1651586748000,"closed_at":1651586748000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Added the [TNE](https:\/\/github.com\/yanaiela\/TNE) dataset to the library","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4153\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4153","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4153","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4153.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4153.patch","merged_at":1651586748000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4152","id":1202034115,"node_id":"I_kwDODunzps5HpZXD","number":4152,"title":"ArrayND error in pyarrow 
5","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ","We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci\/config.yaml` and `.github\/workflows\/benchmarks.yaml`"],"created_at":1649778100000,"updated_at":1651656586000,"closed_at":1651656586000,"author_association":"MEMBER","active_lock_reason":null,"body":"As found in https:\/\/github.com\/huggingface\/datasets\/pull\/3903, The ArrayND features fail on pyarrow 5:\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets import Array2D\r\nfrom datasets.table import cast_array_to_feature\r\n\r\narr = pa.array([[[0]]])\r\nfeature_type = Array2D(shape=(1, 1), dtype=\"int64\")\r\ncast_array_to_feature(arr, feature_type)\r\n```\r\nraises\r\n```python\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n in \r\n----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype=\"int32\"))\r\n\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/table.py in wrapper(array, *args, **kwargs)\r\n 1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1673 else:\r\n-> 1674 return func(array, *args, **kwargs)\r\n 1675 \r\n 1676 return wrapper\r\n\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/table.py in cast_array_to_feature(array, feature, allow_number_to_str)\r\n 1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)\r\n 1807 elif not isinstance(feature, (Sequence, dict, list, tuple)):\r\n-> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n 1809 raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\n 1810 \r\n\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/table.py in wrapper(array, *args, **kwargs)\r\n 1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n 1673 else:\r\n-> 1674 return func(array, *args, **kwargs)\r\n 1675 \r\n 1676 return wrapper\r\n\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/table.py in array_cast(array, pa_type, allow_number_to_str)\r\n 1705 array = array.storage\r\n 1706 if isinstance(pa_type, pa.ExtensionType):\r\n-> 1707 return pa_type.wrap_array(array)\r\n 1708 elif pa.types.is_struct(array.type):\r\n 1709 if pa.types.is_struct(pa_type) and (\r\n\r\nAttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array'\r\n```\r\n\r\nThe 
thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails.\r\n\r\n`wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4152\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4151","id":1201837999,"node_id":"PR_kwDODunzps42GgLu","number":4151,"title":"Add missing label for emotion description","user":{"login":"lijiazheng99","id":44396506,"node_id":"MDQ6VXNlcjQ0Mzk2NTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44396506?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lijiazheng99","html_url":"https:\/\/github.com\/lijiazheng99","followers_url":"https:\/\/api.github.com\/users\/lijiazheng99\/followers","following_url":"https:\/\/api.github.com\/users\/lijiazheng99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lijiazheng99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lijiazheng99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lijiazheng99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lijiazheng99\/orgs","repos_url":"https:\/\/api.github.com\/users\/lijiazheng99\/repos","events_url":"https:\/\/api.github.com\/users\/lijiazheng99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lijiazheng99\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649769457000,"updated_at":1649771930000,"closed_at":1649771930000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4151\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4151","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4151","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4151.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4151.patch","merged_at":1649771930000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4150","id":1201689730,"node_id":"I_kwDODunzps5HoFSC","number":4150,"title":"Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split)","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649762155000,"updated_at":1651179764000,"closed_at":1651179764000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nSplits for dataset loaders without scripts are prepared inconsistently. 
I think it might be confusing for users.\r\n\r\n## Steps to reproduce the bug\r\n* If you load a packaged dataset from the Hub, it infers splits from the directory structure \/ filenames (check out the data [here](https:\/\/huggingface.co\/datasets\/nateraw\/test-imagefolder-dataset)):\r\n```python\r\nds = load_dataset(\"nateraw\/test-imagefolder-dataset\")\r\nprint(ds)\r\n### Output:\r\nDatasetDict({\r\n    train: Dataset({\r\n        features: ['image', 'label'],\r\n        num_rows: 6\r\n    })\r\n    test: Dataset({\r\n        features: ['image', 'label'],\r\n        num_rows: 4\r\n    })\r\n})\r\n```\r\n* If you do the same from locally stored data, specifying only the directory path, you'll get the same:\r\n```python\r\nds = load_dataset(\"\/path\/to\/local\/data\/test-imagefolder-dataset\")\r\nprint(ds)\r\n### Output:\r\nDatasetDict({\r\n    train: Dataset({\r\n        features: ['image', 'label'],\r\n        num_rows: 6\r\n    })\r\n    test: Dataset({\r\n        features: ['image', 'label'],\r\n        num_rows: 4\r\n    })\r\n})\r\n```\r\n* However, if you explicitly specify the package name (like `imagefolder`, `csv`, `json`), all the data is put into a single split:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_dir=\"\/path\/to\/local\/data\/test-imagefolder-dataset\")\r\nprint(ds)\r\n### Output:\r\nDatasetDict({\r\n    train: Dataset({\r\n        features: ['image', 'label'],\r\n        num_rows: 10\r\n    })\r\n})\r\n```\r\n\r\n## Expected results\r\nFor `load_dataset(\"imagefolder\", data_dir=\"\/path\/to\/local\/data\/test-imagefolder-dataset\")` I expect the same output as for the first two options.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4150\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4149","id":1201389221,"node_id":"I_kwDODunzps5Hm76l","number":4149,"title":"load_dataset for winoground returning decoding 
error","user":{"login":"odellus","id":4686956,"node_id":"MDQ6VXNlcjQ2ODY5NTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4686956?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/odellus","html_url":"https:\/\/github.com\/odellus","followers_url":"https:\/\/api.github.com\/users\/odellus\/followers","following_url":"https:\/\/api.github.com\/users\/odellus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/odellus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/odellus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/odellus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/odellus\/orgs","repos_url":"https:\/\/api.github.com\/users\/odellus\/repos","events_url":"https:\/\/api.github.com\/users\/odellus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/odellus\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook\/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/facebook\/winoground\/resolve\/a86a60456fbbd242e9a744199071a6bd3e7fd9de\/examples.jsonl')\r\n```","Hi ! 
This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages\/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n","We'll also investigate the issue with the streaming download manager in https:\/\/github.com\/huggingface\/datasets\/issues\/4139 ;) thanks for reporting","Are there any updates on this?","In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.","I mirrored the files at https:\/\/huggingface.co\/datasets\/facebook\/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('.\/winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in ()\r\n----> 1 ds = datasets.load_from_disk('.\/winoground')\r\n\r\nFile ~\/.local\/lib\/python3.8\/site-packages\/datasets\/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory .\/winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.","Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook\/winoground\")` directly (or `load_dataset(\".\/winoground\")` of you've cloned the winoground repository locally).","Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook\/winoground', use_auth_token=)`\r\n\r\nLet me know if there are any issues","Adding the dataset loading script definitely didn't take as long as I thought it would \ud83d\ude05","killer"],"created_at":1649751376000,"updated_at":1651707638000,"closed_at":1651707638000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\nI am trying to use datasets to load winoground and I'm getting a JSON decoding error.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ntoken = 'hf_XXXXX' # my HF access token\r\ndatasets = load_dataset('facebook\/winoground', use_auth_token=token)\r\n```\r\n\r\n## Expected results\r\nI downloaded images.zip and examples.jsonl manually. 
I was expecting to have some trouble decoding json so I didn't use jsonlines but instead was able to get a complete set of 400 examples by doing\r\n```python\r\nimport json\r\n\r\nwith open('examples.jsonl', 'r') as f:\r\n examples = f.read().split('\\n')\r\n\r\n# Thinking this would error if the JSON is not utf-8 encoded\r\njson_data = [json.loads(x) for x in examples]\r\nprint(json_data[-1])\r\n```\r\nand I see\r\n```python\r\n{'caption_0': 'someone is overdoing it',\r\n 'caption_1': 'someone is doing it over',\r\n 'collapsed_tag': 'Relation',\r\n 'id': 399,\r\n 'image_0': 'ex_399_img_0',\r\n 'image_1': 'ex_399_img_1',\r\n 'num_main_preds': 1,\r\n 'secondary_tag': 'Morpheme-Level',\r\n 'tag': 'Scope, Preposition'}\r\n\r\n```\r\nso I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.\r\n\r\n## Actual results\r\nDuring the split operation after downloading, datasets encounters an error in the JSON ([trace](https:\/\/gist.github.com\/odellus\/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).\r\n```\r\ndatasets\/packaged_modules\/json\/json.py:144 in Json._generate_tables(self, files)\r\n...\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.18.4\r\n- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4149\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4148","id":1201169242,"node_id":"I_kwDODunzps5HmGNa","number":4148,"title":"fix confusing bleu metric 
example","user":{"login":"aizawa-naoki","id":6253193,"node_id":"MDQ6VXNlcjYyNTMxOTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6253193?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aizawa-naoki","html_url":"https:\/\/github.com\/aizawa-naoki","followers_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/followers","following_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/orgs","repos_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/repos","events_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aizawa-naoki\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649744306000,"updated_at":1649859394000,"closed_at":1649859394000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI would like to see the example in \"Metric Card for BLEU\" changed.\r\nThe 0th element in the predictions list is not closed in square brackets, and the 1st list is missing a comma.\r\nThe BLEU score are calculated correctly, but it is difficult to understand, so it would be helpful if you could correct this.\r\n```\r\n>> predictions = [\r\n... [\"hello\", \"there\", \"general\", \"kenobi\", # <- no closing square bracket.\r\n... [\"foo\", \"bar\" \"foobar\"] # <- no comma between \"bar\" and \"foobar\"\r\n... ]\r\n>>> references = [\r\n... [[\"hello\", \"there\", \"general\", \"kenobi\"]],\r\n... [[\"foo\", \"bar\", \"foobar\"]]\r\n... ]\r\n>>> bleu = datasets.load_metric(\"bleu\")\r\n>>> results = bleu.compute(predictions=predictions, references=references)\r\n>>> print(results)\r\n{'bleu': 0.6370964381207871, ...\r\n```\r\n\r\n**Describe the solution you'd like**\r\n```\r\n>> predictions = [\r\n... [\"hello\", \"there\", \"general\", \"kenobi\", # <- no closing square bracket.\r\n... [\"foo\", \"bar\" \"foobar\"] # <- no comma between \"bar\" and \"foobar\"\r\n... 
]\r\n# and\r\n>>> print(results)\r\n{'bleu':1.0, ...\r\n```\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4148\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4147","id":1200756008,"node_id":"PR_kwDODunzps42CtPl","number":4147,"title":"Adjust path to datasets tutorial in How-To","user":{"login":"NimaBoscarino","id":6765188,"node_id":"MDQ6VXNlcjY3NjUxODg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6765188?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NimaBoscarino","html_url":"https:\/\/github.com\/NimaBoscarino","followers_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/followers","following_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/orgs","repos_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/repos","events_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NimaBoscarino\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649726434000,"updated_at":1649752344000,"closed_at":1649751962000,"author_association":"MEMBER","active_lock_reason":null,"body":"The link in the How-To overview page to the Datasets tutorials is currently broken. 
This is just a small adjustment to make it match the format used in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/tutorial.md.\r\n\r\n(Edit to add: The link in the PR deployment (https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4147\/en\/how_to) is also broken since it's actually hardcoded to `master` and not dynamic to the branch name, but other links seem to behave similarly.)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4147\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4147","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4147","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4147.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4147.patch","merged_at":1649751962000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4146","id":1200215789,"node_id":"I_kwDODunzps5Hidbt","number":4146,"title":"SAMSum dataset viewer not working","user":{"login":"aakashnegi10","id":39906333,"node_id":"MDQ6VXNlcjM5OTA2MzMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39906333?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aakashnegi10","html_url":"https:\/\/github.com\/aakashnegi10","followers_url":"https:\/\/api.github.com\/users\/aakashnegi10\/followers","following_url":"https:\/\/api.github.com\/users\/aakashnegi10\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aakashnegi10\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aakashnegi10\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aakashnegi10\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aakashnegi10\/orgs","repos_url":"https:\/\/api.github.com\/users\/aakashnegi10\/repos","events_url":"https:\/\/api.github.com\/users\/aakashnegi10\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aakashnegi10\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["https:\/\/huggingface.co\/datasets\/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```","Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed.","It looks like the host (https:\/\/arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.\r\n\r\nThis can be fix if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0"],"created_at":1649694177000,"updated_at":1651249569000,"closed_at":1651249569000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*name of the dataset*'\r\n\r\n**Link:** *link to the dataset viewer page*\r\n\r\n*short description of the issue*\r\n\r\nAm I the one who added this dataset ? 
Yes-No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4146\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4145","id":1200209781,"node_id":"PR_kwDODunzps42A6Rt","number":4145,"title":"Redirect TIMIT download from LDC","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["CI is failing because some tags are outdated, but they're fixed in #4067 ","_The documentation is not available anymore as the PR was closed or merged._","We may do a release pretty soon (today ?), let me know if it's fine to include it in the new release","Fine to include this change!"],"created_at":1649693875000,"updated_at":1649864371000,"closed_at":1649863984000,"author_association":"MEMBER","active_lock_reason":null,"body":"LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium\/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various corpus-specific license agreements specifically state that users cannot publish, retransmit, disclose, copy, reproduce or redistribute LDC databases to others outside their organizations.\r\n\r\nLDC explicitly asked us to remove the download script for the TIMIT dataset. 
In this PR I remove all means to download the dataset, and redirect users to download the data from https:\/\/catalog.ldc.upenn.edu\/LDC93S1 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4145\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4145","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4145","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4145.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4145.patch","merged_at":1649863983000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4144","id":1200016983,"node_id":"PR_kwDODunzps42ARmu","number":4144,"title":"Fix splits in local packaged modules, local datasets without script and hub datasets without script","user":{"login":"polinaeterna","id":16348744,"node_id":"MDQ6VXNlcjE2MzQ4NzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16348744?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polinaeterna","html_url":"https:\/\/github.com\/polinaeterna","followers_url":"https:\/\/api.github.com\/users\/polinaeterna\/followers","following_url":"https:\/\/api.github.com\/users\/polinaeterna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polinaeterna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polinaeterna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polinaeterna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polinaeterna\/orgs","repos_url":"https:\/\/api.github.com\/users\/polinaeterna\/repos","events_url":"https:\/\/api.github.com\/users\/polinaeterna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polinaeterna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","Thanks !\r\nI'm in favor of this change, even though it's a breaking change:\r\n\r\nif you had a dataset\r\n```\r\ndata\/\r\n    train.csv\r\n    test.csv\r\n```\r\n\r\nthen running this code would now return both train and test splits:\r\n```python\r\nload_dataset(\"csv\", data_dir=\"data\/\")\r\n```\r\nwhereas right now it returns only a train split with the data from both CSV files.\r\n\r\nIn my opinion it's ok to do this breaking change because:\r\n- it makes this behavior consistent with `load_dataset(\"path\/to\/data\")` that also returns both splits: data_files resolution must be the same\r\n- I don't expect too many affected users (unless people really wanted to group train and test images in the train split on purpose ?) 
compared to the many new users to come (especially with #4069 )\r\n- this usage will become more and more common as we add packaged builder and imagefolder\/audiofolder usage grows, so it may be better to do this change early\r\n\r\nLet me know if you think this is acceptable @mariosasko @albertvillanova or not, and if you think we need to first have a warning for some time before switching to this new behavior","Also, if people really want to put train and test, say, images in a single train split they could do \r\n`load_dataset(\"imagefolder\", data_files={\"train\": \"\/path\/to\/data\/**})`. Probably (arguably :)), if this is a more counterintuitive case, then it should require manual files specification, not a default one (in which we expect that users do want to infer splits from filenames \/ dir structure but currently they have to pass smth like `{\"train\": \"\/path\/to\/data\/train*\", \"test\": \"\/path\/to\/data\/test*\"}` explicitly as `data_files`) ","I also like this change, and I don't think we even need a warning during the transition period, considering I've been asked several times since the release of `imagefolder` why splits are not correctly inferred if the directory structure is as follows:\r\n```\r\ndata_dir\r\n train\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n test\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n```","Cool ! Feel free to add a test (maybe something similar to `test_PackagedDatasetModuleFactory_with_data_dir` but with a data_dir that contains several splits) and mark this PR as ready for review then @polinaeterna :)","@lhoestq @mariosasko do you think it's a good idea to do the same with `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` (see the latest change). If we agree on the current change, doing \r\n```python\r\nds = load_dataset(\"polinaeterna\/jsonl_test\", data_dir=\"data\/\")\r\n```\r\non dataset with the following structure:\r\n```\r\ntrain.jsonl\r\ntest.jsonl\r\ndata\/\r\n train.jsonl\r\n test.jsonl\r\n```\r\nwill result in having two splits from files under `data\/` dir in specified repo, while master version returns a single train split. \r\nThe same would be for local dataset without script if doing smth like:\r\n```python\r\nds = load_dataset(\"\/home\/polina\/workspace\/repos\/jsonl_test\", data_dir=\"\/home\/polina\/workspace\/repos\/jsonl_test\/data\")\r\n```\r\n(though I'm not sure I understand this use case :D)\r\nLet me know if you think we should preserve the same logic for all factories or if I should roll back this change.","@lhoestq to test passing subdirectory (`base_path`) to data_files functions and methods, I extended the temporary test directory with data so that it contains subdirectory. Because of that the number of files in this directory increased, so I had to change some numbers and patterns to account for this change - [907ddf0](https:\/\/github.com\/huggingface\/datasets\/pull\/4144\/commits\/907ddf09d3afece5afbae18675c859d6e453f2bf)\r\n\r\nDo you think it's ok? Another option is to create another tmp dir and do all the checks inside it. 
"],"created_at":1649685453000,"updated_at":1651223534000,"closed_at":1651179765000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"fixes #4150\r\n\r\nI suggest to infer splits structure from files when `data_dir` is passed with `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of generating files with `data_dir\/**` patterns and putting them all into a single default (train) split.\r\n\r\nI would also suggest to align `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` with this logic (remove `data_files = os.path.join(data_dir, \"**\")`). It's not reflected in the current code now as I'd like to discuss it cause I might be unaware of some use cases. @lhoestq @mariosasko @albertvillanova WDYT?","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4144\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4144","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4144","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4144.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4144.patch","merged_at":1651179764000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4143","id":1199937961,"node_id":"I_kwDODunzps5HhZmp","number":4143,"title":"Unable to download `Wikepedia` 20220301.en version","user":{"login":"beyondguo","id":37113676,"node_id":"MDQ6VXNlcjM3MTEzNjc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37113676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/beyondguo","html_url":"https:\/\/github.com\/beyondguo","followers_url":"https:\/\/api.github.com\/users\/beyondguo\/followers","following_url":"https:\/\/api.github.com\/users\/beyondguo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/beyondguo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/beyondguo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/beyondguo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/beyondguo\/orgs","repos_url":"https:\/\/api.github.com\/users\/beyondguo\/repos","events_url":"https:\/\/api.github.com\/users\/beyondguo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/beyondguo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:\r\n```python\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\", revision=\"master\")\r\n```","Hi, how can I load the previous \"20200501.en\" version of wikipedia which had been downloaded to the default path? Thanks!","@JiaQiSJTU just reinstall the previous version of the package, e.g. `!pip install -q datasets==1.0.0`"],"created_at":1649682014000,"updated_at":1660696675000,"closed_at":1650560654000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nUnable to download `Wikipedia` dataset, 20220301.en version\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n!pip install apache_beam mwparserfromhell\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\")\r\n```\r\n\r\n## Actual results\r\n```\r\nValueError: BuilderConfig 20220301.en not found. \r\nAvailable: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', 
'20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Ubuntu\r\n- Python version: 3.6\r\n- PyArrow version: 6.0.1","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4143\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4142","id":1199794750,"node_id":"I_kwDODunzps5Hg2o-","number":4142,"title":"Add ObjectFolder 2.0 
dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649674671000,"updated_at":1649674671000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** ObjectFolder 2.0\r\n- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance.\r\n- **Paper:** [*link to the dataset paper if available*](https:\/\/arxiv.org\/abs\/2204.02389)\r\n- **Data:** https:\/\/github.com\/rhgao\/ObjectFolder\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4142\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4141","id":1199610885,"node_id":"I_kwDODunzps5HgJwF","number":4141,"title":"Why is the dataset not visible under the dataset preview 
section?","user":{"login":"Nid989","id":75028682,"node_id":"MDQ6VXNlcjc1MDI4Njgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75028682?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nid989","html_url":"https:\/\/github.com\/Nid989","followers_url":"https:\/\/api.github.com\/users\/Nid989\/followers","following_url":"https:\/\/api.github.com\/users\/Nid989\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nid989\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nid989\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nid989\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nid989\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nid989\/repos","events_url":"https:\/\/api.github.com\/users\/Nid989\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nid989\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649666202000,"updated_at":1649703332000,"closed_at":1649696989000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*name of the dataset*'\r\n\r\n**Link:** *link to the dataset viewer page*\r\n\r\n*short description of the issue*\r\n\r\nAm I the one who added this dataset ? Yes-No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4141\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4140","id":1199492356,"node_id":"I_kwDODunzps5Hfs0E","number":4140,"title":"Error loading arxiv data 
set","user":{"login":"yjqiu","id":5383918,"node_id":"MDQ6VXNlcjUzODM5MTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5383918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjqiu","html_url":"https:\/\/github.com\/yjqiu","followers_url":"https:\/\/api.github.com\/users\/yjqiu\/followers","following_url":"https:\/\/api.github.com\/users\/yjqiu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjqiu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjqiu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjqiu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjqiu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjqiu\/repos","events_url":"https:\/\/api.github.com\/users\/yjqiu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjqiu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :)","Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:\r\n```\r\npip install -U datasets\r\n```\r\nand download the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset('scientific_papers', 'arxiv', download_mode=\"force_redownload\")\r\n```","Thanks for the quick response! It works now. The problem is that I used nlp. load_dataset instead of datasets. load_dataset."],"created_at":1649660794000,"updated_at":1649780648000,"closed_at":1649780648000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\nI met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`. 
\r\n```\r\nTraceback (most recent call last):\r\n File \"scripts\/summarization.py\", line 354, in \r\n main(args)\r\n File \"scripts\/summarization.py\", line 306, in main\r\n model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv')\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 549, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 463, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 522, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/utils\/info_utils.py\", line 38, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\nnlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https:\/\/drive.google.com\/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']\r\n```\r\n\r\nI then tried to ignore verification steps by `ignore_verifications=True` and there is another error.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 537, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 810, in _prepare_split\r\n for key, record in utils.tqdm(generator, unit=\" examples\", total=split_info.num_examples, leave=False):\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/datasets\/scientific_papers\/9e4f2cfe3d8494e9f34a84ce49c3214605b4b52a3d8eb199104430d04c52cc12\/scientific_papers.py\", line 108, in _generate_examples\r\n with open(path, encoding=\"utf-8\") as f:\r\nNotADirectoryError: [Errno 20] Not a directory: '\/home\/username\/.cache\/huggingface\/datasets\/downloads\/c0deae7af7d9c87f25dfadf621f7126f708d7dcac6d353c7564883084a000076\/arxiv-dataset\/train.txt'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"scripts\/summarization.py\", line 354, in \r\n main(args)\r\n File \"scripts\/summarization.py\", line 306, in main\r\n model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True)\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 549, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 463, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/opt\/conda\/envs\/longformer\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 539, in _download_and_prepare\r\n raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\nOSError: Cannot find data file.\r\n```\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4140\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4139","id":1199443822,"node_id":"I_kwDODunzps5Hfg9u","number":4139,"title":"Dataset viewer issue for Winoground","user":{"login":"alcinos","id":7438704,"node_id":"MDQ6VXNlcjc0Mzg3MDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7438704?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alcinos","html_url":"https:\/\/github.com\/alcinos","followers_url":"https:\/\/api.github.com\/users\/alcinos\/followers","following_url":"https:\/\/api.github.com\/users\/alcinos\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alcinos\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alcinos\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alcinos\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alcinos\/orgs","repos_url":"https:\/\/api.github.com\/users\/alcinos\/repos","events_url":"https:\/\/api.github.com\/users\/alcinos\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alcinos\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"},{"id":4030248571,"node_id":"LA_kwDODunzps7wOLZ7","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer-gated","name":"dataset-viewer-gated","color":"51F745","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/oth
er_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["related (same dataset): https:\/\/github.com\/huggingface\/datasets\/issues\/4149. But the issue is different. Looking at it","I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b\/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https:\/\/huggingface.co\/datasets\/facebook\/winoground\/resolve\/a86a60456fbbd242e9a744199071a6bd3e7fd9de\/examples.jsonl).","Pinging @SBrandeis, as it seems related to gated datasets and access tokens.","To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook\/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/packaged_modules\/json\/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/utils\/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/spec.py\", line 372, in walk\r\n listing = self.ls(path, 
detail=True, **kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/implementations\/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, **kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/implementations\/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/implementations\/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/aiohttp\/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https:\/\/huggingface.co\/datasets\/facebook\/winoground\/resolve\/a86a60456fbbd242e9a744199071a6bd3e7fd9de\/examples.jsonl')\r\n```\r\n\r\n*edited to fix `use_token` -> `use_auth_token`, thx @odellus*","~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind, it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.","After investigation with @severo, we found a potential culprit: https:\/\/github.com\/huggingface\/datasets\/blob\/3cd0a009a43f9f174056d70bfa2ca32216181926\/src\/datasets\/utils\/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ","I was able to reproduce it on a private dataset, let me work on a fix","Hey @lhoestq, thanks for working on a fix! Any plans to merge #4173 into master? ","Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)","The fix has been merged, we'll do a new release soon, and update the dataset viewer","Fixed, thanks!"],"created_at":1649657501000,"updated_at":1655829838000,"closed_at":1655829838000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for 'Winoground'\r\n\r\n**Link:** [*link to the dataset viewer page*](https:\/\/huggingface.co\/datasets\/facebook\/winoground\/viewer\/facebook--winoground\/train)\r\n\r\n*short description of the issue*\r\nGetting 401, message='Unauthorized'\r\nThe dataset is subject to authorization, but I can access the files from the interface, so I assume I'm granted access to it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.\r\n\r\nAm I the one who added this dataset ? 
No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4139\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4138","id":1199291730,"node_id":"I_kwDODunzps5He71S","number":4138,"title":"Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()","user":{"login":"iluvvatar","id":55381086,"node_id":"MDQ6VXNlcjU1MzgxMDg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55381086?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iluvvatar","html_url":"https:\/\/github.com\/iluvvatar","followers_url":"https:\/\/api.github.com\/users\/iluvvatar\/followers","following_url":"https:\/\/api.github.com\/users\/iluvvatar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iluvvatar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iluvvatar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iluvvatar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iluvvatar\/orgs","repos_url":"https:\/\/api.github.com\/users\/iluvvatar\/repos","events_url":"https:\/\/api.github.com\/users\/iluvvatar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iluvvatar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To reproduce:\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.get_dataset_split_names('MalakhovIlya\/RuREBus', config_name='raw_txt')\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MalakhovIlya--RuREBus\/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed\/RuREBus.py\", line 101, in _split_generators\r\n decode_file_names(folder)\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/MalakhovIlya--RuREBus\/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed\/RuREBus.py\", line 26, in decode_file_names\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/streaming.py\", line 66, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\nTypeError: xwalk() got an unexpected keyword argument 'topdown'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File 
\"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nIt's not related to the dataset viewer. Maybe @albertvillanova or @lhoestq could help more on this issue.","Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky. \r\n\r\n@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this param. (and `Path.rename`, which also cannot be streamed) ","@mariosasko thank you for your reply. I couldn't reproduce error showed by @severo either on Ubuntu 20.04.3 LTS, Windows 10 and Google Colab environments. But trying to avoid using os.walk(topdown=False) and Path.rename(), In _split_generators I replaced\r\n```\r\ndef decode_file_names(folder):\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n root = Path(root)\r\n for file in files:\r\n old_name = root \/ Path(file)\r\n new_name = root \/ Path(\r\n file.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n for dir in dirs:\r\n old_name = root \/ Path(dir)\r\n new_name = root \/ Path(dir.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\ndecode_file_names(folder)\r\n```\r\nby\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent \/ 'extracted' \/ p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('\/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nif not is_url(zip_file):\r\n folder = extract(zip_file)\r\nelse:\r\n folder = None\r\n```\r\nand now everything works well except data viewer for \"raw_txt\" subset: dataset preview on hub shows \"No data.\". As far as I understand dl_manager.download returns original URL when we are calling datasets.get_dataset_split_names and my suspicions are that dataset viewer can do smth similar. I couldn't find information about how it works. I would be very grateful, if you could tell me how to fix this)","This is what I get when I try to stream the `raw_txt` subset:\r\n```python\r\n>>> dset = load_dataset(\"MalakhovIlya\/RuREBus\", \"raw_txt\", split=\"raw_txt\", streaming=True)\r\n>>> next(iter(dset))\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nStopIteration\r\n```\r\nSo there is a bug in your script.","streaming=True helped me to find solution. 
I fixed\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent \/ 'extracted' \/ p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('\/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nfolder = extract(zip_file)\r\n```\r\nby \r\n```\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\npath = os.path.join(folder, 'MED_txt\/unparsed_txt')\r\nfor root, dirs, files in os.walk(path):\r\n decoded_root_name = Path(root).name.encode('cp437').decode('cp866')\r\n```\r\n@mariosasko thank you for your help :)"],"created_at":1649642833000,"updated_at":1650338146000,"closed_at":1650123989000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for 'MalakhovIlya\/RuREBus'\r\n\r\n**Link:** https:\/\/huggingface.co\/datasets\/MalakhovIlya\/RuREBus\r\n\r\n**Description**\r\nUsing os.walk(topdown=False) in DatasetBuilder causes the following error:\r\nStatus code: 400\r\nException: TypeError\r\nMessage: xwalk() got an unexpected keyword argument 'topdown'\r\nCouldn't find where \"xwalk\" comes from. How can I fix this?\r\n\r\nAm I the one who added this dataset ? Yes\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4138\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4137","id":1199000453,"node_id":"PR_kwDODunzps419D6A","number":4137,"title":"Add single dataset citations for 
TweetEval","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE The following typing errors are found: {'annotations_creators': \"(Expected `typing.List` with length > 0. Found value of type: ``, with length: 0.\\n)\\nOR\\n(Expected `typing.Dict` with length > 0. Found value of type: ``, with length: 0.\\n)\"}\r\n```\r\n\r\nAdding `found` as annotation creators."],"created_at":1649591514000,"updated_at":1649750242000,"closed_at":1649749875000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds single data citations as per request of the original creators of the TweetEval dataset.\r\n\r\nThis is a recent email from the creator:\r\n\r\n> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? 
https:\/\/github.com\/cardiffnlp\/tweeteval#citing-tweeteval\r\n(just to be sure that the creator of the single datasets also get credits when tweeteval is used)\r\n\r\nPlease let me know if this looks okay or if any changes are needed.\r\n\r\nThanks,\r\nGunjan\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4137\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4137","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4137","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4137.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4137.patch","merged_at":1649749875000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4135","id":1198307610,"node_id":"PR_kwDODunzps416-Rn","number":4135,"title":"Support streaming xtreme dataset for PAN-X config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649485188000,"updated_at":1651826380000,"closed_at":1649660054000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming xtreme dataset for PAN-X 
config.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4135\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4135","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4135","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4135.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4135.patch","merged_at":1649660054000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4134","id":1197937146,"node_id":"I_kwDODunzps5HZxH6","number":4134,"title":"ELI5 supporting documents","user":{"login":"Slayer-007","id":69015896,"node_id":"MDQ6VXNlcjY5MDE1ODk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69015896?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Slayer-007","html_url":"https:\/\/github.com\/Slayer-007","followers_url":"https:\/\/api.github.com\/users\/Slayer-007\/followers","following_url":"https:\/\/api.github.com\/users\/Slayer-007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Slayer-007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Slayer-007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Slayer-007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Slayer-007\/orgs","repos_url":"https:\/\/api.github.com\/users\/Slayer-007\/repos","events_url":"https:\/\/api.github.com\/users\/Slayer-007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Slayer-007\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Please post your question on the [forum](https:\/\/discuss.huggingface.co\/), more people will be able to help you there ;)"],"created_at":1649460987000,"updated_at":1649857966000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4134\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4133","id":1197830623,"node_id":"I_kwDODunzps5HZXHf","number":4133,"title":"HANS dataset preview broken","user":{"login":"pietrolesci","id":61748653,"node_id":"MDQ6VXNlcjYxNzQ4NjUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/61748653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pietrolesci","html_url":"https:\/\/github.com\/pietrolesci","followers_url":"https:\/\/api.github.com\/users\/pietrolesci\/followers","following_url":"https:\/\/api.github.com\/users\/pietrolesci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pietrolesci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pietrolesci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pietrolesci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pietrolesci\/orgs","repos_url":"https:\/\/api.github.com\/users\/pietrolesci\/repos","events_url":"https:\/\/api.github.com\/users\/pietrolesci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pietrolesci\/received_events","type":"User","site_admin":false},"labels":[{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py\", line 87, in __iter__\r\n yield from 
self.generate_examples_fn(**self.kwargs)\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/hans\/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac\/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/fsspec\/implementations\/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans\/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to \/home\/slesage\/.cache\/huggingface\/datasets\/hans\/plain_text\/1.0.0\/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.9\/site-packages\/tqdm\/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/hans\/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac\/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n","Hi! I've opened a PR that should make this dataset streamable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?","Thanks for this. It works well! 
"Thanks, it works well! The dataset viewer is using https:\/\/github.com\/huggingface\/datasets\/releases\/tag\/2.0.0; I'm eager to upgrade to 2.0.1 \ud83d\ude09"],"created_at":1649451975000,"updated_at":1649851054000,"closed_at":1649851054000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*hans*'\r\n\r\n**Link:** [https:\/\/huggingface.co\/datasets\/hans](https:\/\/huggingface.co\/datasets\/hans)\r\n\r\nHANS dataset preview is broken with error 400\r\n\r\nAm I the one who added this dataset? No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4133\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4132","id":1197661720,"node_id":"PR_kwDODunzps41460R","number":4132,"title":"Support streaming xtreme dataset for PAWS-X config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649442332000,"updated_at":1651826382000,"closed_at":1649451764000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming xtreme dataset for PAWS-X 
config.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4132\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4132","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4132","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4132.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4132.patch","merged_at":1649451764000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4131","id":1197472249,"node_id":"PR_kwDODunzps414Zt1","number":4131,"title":"Support streaming xtreme dataset for udpos config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649431849000,"updated_at":1651826386000,"closed_at":1649435287000,"author_association":"MEMBER","active_lock_reason":null,"body":"Support streaming xtreme dataset for udpos config.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4131\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4131","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4131","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4131.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4131.patch","merged_at":1649435287000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4130","id":1197456857,"node_id":"PR_kwDODunzps414Wqx","number":4130,"title":"Add SBU Captions Photo Dataset","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649431059000,"updated_at":1649760451000,"closed_at":1649760089000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4130\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4130","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4130","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4130.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4130.patch","merged_at":1649760089000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4129","id":1197376796,"node_id":"I_kwDODunzps5HXoUc","number":4129,"title":"dataset metadata for 
reproducibility","user":{"login":"nbroad1881","id":24982805,"node_id":"MDQ6VXNlcjI0OTgyODA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24982805?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nbroad1881","html_url":"https:\/\/github.com\/nbroad1881","followers_url":"https:\/\/api.github.com\/users\/nbroad1881\/followers","following_url":"https:\/\/api.github.com\/users\/nbroad1881\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nbroad1881\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nbroad1881\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nbroad1881\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nbroad1881\/orgs","repos_url":"https:\/\/api.github.com\/users\/nbroad1881\/repos","events_url":"https:\/\/api.github.com\/users\/nbroad1881\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nbroad1881\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649427448000,"updated_at":1649427448000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. This is useful for people who run many experiments on different versions (commits\/branches) of the same dataset. \r\n\r\nThe dataset could have a list of \u201csource datasets\u201d metadata and ignore what happens to them before arriving in the Trainer (i.e. 
ignore mapping, filtering, etc.).\r\n\r\nHere is a basic representation (made by @lhoestq )\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> \r\n>>> my_dataset = load_dataset(...)[\"train\"]\r\n>>> my_dataset = my_dataset.map(...)\r\n>>> \r\n>>> my_dataset.sources\r\n[HFHubDataset(repo_id=..., revision=..., arguments={...})]\r\n```","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129\/reactions","total_count":4,"+1":4,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4129\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4128","id":1197326311,"node_id":"PR_kwDODunzps4138I6","number":4128,"title":"More robust `cast_to_python_objects` in `TypedSequence`","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649424815000,"updated_at":1649858861000,"closed_at":1649858476000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds a fallback to run an expensive version of `cast_to_python_objects` which exhaustively checks entire lists to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. 
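A minimal sketch of this fallback strategy (the helper names `fast_cast` and `exhaustive_cast` are hypothetical stand-ins, not the actual `TypedSequence` implementation):

```python
import pyarrow as pa

def to_arrow_with_fallback(data, fast_cast, exhaustive_cast):
    # fast_cast only inspects the first elements of a list to infer types, so
    # it is cheap but can mis-handle heterogeneous lists; exhaustive_cast walks
    # every element and is only used when the cheap attempt fails.
    try:
        return pa.array(fast_cast(data))
    except pa.ArrowInvalid:
        return pa.array(exhaustive_cast(data))
```
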
Currently, this error can happen in situations where only some images are decoded in `map`: `cast_to_python_objects` fails to recognize that it also needs to cast `PIL.Image` objects that are not at the beginning of the sequence, and stops after the first image dictionary (e.g., if `data` is `[{\"bytes\": None, \"path\": \"some path\"}, PIL.Image(), ...]`).\r\n\r\nFix #4124","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4128\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4128","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4128","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4128.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4128.patch","merged_at":1649858476000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4127","id":1197297756,"node_id":"PR_kwDODunzps4132EN","number":4127,"title":"Add configs with processed data in medical_dialog dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649423296000,"updated_at":1651826390000,"closed_at":1649434851000,"author_association":"MEMBER","active_lock_reason":null,"body":"There exist processed data files that do not require parsing the raw data files (which can take a long time).\r\n\r\nFix 
#4122.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4127\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4127","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4127","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4127.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4127.patch","merged_at":1649434851000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4126","id":1196665194,"node_id":"I_kwDODunzps5HU6lq","number":4126,"title":"dataset viewer issue for common_voice","user":{"login":"laphang","id":24724502,"node_id":"MDQ6VXNlcjI0NzI0NTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24724502?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/laphang","html_url":"https:\/\/github.com\/laphang","followers_url":"https:\/\/api.github.com\/users\/laphang\/followers","following_url":"https:\/\/api.github.com\/users\/laphang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/laphang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/laphang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/laphang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/laphang\/orgs","repos_url":"https:\/\/api.github.com\/users\/laphang\/repos","events_url":"https:\/\/api.github.com\/users\/laphang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/laphang\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on 
huggingface.co"},{"id":4027368468,"node_id":"LA_kwDODunzps7wDMQU","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/audio_column","name":"audio_column","color":"F83ACF","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Yes, it's a known issue, and we expect to fix it soon.","Fixed.\r\n\r\n\"Capture\r\n"],"created_at":1649374468000,"updated_at":1650894137000,"closed_at":1650894136000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for 'common_voice'\r\n\r\n**Link:** https:\/\/huggingface.co\/datasets\/common_voice\r\n\r\nServer Error\r\nStatus code: 400\r\nException: TypeError\r\nMessage: __init__() got an unexpected keyword argument 'audio_column'\r\n\r\nAm I the one who added this dataset ? 
No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4126\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4125","id":1196633936,"node_id":"PR_kwDODunzps411qeR","number":4125,"title":"BIG-bench","user":{"login":"andersjohanandreassen","id":43357549,"node_id":"MDQ6VXNlcjQzMzU3NTQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43357549?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andersjohanandreassen","html_url":"https:\/\/github.com\/andersjohanandreassen","followers_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/followers","following_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/orgs","repos_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/repos","events_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andersjohanandreassen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> It looks like the CI is failing on windows because our windows CI is unable to clone the bigbench repository (maybe it has to do with filenames that are longer than 256 characters, which windows don't like). Could the smaller installation of bigbench via pip solve this issue ?\r\n> Otherwise we can see how to remove this limitation in our windows CI.\r\n\r\nI'm not sure.\r\nIf it's git's fault that it can't handle the long filenames, it will possibly be resolved by the pip install. If it's an issue with windows not liking long filenames after it's installed, then it will not be resolved.\r\nI don't have a windows computer to try it on, but I might be able to tweek this PR and do an experiment to find out. \r\nWe're waiting for a quota increase for the pip install (https:\/\/github.com\/pypa\/pypi-support\/issues\/1782). It's been pending for 2-3 weeks, and I don't have an estimate for when it will be resolved. \r\n\r\n>Regarding the dummy data zip files, I think we can just keep datasets\/bigbench\/dummy\/abstract_narrative_understanding\/1.0.0\/dummy_data.zip and remove all the other ones. We just require to have at least one dummy_data.zip file.\r\n\r\nSounds great. I will trim that down. ","Do you know what are the other tests dependencies that have conflicts with bigbench ? 
I can try to split the CI to end up with a compatible list of test dependencies","Hi @lhoestq,\r\n\r\nI haven't played with eliminating requirements from the test dependencies, and I've been trying to resolve this by modifying the bigbench repo to become compatible. \r\nIn the original bigbench repo, the version requirements were strict, and specifically it had a datasets==1.17.0 requirement which was causing trouble. \r\nI'm working on PR https:\/\/github.com\/google\/BIG-bench\/pull\/766 to get some more flexible versions that might be compatible with the test dependencies in HF\/datasets.\r\nWe're somewhat flexible in modifying these version numbers if we can figure out what the exact conflict is. \r\n\r\nI've spent some time experimenting with different versions, but I don't have a very efficient way of doing this debugging on my work computer (which for some reason doesn't produce the same sets of errors running python 3.9 instead of 3.6 or 3.7 in the tests). \r\nIt currently fails at \r\n> The conflict is caused by:\r\n> bert-score 0.3.6 depends on matplotlib\r\n> big-bench 0.0.1 depends on matplotlib<4.0 and >=3.5.1\r\n\r\nwhich doesn't seem like it can be the real issue. \r\n\r\nIf you have any advice for how to resolve these conflicts, that would be greatly appreciated!","Hi again @lhoestq, \r\nAfter some more or less random guessing of conflicting packages, I've managed to find a configuration that seems to be compatible with HF\/datasets. \r\n\r\nThe errors went away after removing the version limits on matplotlib and scipy, and loosening numpy from 1.19 -> 1.17 in the bigbench requirements. \r\n\r\nI might do some more tweaking to see if it lets me set some minimal version limits on matplotlib and scipy, but I think we can at least move forward.\r\n\r\nThe WIN tests are still failing, now because of \r\n\r\n> Did not find path entry C:\\tools\\miniconda3\\bin\r\n> C:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n\r\nI have no way of debugging this locally, and unless there's some way to get more verbose logs, I don't know why it's not finding pytest. Would you be able to take a quick look? \r\n\r\nUpdate: Actually, I see it's still failing because of the long filenames. So perhaps the pytest error is just because the previous steps failed. ","One more update on the WIN errors. \r\nI think all the long filenames are in files in the github repo that do not need to be included. \r\nWe will try to remove them.","Hi! The remaining error seems to be a `UnicodeDecodeError` from `setup.py`. I think you can fix your setup.py:\r\n```diff\r\n- with open(os.path.join(os.path.dirname(__file__), fname)) as f:\r\n+ with open(os.path.join(os.path.dirname(__file__), fname), encoding=\"utf-8\") as f:\r\n```\r\nIndeed, on Windows, when you `open` a file it doesn't always use \"utf-8\" by default","Hi @lhoestq, \r\nThe dependency issues seem to be resolved now \ud83c\udf89 \r\n\r\nNow, the WIN tests are failing at\r\n> ERROR tests\/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - botocore...\r\n> ERROR tests\/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - botocore...\r\n\r\nIs this testing the dummy dataset that's added in bigbench? 
If so, I might need some help getting the right format in.\r\n\r\nThe error message I'm seeing is \r\n> raise EndpointConnectionError(endpoint_url=request.url, error=e)\r\n> E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: \"http:\/\/127.0.0.1:5555\/test\"\r\n\r\nWhich seems unrelated, but perhaps the real issue is somewhere I'm not seeing? ","Woohoo, awesome!\r\n\r\nLet me check the CI error","Can you try to re-run the CI, just in case CircleCI messed up?","Hi @lhoestq, \r\nRerunning did not seem to solve the problem. \r\nThe `test_dummy_dataset_serialize_s3` error still seems to remain.","Hi again @lhoestq, \r\nI'm not sure if this is informative for debugging, but I deleted the dummy data and the Windows tests still fail while the others still pass. \r\nDo you have any idea what could be causing this error on Windows?","_The documentation is not available anymore as the PR was closed or merged._","Now the last question: let's have the dataset under `google\/bigbench`, @andersjohanandreassen?\r\n\r\nI think it would be nicer: this way, you and anyone in your team can update the dataset card whenever you want without going through a github PR. You just need to join the https:\/\/huggingface.co\/google page using your google email :)","Hi @lhoestq, \r\n\r\nThank you so much for the help! I really appreciate it!!!\r\n\r\nAfter some discussion with the other bigbench organizers, I think there is a slight preference for bigbench to not be under google\/bigbench, since this is a collaboration with researchers from many different institutions\/organizations beyond Google. \r\n\r\nI see the drawback with the updates to the dataset card having to go through a PR, but hopefully that won't be very frequent. \r\n\r\nWe're finalizing putting the bigbench api on pip, so once that's finalized I just need to update the setup.py with the correct dependency and I think we are ready to merge. ","Ok, perfect, thank you!","I noticed that in the latest windows CI run it takes forever to install the dependencies; was there any change in the bigbench dependencies recently?","Oh, sorry! I just did a double check on the dependencies, and it seems like there is at least one left that should have been removed. There's also one new one added. \r\nLet me get those removed again. Will ping you here when it's updated. ","It looks like there is a circular dependency in `bigbench` at https:\/\/storage.googleapis.com\/public_research_data\/bigbench\/bigbench-0.0.1.tar.gz\r\n\r\n```python\r\n>>> import bigbench.api.util as bb_utils\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/bigbench\/api\/util.py\", line 29, in \r\n import bigbench.models.query_logging_model as query_logging_model\r\n File \"\/home\/circleci\/.pyenv\/versions\/3.6.15\/lib\/python3.6\/site-packages\/bigbench\/models\/query_logging_model.py\", line 23, in \r\n import bigbench.api.util as util\r\nAttributeError: module 'bigbench.api' has no attribute 'util'\r\n```","Hi @lhoestq, \r\nI think we are ready to merge! \r\n\r\nI have one minor question that I haven't been able to figure out: \r\nIs there a way to keep the `verify_infos` check from triggering? I have `max_examples` as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have *very* many examples). But this is a variable that's not specified by the configs, so it raises a `NonMatchingSplitsSizesError`.\r\nI wasn't able to work my way around this, but perhaps there is a way to bypass this that I'm not seeing?\r\nIf this cannot be done, I'm happy to ignore this for now.\r\n\r\n
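For reference, a sketch of how split-size verification is normally skipped in `datasets` 2.x, using the `ignore_verifications` flag (the config name and example count are illustrative, and as lhoestq's reply below notes, a bug affected this path):

```python
from datasets import load_dataset

# Load a truncated subset while skipping the split-size verification
# that would otherwise raise NonMatchingSplitsSizesError.
ds = load_dataset(
    "bigbench",
    "abstract_narrative_understanding",
    split="train",
    max_examples=100,          # the loader-specific argument mentioned above
    ignore_verifications=True,
)
```
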
Regarding pypi, we are working on a release there, but I'm told there is a problem with the upload, and we are not sure when it will be resolved; it's not in my control. \r\nI think merging this PR with the GCS package is a great idea, and I will open a new PR when the pypi version is ready. ","Cool! Merging then :D\r\n\r\n> Is there a way to keep the verify_infos check from triggering? I have max_examples as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have very many examples). But this is a variable that's not specified by the configs, so it raises a NonMatchingSplitsSizesError.\r\n\r\nThis is a bug, I opened an issue [here](https:\/\/github.com\/huggingface\/datasets\/issues\/4462). It should be easy to fix :)","The bigbench page is available here! https:\/\/huggingface.co\/datasets\/bigbench\r\n\r\nI think we can update the dataset viewer to install bigbench on it, but since this is production code I'd rather use the version of bigbench on pypi when it comes out"],"created_at":1649370810000,"updated_at":1654711068000,"closed_at":1654709552000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds all BIG-bench json tasks to huggingface\/datasets. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4125\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4125","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4125","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4125.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4125.patch","merged_at":1654709552000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4124","id":1196469842,"node_id":"I_kwDODunzps5HUK5S","number":4124,"title":"Image decoding often fails when transforming Image 
datasets","user":{"login":"RafayAK","id":17025191,"node_id":"MDQ6VXNlcjE3MDI1MTkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17025191?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RafayAK","html_url":"https:\/\/github.com\/RafayAK","followers_url":"https:\/\/api.github.com\/users\/RafayAK\/followers","following_url":"https:\/\/api.github.com\/users\/RafayAK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RafayAK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RafayAK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RafayAK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RafayAK\/orgs","repos_url":"https:\/\/api.github.com\/users\/RafayAK\/repos","events_url":"https:\/\/api.github.com\/users\/RafayAK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RafayAK\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed 
on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decoding is invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping function that transforms some of the images upside-down.\r\n If the probability value (p) is 0.5, approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image upside-down, default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # then flip the image and set the is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```","Hi @RafayAK, thanks for reporting.\r\n\r\nThe current implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not flipping; the larger `p` is, the smaller the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping function that transforms some of the images upside-down.\r\n If the probability value (p) is 0.5, approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image upside-down, default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example\r\n```","@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this; if I hadn't posted this here, I would have never found out how I'm actually supposed to modify images in a Dataset object.","@albertvillanova Secondly, if you check the error message, it shows that around 1999 images were successfully created; I'm pretty sure some of them were also flipped during the process. Back to my main contention: sometimes the decoding takes place, other times it fails. \r\n\r\nI suppose that to run `map` on any dataset, all the examples should be invoked, even if on some of them we end up doing nothing. Is that right?","Hi @RafayAK! 
I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https:\/\/github.com\/huggingface\/datasets.git@fix-4124\r\n```","@mariosasko I'll try this right away and report back.","@mariosasko Thanks a lot for looking into this; now the `map` function at least behaves as one would expect. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing \ud83d\ude03.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping function that transforms some of the images upside-down.\r\n If the probability value (p) is 0.5, approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image upside-down, default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # then flip the image and set the is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now shows the function was applied successfully:\r\n``` bash\r\n\/home\/rafay\/anaconda3\/envs\/pytorch_new\/bin\/python \/home\/rafay\/Documents\/you_only_live_once\/upside_down_detector\/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB\/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB\/s] \r\nReusing dataset cifar100 (\/home\/rafay\/.cache\/huggingface\/datasets\/cifar100\/cifar100\/1.0.0\/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (\/home\/rafay\/.cache\/huggingface\/datasets\/cifar100\/cifar100\/1.0.0\/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000\/10000 [00:01<00:00, 5149.15ex\/s]\r\n```\r\n"],"created_at":1649359045000,"updated_at":1649858476000,"closed_at":1649858476000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nWhen transforming\/modifying images in an image dataset using the `map` function, the PIL images often fail to decode in time for the image transforms, causing errors.\r\n\r\nUsing a debugger, it is easy to see what the problem is: the Image decode invocation does not take place, and the 
resulting image passed around is still raw bytes:\r\n```\r\n[{'bytes': b'\\x89PNG\\r\\n\\x1a\\n\\x00\\x00\\x00\\rIHDR\\x00\\x00\\x00 \\x00\\x00\\x00 \\x08\\x02\\x00\\x00\\x00\\xfc\\x18\\xed\\xa3\\x00\\x00\\x08\\x02IDATx\\x9cEVIs[\\xc7\\x11\\xeemf\\xde\\x82\\x8d\\x80\\x08\\x89\"\\xb5V\\\\\\xb6\\x94(\\xe5\\x9f\\x90\\xca5\\x7f$\\xa7T\\xe5\\x9f&9\\xd9\\x8a\\\\.\\xdb\\xa4$J\\xa4\\x00\\x02x\\xc0{\\xb3t\\xe7\\x00\\xca\\x99\\xd3\\\\f\\xba\\xba\\xbf\\xa5?|\\xfa\\xf4\\xa2\\xeb\\xba\\xedv\\xa3f^\\xf8\\xd5\\x0bY\\xb6\\x10\\xb3\\xaaDq\\xcd\\x83\\x87\\xdf5\\xf3gZ\\x1a\\x04\\x0f\\xa0fp\\xfa\\xe0\\xd4\\x07?\\x9dN\\xc4\\xb1\\x99\\xfd\\xf2\\xcb\/\\x97\\x97\\x97H\\xa2\\xaaf\\x16\\x82\\xaf\\xeb\\xca{\\xbf\\xd9l.\\xdf\\x7f\\xfa\\xcb_\\xff&\\x88\\x08\\x00\\x80H\\xc0\\x80@.;\\x0f\\x8c@#v\\xe3\\xe5\\xfc\\xd1\\x9f\\xee6q\\xbf\\xdf\\xa6\\x14\\'\\x93\\xf1\\xc3\\xe5\\xe3\\xd1x\\x14c\\x8c1\\xa5\\x1c\\x9dsM\\xd3\\xb4\\xed\\x08\\x89SJ)\\xa5\\xedv\\xbb^\\xafNO\\x97D\\x84Hf .... \r\n```\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset, Dataset\r\nimport numpy as np\r\n# seeded NumPy random number generator for reproducible results.\r\nrng = np.random.default_rng(seed=0)\r\n\r\ntest_dataset = load_dataset('cifar100', split=\"test\")\r\n\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping function that transforms some of the images upside-down.\r\n If the probability value (p) is 0.5, approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: the probability of flipping the image upside-down, default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # then flip the image and set the is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n\r\n```\r\n\r\n## Expected results\r\nThe dataset should be transformed without problems.\r\n\r\n## Actual results\r\n```\r\n\/home\/rafay\/anaconda3\/envs\/pytorch_new\/bin\/python \/home\/rafay\/Documents\/you_only_live_once\/upside_down_detector\/create_dataset.py\r\nReusing dataset cifar100 (\/home\/rafay\/.cache\/huggingface\/datasets\/cifar100\/cifar100\/1.0.0\/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (\/home\/rafay\/.cache\/huggingface\/datasets\/cifar100\/cifar100\/1.0.0\/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n 20%|\u2588\u2589 | 1999\/10000 [00:00<00:01, 5560.44ex\/s]\r\nTraceback (most recent call last):\r\n File 
\"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2326, in _map_single\r\n writer.write(example)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 441, in write\r\n self.write_examples_on_file()\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 399, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 492, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow\/array.pxi\", line 230, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 185, in __arrow_array__\r\n out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"pyarrow\/array.pxi\", line 316, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert with type Image: did not recognize Python value type when inferring an Arrow data type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/rafay\/Documents\/you_only_live_once\/upside_down_detector\/create_dataset.py\", line 55, in \r\n my_test = my_test.map(generate_flipped_data)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 1953, in map\r\n return self._map_single(\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 519, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 486, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/fingerprint.py\", line 458, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_dataset.py\", line 2360, in _map_single\r\n writer.finalize()\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 522, in finalize\r\n self.write_examples_on_file()\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 399, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 492, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow\/array.pxi\", line 230, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File 
\"\/home\/rafay\/anaconda3\/envs\/pytorch_new\/lib\/python3.10\/site-packages\/datasets\/arrow_writer.py\", line 185, in __arrow_array__\r\n out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"pyarrow\/array.pxi\", line 316, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert with type Image: did not recognize Python value type when inferring an Arrow data type\r\n\r\nProcess finished with exit code 1\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux(Fedora 35)\r\n- Python version: 3.10\r\n- PyArrow version: 7.0.0\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4124\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4123","id":1196367512,"node_id":"I_kwDODunzps5HTx6Y","number":4123,"title":"Building C4 takes forever","user":{"login":"StellaAthena","id":15899312,"node_id":"MDQ6VXNlcjE1ODk5MzEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15899312?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/StellaAthena","html_url":"https:\/\/github.com\/StellaAthena","followers_url":"https:\/\/api.github.com\/users\/StellaAthena\/followers","following_url":"https:\/\/api.github.com\/users\/StellaAthena\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/StellaAthena\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/StellaAthena\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/StellaAthena\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/StellaAthena\/orgs","repos_url":"https:\/\/api.github.com\/users\/StellaAthena\/repos","events_url":"https:\/\/api.github.com\/users\/StellaAthena\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/StellaAthena\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for 
C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https:\/\/klyq.com\/beginners-bbq-class-taking-place-in-missoula\/'}\r\n```\r\nI hope this is useful for your use case."],"created_at":1649353290000,"updated_at":1649424139000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nC4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train\/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nc4 = datasets.load(\"c4\", \"en\")\r\n```\r\n\r\n## Expected results\r\nI would like to be able to download pre-split data.\r\n\r\n## Environment info\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4123\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4122","id":1196095072,"node_id":"I_kwDODunzps5HSvZg","number":4122,"title":"medical_dialog zh has very slow 
_generate_examples","user":{"login":"nbroad1881","id":24982805,"node_id":"MDQ6VXNlcjI0OTgyODA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24982805?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nbroad1881","html_url":"https:\/\/github.com\/nbroad1881","followers_url":"https:\/\/api.github.com\/users\/nbroad1881\/followers","following_url":"https:\/\/api.github.com\/users\/nbroad1881\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nbroad1881\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nbroad1881\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nbroad1881\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nbroad1881\/orgs","repos_url":"https:\/\/api.github.com\/users\/nbroad1881\/repos","events_url":"https:\/\/api.github.com\/users\/nbroad1881\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nbroad1881\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @nbroad1881, thanks for reporting.\r\n\r\nLet me have a look to try to improve its performance. 
","Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this. \r\n@albertvillanova please let me know if I am doing something unnecessary or time consuming.","Hi @nbroad1881 and @vrindaprabhu,\r\n\r\nAs a workaround for the performance of the parsing of the raw data files (this could be addressed in a subsequent PR), I have found that there are also processed data files, that do not require parsing. I have added these as new configurations `processed.en` and `processed.zh`:\r\n```python\r\nds = load_dataset(\"medical_dialog\", \"processed.zh\")\r\n```"],"created_at":1649340051000,"updated_at":1649434851000,"closed_at":1649434851000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nAfter downloading the files from Google Drive, `load_dataset(\"medical_dialog\", \"zh\", data_dir=\".\/\")` takes an unreasonable amount of time. Generating the train\/test split for 33% of the dataset takes over 4.5 hours.\r\n\r\n## Steps to reproduce the bug\r\nThe easiest way I've found to download files from Google Drive is to use `gdown` and use Google Colab because the download speeds will be very high due to the fact that they are both in Google Cloud.\r\n\r\n```python\r\nfile_ids = [\r\n \"1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E\",\r\n \"1tt7weAT1SZknzRFyLXOT2fizceUUVRXX\",\r\n \"1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc\",\r\n \"1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J\",\r\n \"1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu\",\r\n \"1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP\",\r\n \"1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c\",\r\n \"1pA3bCFA5nZDhsQutqsJcH3d712giFb0S\",\r\n \"1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU\",\r\n \"1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD\",\r\n \"1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH\",\r\n]\r\nfor i in file_ids:\r\n url = f\"https:\/\/drive.google.com\/uc?id={i}\"\r\n !gdown $url\r\n\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"medical_dialog\", \"zh\", data_dir=\".\/\")\r\n```\r\n\r\n## Expected results\r\nFaster load time\r\n\r\n## Actual results\r\n`Generating train split: 33%: 625519\/1921127 [4:31:03<31:39:20, 11.37 examples\/s]`\r\n\r\n## Environment info\r\n- `datasets` version: 2.0.0\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.13\r\n- PyArrow version: 6.0.1\r\n- Pandas version: 1.3.5\r\n\r\n@vrindaprabhu , could you take a look at this since you implemented it? 
I think the `_generate_examples` function might need to be rewritten","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4122\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4121","id":1196000018,"node_id":"I_kwDODunzps5HSYMS","number":4121,"title":"datasets.load_metric can not load a local metric","user":{"login":"Gare-Ng","id":51749469,"node_id":"MDQ6VXNlcjUxNzQ5NDY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51749469?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Gare-Ng","html_url":"https:\/\/github.com\/Gare-Ng","followers_url":"https:\/\/api.github.com\/users\/Gare-Ng\/followers","following_url":"https:\/\/api.github.com\/users\/Gare-Ng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Gare-Ng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Gare-Ng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Gare-Ng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Gare-Ng\/orgs","repos_url":"https:\/\/api.github.com\/users\/Gare-Ng\/repos","events_url":"https:\/\/api.github.com\/users\/Gare-Ng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Gare-Ng\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649335736000,"updated_at":1649339607000,"closed_at":1649339607000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nNo matter how hard I try to tell load_metric that I want to load a local metric file, it still continues to fetch things on the Internet. And unfortunately it says 'ConnectionError: Couldn't reach'. However, I can download this file without a ConnectionError and point load_metric to its local directory. 
And it comes back to where it began...\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nmetric = load_metric(path=r'C:\\Users\\Gare\\PycharmProjects\\Gare\\blue\\bleu.py')\r\n ConnectionError: Couldn't reach https:\/\/github.com\/tensorflow\/nmt\/raw\/master\/nmt\/scripts\/bleu.py\r\n\r\nmetric = load_metric(path='bleu')\r\n ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.12.1\/metrics\/bleu\/bleu.py\r\n\r\nmetric = load_metric(path='.\/blue\/bleu.py')\r\n ConnectionError: Couldn't reach https:\/\/github.com\/tensorflow\/nmt\/raw\/master\/nmt\/scripts\/bleu.py\r\n```\r\n\r\n## Expected results\r\nI did read the docs [here](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/loading_methods#datasets.load_metric). There is no parameter other than `path` that helps the function distinguish between local and online files. Based on the code above, it should load from the local file.\r\n\r\n## Actual results\r\n\r\n> metric = load_metric(path=r'C:\\Users\\Gare\\PycharmProjects\\Gare\\blue\\bleu.py')\r\n\r\n> ~\\AppData\\Local\\Temp\\ipykernel_19636\\1855752034.py in \r\n----> 1 metric = load_metric(path=r'C:\\Users\\Gare\\PycharmProjects\\Gare\\blue\\bleu.py')\r\nD:\\Program Files\\Anaconda\\envs\\Gare\\lib\\site-packages\\datasets\\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)\r\n 817 if data_files is None and data_dir is not None:\r\n 818 data_files = os.path.join(data_dir, \"**\")\r\n--> 819 \r\n 820 self.name = name\r\n 821 self.revision = revision\r\nD:\\Program Files\\Anaconda\\envs\\Gare\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)\r\n 639 self,\r\n 640 path: str,\r\n--> 641 download_config: Optional[DownloadConfig] = None,\r\n 642 download_mode: Optional[DownloadMode] = None,\r\n 643 dynamic_modules_path: Optional[str] = None,\r\nD:\\Program Files\\Anaconda\\envs\\Gare\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 297 token = hf_api.HfFolder.get_token()\r\n 298 if token:\r\n--> 299 headers[\"authorization\"] = f\"Bearer {token}\"\r\n 300 return headers\r\n 301 \r\nD:\\Program Files\\Anaconda\\envs\\Gare\\lib\\site-packages\\datasets\\utils\\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 604 def _resumable_file_manager():\r\n 605 with open(incomplete_path, \"a+b\") as f:\r\n--> 606 yield f\r\n 607 \r\n 608 temp_file_manager = _resumable_file_manager\r\nConnectionError: Couldn't reach https:\/\/github.com\/tensorflow\/nmt\/raw\/master\/nmt\/scripts\/bleu.py\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: Windows-10-10.0.22000-SP0\r\n- Python version: 3.7.13\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.4\r\n\r\nAny advice would be 
appreciated.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4121\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4120","id":1195887430,"node_id":"I_kwDODunzps5HR8tG","number":4120,"title":"Representing dictionaries (json) objects as features","user":{"login":"yanaiela","id":8031035,"node_id":"MDQ6VXNlcjgwMzEwMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8031035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanaiela","html_url":"https:\/\/github.com\/yanaiela","followers_url":"https:\/\/api.github.com\/users\/yanaiela\/followers","following_url":"https:\/\/api.github.com\/users\/yanaiela\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanaiela\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanaiela\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanaiela\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanaiela\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanaiela\/repos","events_url":"https:\/\/api.github.com\/users\/yanaiela\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanaiela\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649329661000,"updated_at":1649329661000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and may differ between samples), original asked in the [forum](https:\/\/discuss.huggingface.co\/t\/representing-nested-dictionary-with-different-keys\/16442).\r\n\r\nFor instance:\r\n\r\n```\r\nsample1 = {\"nps\": {\r\n \"a\": {\"id\": 0, \"text\": \"text1\"},\r\n \"b\": {\"id\": 1, \"text\": \"text2\"},\r\n}}\r\nsample2 = {\"nps\": {\r\n \"a\": {\"id\": 0, \"text\": \"text1\"},\r\n \"b\": {\"id\": 1, \"text\": \"text2\"},\r\n \"c\": {\"id\": 2, \"text\": \"text3\"},\r\n}}\r\nsample3 = {\"nps\": {\r\n \"a\": {\"id\": 0, \"text\": \"text1\"},\r\n \"b\": {\"id\": 1, \"text\": \"text2\"},\r\n \"c\": {\"id\": 2, \"text\": \"text3\"},\r\n \"d\": {\"id\": 3, \"text\": \"text4\"},\r\n}}\r\n```\r\n\r\nthe `nps` field cannot be represented as a Feature while maintaining its original structure.\r\n@lhoestq suggested to add JSON as a new feature type, which will solve this 
problem.\r\n\r\n\r\nIt seems like an alternative solution would be to change the original data format, which isn't an optimal solution in my case. Moreover, JSON is a common structure that will likely be useful in future datasets as well.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4120\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4119","id":1195641298,"node_id":"PR_kwDODunzps41yXHF","number":4119,"title":"Hotfix failing CI tests on Windows","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649317126000,"updated_at":1649324844000,"closed_at":1649318233000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR applies a hotfix for our CI Windows tests: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/11092\/workflows\/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc\/jobs\/67414\r\n\r\nFix #4118\r\n\r\nI guess this issue is related to this PR:\r\n- 
huggingface\/huggingface_hub#815","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119\/reactions","total_count":1,"+1":1,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4119\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4119","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4119","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4119.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4119.patch","merged_at":1649318233000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4118","id":1195638944,"node_id":"I_kwDODunzps5HRACg","number":4118,"title":"Failing CI tests on Windows","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1649316985000,"updated_at":1649318233000,"closed_at":1649318233000,"author_association":"MEMBER","active_lock_reason":null,"body":"## Describe the bug\r\nOur CI Windows tests are failing from yesterday: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/11092\/workflows\/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc\/jobs\/67414\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4118\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4117","id":1195552406,"node_id":"I_kwDODunzps5HQq6W","number":4117,"title":"AttributeError: module 'huggingface_hub' has no attribute 'hf_api'","user":{"login":"arymbe","id":4567991,"node_id":"MDQ6VXNlcjQ1Njc5OTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4567991?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arymbe","html_url":"https:\/\/github.com\/arymbe","followers_url":"https:\/\/api.github.com\/users\/arymbe\/followers","following_url":"https:\/\/api.github.com\/users\/arymbe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arymbe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arymbe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arymbe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arymbe\/orgs","repos_url":"https:\/\/api.github.com\/users\/arymbe\/repos","events_url":"https:\/\/api.github.com\/users\/arymbe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arymbe\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.","Hello, thank you for your fast replied. 
this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in \r\n----> 1 from datasets import load_dataset\r\n\r\nvenv\/lib\/python3.8\/site-packages\/datasets\/__init__.py:39, in \r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv\/lib\/python3.8\/site-packages\/datasets\/builder.py:40, in \r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv\/lib\/python3.8\/site-packages\/datasets\/data_files.py:297, in \r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'","This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```","Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)","I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. 
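\r\n\r\nFor example, here is a minimal sketch of creating a clean environment from within Python (the environment name is arbitrary; activate it afterwards and reinstall `datasets` there):\r\n```python\r\n# create a fresh virtual environment with pip available\r\nimport venv\r\n\r\nvenv.create(\".fresh-env\", with_pip=True)\r\n```\r\n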
You can use either:\r\n- Python venv: https:\/\/docs.python.org\/3\/library\/venv.html\r\n- or conda venv (if you are using conda): https:\/\/docs.conda.io\/projects\/conda\/en\/latest\/user-guide\/tasks\/manage-environments.html","Facing the same issue.\r\n\r\nResponse from `pip show datasets`\r\n```\r\nName: datasets\r\nVersion: 1.15.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https:\/\/github.com\/huggingface\/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: thomas@huggingface.co\r\nLicense: Apache 2.0\r\nLocation: \/usr\/local\/lib\/python3.8\/dist-packages\r\nRequires: aiohttp, dill, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, requests, tqdm, xxhash\r\nRequired-by: lm-eval\r\n```\r\n\r\nResponse from `pip show huggingface_hub`\r\n\r\n```\r\nName: huggingface-hub\r\nVersion: 0.8.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https:\/\/github.com\/huggingface\/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: julien@huggingface.co\r\nLicense: Apache\r\nLocation: \/usr\/local\/lib\/python3.8\/dist-packages\r\nRequires: filelock, packaging, pyyaml, requests, tqdm, typing-extensions\r\nRequired-by: datasets\r\n```\r\n\r\nresponse from `datasets-cli env`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/bin\/datasets-cli\", line 5, in \r\n from datasets.commands.datasets_cli import main\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/__init__.py\", line 37, in \r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/builder.py\", line 44, in \r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/data_files.py\", line 120, in \r\n dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/huggingface_hub\/__init__.py\", line 105, in __getattr__\r\n raise AttributeError(f\"No {package_name} attribute {name}\")\r\nAttributeError: No huggingface_hub attribute hf_api\r\n```","A workaround: \r\nI changed lines around Line 125 in `__init__.py` of `huggingface_hub` to something like\r\n```\r\n__getattr__, __dir__, __all__ = _attach(\r\n __name__,\r\n submodules=['hf_api'],\r\n```\r\nand it works ( which gives `datasets` direct access to `huggingface_hub.hf_api` ).","I was getting the same issue. After trying a few versions, following combination worked for me.\r\ndataset==2.3.2\r\nhuggingface_hub==0.7.0\r\n\r\nIn another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone. 
\r\n\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1","For layoutlm_v3 finetune\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5","(For layoutlmv3 fine-tuning) In my case, modifying `requirements.txt` as below worked.\r\n\r\n- python = 3.7\r\n\r\n```\r\ndatasets==2.3.2\r\nevaluate==0.1.2\r\nhuggingface-hub==0.8.1\r\nresponse==0.5.0\r\ntokenizers==0.10.1\r\ntransformers==4.12.5\r\nseqeval==1.2.2\r\ndeepspeed==0.5.7\r\ntensorboard==2.7.0\r\nseqeval==1.2.2\r\nsentencepiece\r\ntimm==0.4.12\r\nPillow\r\neinops\r\ntextdistance\r\nshapely\r\n```","> For layoutlm_v3 finetune datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5\r\n\r\nGOOD!! Thanks!"],"created_at":1649310756000,"updated_at":1659026644000,"closed_at":1650382595000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nCould you help me please. I got this following error.\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'\r\n\r\n## Steps to reproduce the bug\r\nwhen I imported the datasets\r\n\r\n# Sample code to reproduce the bug\r\nfrom datasets import list_datasets, load_dataset, list_metrics, load_metric\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0\r\n- Platform: macOS-12.3-x86_64-i386-64bit\r\n- Python version: 3.8.9\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.3.5\r\n- Huggingface-hub: 0.5.0\r\n- Transformers: 4.18.0\r\n\r\nThank you in advance.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117\/reactions","total_count":2,"+1":2,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4117\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4116","id":1194926459,"node_id":"PR_kwDODunzps41wCEO","number":4116,"title":"Pretty print dataset info 
files","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["maybe just do it from now on no? (i.e. not for existing `dataset_infos.json` files)","_The documentation is not available anymore as the PR was closed or merged._","> maybe just do it from now on no? (i.e. not for existing dataset_infos.json files)\r\n\r\nYes, or do this only for datasets created with `push_to_hub` to (always) keep the GH datasets small? \r\n","yep sounds good too on my side! ","I reverted the change to avoid the size increase and added the `pretty_print` flag, which pretty-prints the JSON, and that flag is only True for datasets created with `push_to_hub`. "],"created_at":1649266848000,"updated_at":1649417281000,"closed_at":1649416913000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.\r\n\r\n(suggested by @julien-c)\r\n\r\nThis PR also updates the info files of the GH datasets. 
Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea.\r\n\r\n`src\/datasets\/info.py` is the only relevant file for reviewers.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4116\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4116","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4116","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4116.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4116.patch","merged_at":1649416913000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4115","id":1194907555,"node_id":"I_kwDODunzps5HONej","number":4115,"title":"ImageFolder add option to ignore some folders like '.ipynb_checkpoints'","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ","Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ","I think they should always ignore them actually ! 
Not sure if adding a flag would be helpful","@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?","> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them."],"created_at":1649266183000,"updated_at":1654088656000,"closed_at":1654088656000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if the dataset is very large.\r\n\r\n**Describe the solution you'd like**\r\nmaybe have an option `ignore` or something .gitignore style\r\n`dataset = load_dataset(\"imagefolder\", data_dir=\".\/data\/original\", ignore=\"regex?\")`\r\n\r\n**Describe alternatives you've considered**\r\nCould filter out manually\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4115\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4114","id":1194855345,"node_id":"I_kwDODunzps5HOAux","number":4114,"title":"Allow downloading just some columns of a 
dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["In the general case you can\u2019t always reduce the quantity of data to download, since you can\u2019t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess","Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought."],"created_at":1649263126000,"updated_at":1649318186000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nSome people are interested in doing label analysis of a CV dataset without downloading all the images. 
Downloading the whole dataset does not always make sense for this kind of use case.\r\n\r\n**Describe the solution you'd like**\r\nBe able to just download some columns of a dataset, such as doing\r\n```python\r\nload_dataset(\"huggan\/wikiart\", columns=[\"artist\", \"genre\"])\r\n```\r\n\r\nAlthough this might make things a bit complicated in terms of local caching of datasets.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4114\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4113","id":1194843532,"node_id":"I_kwDODunzps5HN92M","number":4113,"title":"Multiprocessing with FileLock fails in python 3.9","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649262429000,"updated_at":1649262429000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"body":"On Python 3.9, this code hangs:\r\n```python\r\nfrom multiprocessing import Pool\r\nfrom filelock import FileLock\r\n\r\n\r\ndef run(i):\r\n print(f\"got the lock in multi process [{i}]\")\r\n\r\n\r\nwith FileLock(\"tmp.lock\"):\r\n with Pool(2) as pool:\r\n pool.map(run, range(2))\r\n\r\n```\r\n\r\nThis is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of Python.\r\n\r\nThis can cause many issues in Python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). 
Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.\r\n\r\nLet's see if we can fix this and have a CI that runs on 3.9.\r\n\r\ncc @mariosasko @julien-c ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4113\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4112","id":1194752765,"node_id":"I_kwDODunzps5HNnr9","number":4112,"title":"ImageFolder with Grayscale images dataset","user":{"login":"ChainYo","id":50595514,"node_id":"MDQ6VXNlcjUwNTk1NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50595514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChainYo","html_url":"https:\/\/github.com\/ChainYo","followers_url":"https:\/\/api.github.com\/users\/ChainYo\/followers","following_url":"https:\/\/api.github.com\/users\/ChainYo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChainYo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChainYo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChainYo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChainYo\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChainYo\/repos","events_url":"https:\/\/api.github.com\/users\/ChainYo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChainYo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`\/`with_format` as a predefined transform func for `set_transform`\/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).","Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. 
`ImageFolder` from pytorch is faster in my case but forces me to have the images on my local machine.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ","You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior."],"created_at":1649257800000,"updated_at":1650622913000,"closed_at":1650622912000,"author_association":"NONE","active_lock_reason":null,"body":"Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https:\/\/huggingface.co\/datasets\/ChainYo\/rvl-cdip) (RVL-CDIP)\r\n\r\nI'm getting an error when I try to use the images for training a model with a PyTorch DataLoader. Here is the full traceback:\r\n\r\n```bash\r\nAttributeError: Caught AttributeError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/torch\/utils\/data\/_utils\/worker.py\", line 287, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1765, in __getitem__\r\n return self._getitem(\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1750, in _getitem\r\n formatted_output = format_table(\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py\", line 58, in format_row\r\n return self.recursive_tensorize(row)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py\", line 54, in recursive_tensorize\r\n return map_nested(self._recursive_tensorize, data_struct, map_list=False)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 314, in map_nested\r\n mapped = [\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 315, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 267, in _single_map_nested\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 267, in <dictcomp>\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 
File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 251, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py\", line 51, in _recursive_tensorize\r\n return self._tensorize(data_struct)\r\n File \"\/home\/chainyo\/miniconda3\/envs\/gan-bird\/lib\/python3.8\/site-packages\/datasets\/formatting\/torch_formatter.py\", line 38, in _tensorize\r\n if np.issubdtype(value.dtype, np.integer):\r\nAttributeError: 'bytes' object has no attribute 'dtype'\r\n```\r\n\r\nI don't really understand why the image is still a bytes object while I used transformations on it. Here the code I used to upload the dataset (and it worked well):\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"imagefolder\", data_dir=\"data\/train\")\r\ntrain_dataset = train_dataset[\"train\"]\r\ntest_dataset = load_dataset(\"imagefolder\", data_dir=\"data\/test\")\r\ntest_dataset = test_dataset[\"train\"]\r\nval_dataset = load_dataset(\"imagefolder\", data_dir=\"data\/val\")\r\nval_dataset = val_dataset[\"train\"]\r\n\r\ndataset = DatasetDict({\r\n \"train\": train_dataset,\r\n \"val\": val_dataset,\r\n \"test\": test_dataset\r\n})\r\ndataset.push_to_hub(\"ChainYo\/rvl-cdip\")\r\n```\r\n\r\nNow here is the code I am using to get the dataset and prepare it for training:\r\n\r\n```python\r\nimg_size = 512\r\nbatch_size = 128\r\nnormalize = [(0.5), (0.5)]\r\ndata_dir = \"ChainYo\/rvl-cdip\"\r\n\r\ndataset = load_dataset(data_dir, split=\"train\")\r\n\r\ntransforms = transforms.Compose([\r\n transforms.Resize(img_size), \r\n transforms.CenterCrop(img_size), \r\n transforms.ToTensor(), \r\n transforms.Normalize(*normalize)\r\n])\r\n\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n\r\ntrain_dataloader = torch.utils.data.DataLoader(\r\n transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True\r\n)\r\n```\r\n\r\nBut this get me the error above. I don't understand why it's doing this kind of weird thing?\r\nDo I need to map something on the dataset? 
Something like this:\r\n\r\n```python\r\nlabels = dataset.features[\"label\"].names\r\nnum_labels = dataset.features[\"label\"].num_classes\r\n\r\n\r\ndef preprocess_data(examples):\r\n    images = [ex.convert(\"RGB\") for ex in examples[\"image\"]]\r\n    labels = [ex for ex in examples[\"label\"]]\r\n    return {\"images\": images, \"labels\": labels}\r\n\r\n\r\nfeatures = Features({\r\n    \"images\": Image(decode=True, id=None),\r\n    \"labels\": ClassLabel(num_classes=num_labels, names=labels)\r\n})\r\n\r\n\r\ndecoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)\r\n```\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4112\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4111","id":1194660699,"node_id":"PR_kwDODunzps41vJCt","number":4111,"title":"Update security policy","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1649253591000,"updated_at":1649324790000,"closed_at":1649324427000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4111\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4111","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4111","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4111.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4111.patch","merged_at":1649324427000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4110","id":1194581375,"node_id":"PR_kwDODunzps41u4Je","number":4110,"title":"Matthews Correlation Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649249975000,"updated_at":1651585397000,"closed_at":1651584973000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4110\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4110","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4110","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4110.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4110.patch","merged_at":1651584972000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4109","id":1194579257,"node_id":"PR_kwDODunzps41u3sm","number":4109,"title":"Add Spearmanr Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","changes made! 
@lhoestq let me know what you think ","The CI fail is unrelated to this PR and fixed on master, feel free to merge :)"],"created_at":1649249873000,"updated_at":1651596626000,"closed_at":1651596217000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4109\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4109","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4109","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4109.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4109.patch","merged_at":1651596217000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4108","id":1194578584,"node_id":"PR_kwDODunzps41u3j2","number":4108,"title":"Perplexity Speedup","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["WRT the high values, can you add some unit tests with some [string, model] pairs and their resulting perplexity code, and @TristanThrush can run the same pairs through his version of the code?","_The documentation is not available anymore as the PR was closed or merged._","I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does).\r\n@lhoestq , @TristanThrush thoughts?","> I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does). @lhoestq , @TristanThrush thoughts?\r\n\r\nI support this change from Emi. 
If we have a perplexity function that loads GPT2 and then returns an average over all of the strings, then it is impossible to get multiple perplexities of a batch of strings efficiently. If we have this new perplexity function that is built for batching, then it is possible to get a batch of perplexities efficiently and you can still compute the average efficiently afterwards.","Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n\r\nFor consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n```python\r\nreturn {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n```\r\nwe're also doing this for the COMET metric.","> Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n> \r\n> For consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n> \r\n> ```python\r\n> return {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n> ```\r\n> \r\n> we're also doing this for the COMET metric.\r\n\r\nThanks! Sounds great to me.","The CI fail is unrelated to your PR and has been fixed on master, feel free to merge the master branch into your PR to fix the CI ;)"],"created_at":1649249841000,"updated_at":1650459654000,"closed_at":1650459282000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR makes necessary changes to perplexity such that:\r\n- it runs much faster (via batching)\r\n- it throws an error when input is empty, or when input is one word without token\r\n- it adds the option to add a token\r\n\r\nIssues:\r\n- The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https:\/\/github.com\/huggingface\/datasets\/pull\/4108#discussion_r843931094) for some of the output values). \r\n - If the values are not correct, can you help me find the error?\r\n - If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf`\r\n\r\nFuture:\r\n- `stride` is not currently implemented here. 
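For illustration only, a rough sketch of how strided context windows could be enumerated (a hypothetical helper, not code from this PR):\r\n```python\r\ndef stride_windows(n_tokens, max_length=1024, stride=512):\r\n    # overlapping windows so each scored token keeps some left context\r\n    return [(start, min(start + max_length, n_tokens)) for start in range(0, max(n_tokens - 1, 1), stride)]\r\n```\r\n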
I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4108\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4108","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4108","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4108.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4108.patch","merged_at":1650459282000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4107","id":1194484885,"node_id":"I_kwDODunzps5HMmSV","number":4107,"title":"Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows","user":{"login":"Pavithree","id":23344465,"node_id":"MDQ6VXNlcjIzMzQ0NDY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23344465?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Pavithree","html_url":"https:\/\/github.com\/Pavithree","followers_url":"https:\/\/api.github.com\/users\/Pavithree\/followers","following_url":"https:\/\/api.github.com\/users\/Pavithree\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Pavithree\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Pavithree\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Pavithree\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Pavithree\/orgs","repos_url":"https:\/\/api.github.com\/users\/Pavithree\/repos","events_url":"https:\/\/api.github.com\/users\/Pavithree\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Pavithree\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting. I'm looking at it"," It's not related to the dataset viewer in itself. 
I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree\/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json\/Pavithree--explainLikeImFive to \/home\/slesage\/.cache\/huggingface\/datasets\/json\/Pavithree--explainLikeImFive-b68b6d8112cd8a51\/0.0.0\/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 305M\/305M [00:03<00:00, 98.6MB\/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 17.9M\/17.9M [00:00<00:00, 75.7MB\/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11.9M\/11.9M [00:00<00:00, 70.6MB\/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:05<00:00, 1.92s\/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3\/3 [00:00<00:00, 1948.42it\/s]\r\nFailed to read file '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error : Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/packaged_modules\/json\/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/json\/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/json\/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/json\/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"\/home\/slesage\/.pyenv\/versions\/datasets\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/packaged_modules\/json\/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/packaged_modules\/json\/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow\/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File 
\"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ","It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line","I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it.","Thank you! that fixes the issue."],"created_at":1649245035000,"updated_at":1649401987000,"closed_at":1649255995000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows\r\n\r\n**Link:** *https:\/\/huggingface.co\/datasets\/Pavithree\/explainLikeImFive*\r\n\r\n*This is the subset of original eli5 dataset https:\/\/huggingface.co\/datasets\/vblagoje\/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. However, the dataset preview for train split returns the below mentioned error:\r\nStatus code: 400\r\nException: ArrowInvalid\r\nMessage: Exceeded maximum rows\r\nWhen I try to load the same dataset it returns ArrowInvalid: Exceeded maximum rows error*\r\n\r\nAm I the one who added this dataset ? Yes \r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4107\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4106","id":1194393892,"node_id":"PR_kwDODunzps41uPpa","number":4106,"title":"Support huggingface_hub 0.5","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like GH actions is not able to resolve `huggingface_hub` 0.5.0, I'm investivating","_The documentation is not 
available anymore as the PR was closed or merged._","I'm glad to see changes in `huggingface_hub` are simplifying code here.","seems to supersede #4102, feel free to close mine :)","maybe just cherry-pick the docstring fix","I think I've found the issue:\r\n- https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/790","Good catch, `huggingface_hub` doesn't support python 3.6 anymore indeed, therefore we should keep support for 0.4.0. I'm reverting the requirement version bump for now.\r\n\r\nWe can update the requirement once we drop support for python 3.6 in `datasets`","@lhoestq, I've opened this PR on `huggingface_hub`: \r\n- https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/823\r\n\r\nIs there any strong reason why `huggingface_hub` no longer supports Python 3.6? ","I think `datasets` can drop support for 3.6 soon. But for now maybe let's keep support for 0.4.0, python 3.6 users are not affected by https:\/\/github.com\/huggingface\/datasets\/issues\/4105 anyway.\r\n\r\n`huggingface_hub` does not have to support 3.6 again just for the CI IMO","@lhoestq I commented on the PR, that IMO it is not a good practice to drop support for Python 3.6 without a previous deprecation cycle.","Re-added support for older versions. I ended up checking the `huggingface_hub` version to use the old, deprecated API for <0.5.0","I find it good practice to have all dependency version related code in a single file so that when you decide to remove support for an old version of a dependency it's easy to find and remove them, hence suggesting `utils\/_fixes.py` in https:\/\/github.com\/huggingface\/datasets\/issues\/4105#issuecomment-1090041204","good idea, thanks !","I used your suggestion @adrinjalali , I just replaced the try\/except with a check on the version of `huggingface_hub`"],"created_at":1649240125000,"updated_at":1649413723000,"closed_at":1649413343000,"author_association":"MEMBER","active_lock_reason":null,"body":"Following https:\/\/github.com\/huggingface\/datasets\/issues\/4105\r\n\r\n`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. 
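As a sketch of the compatibility approach the comments settled on (illustrative only, not the exact diff; it assumes `api = HfApi()` and the usual `dataset_name`\/`organization`\/`token`\/`private` variables are already defined):\r\n\r\n```python\r\nimport huggingface_hub\r\nfrom packaging import version\r\n\r\nif version.parse(huggingface_hub.__version__) < version.parse(\"0.5.0\"):\r\n    # old, deprecated signature\r\n    api.create_repo(name=dataset_name, organization=organization, token=token, private=private, repo_type=\"dataset\")\r\nelse:\r\n    # 0.5+ signature: repo_id replaces name\/organization\r\n    api.create_repo(repo_id=f\"{organization}\/{dataset_name}\", token=token, private=private, repo_type=\"dataset\")\r\n```\r\n\r\n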
This PR updates all the calls to HfApi to remove all the deprecations, and I set the `huggingface_hub` requirement to `>=0.5.0`<\/s>\r\n\r\ncc @adrinjalali @LysandreJik ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4106\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4106","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4106","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4106.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4106.patch","merged_at":1649413343000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4105","id":1194297119,"node_id":"I_kwDODunzps5HL4cf","number":4105,"title":"push to hub fails with huggingface-hub 0.5.0","user":{"login":"frascuchon","id":2518789,"node_id":"MDQ6VXNlcjI1MTg3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2518789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/frascuchon","html_url":"https:\/\/github.com\/frascuchon","followers_url":"https:\/\/api.github.com\/users\/frascuchon\/followers","following_url":"https:\/\/api.github.com\/users\/frascuchon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/frascuchon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/frascuchon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/frascuchon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/frascuchon\/orgs","repos_url":"https:\/\/api.github.com\/users\/frascuchon\/repos","events_url":"https:\/\/api.github.com\/users\/frascuchon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/frascuchon\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43\/src\/datasets\/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. 
In the meantime please use `huggingface_hub` 0.4.0","I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43\/src\/datasets\/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\napi.create_repo(\r\n    name=dataset_name,\r\n    token=token,\r\n    repo_type=\"dataset\",\r\n    organization=organization,\r\n    private=private,\r\n)\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org\/name'` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n    client,\r\n    name: str,\r\n    token: Optional[str] = None,\r\n    organization: Optional[str] = None,\r\n    private: Optional[bool] = None,\r\n    repo_type: Optional[str] = None,\r\n    exist_ok: Optional[bool] = False,\r\n    space_sdk: Optional[str] = None,\r\n) -> str:\r\n    try:\r\n        return client.create_repo(\r\n            repo_id=f\"{organization}\/{name}\",\r\n            token=token,\r\n            private=private,\r\n            repo_type=repo_type,\r\n            exist_ok=exist_ok,\r\n            space_sdk=space_sdk,\r\n        )\r\n    except TypeError:\r\n        return client.create_repo(\r\n            name=name,\r\n            organization=organization,\r\n            token=token,\r\n            private=private,\r\n            repo_type=repo_type,\r\n            exist_ok=exist_ok,\r\n            space_sdk=space_sdk,\r\n        )\r\n```\r\n\r\nin a `utils\/_fixes.py` kinda file and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.","PR with the hotfix on the `huggingface_hub` side: https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/822","We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)","`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)"],"created_at":1649235597000,"updated_at":1649860247000,"closed_at":1649860247000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n`ds.push_to_hub` is failing when updating a dataset in the form \"org_id\/repo_id\"\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"rubrix\/news_test\")\r\nds.push_to_hub(\"\/news_test\", token=\"\")\r\n```\r\n\r\n## Expected results\r\nThe dataset is successfully uploaded\r\n\r\n## Actual results\r\nA validation error is raised:\r\n\r\n```bash\r\nif repo_id and (name or organization):\r\n> raise ValueError(\r\n \"Only pass `repo_id` and leave deprecated `name` and \"\r\n \"`organization` to be None.\"\r\nE ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.18.1\r\n- `huggingface-hub`: 0.5\r\n- Platform: macOS\r\n- Python version: 3.8.12\r\n- PyArrow version: 6.0.0\r\n\r\ncc @adrinjalali 
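\r\n\r\n**Edit:** for reference, a sketch of the non-deprecated call shape on `huggingface_hub` 0.5 (illustrative only; `repo_id` replaces the deprecated `name`\/`organization` pair):\r\n```python\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\n# exist_ok avoids failing when the dataset repo already exists\r\napi.create_repo(repo_id=\"org_id\/repo_id\", repo_type=\"dataset\", exist_ok=True)\r\n```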
\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4105\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4104","id":1194072966,"node_id":"I_kwDODunzps5HLBuG","number":4104,"title":"Add time series data - stock market","user":{"login":"INF800","id":45640029,"node_id":"MDQ6VXNlcjQ1NjQwMDI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45640029?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/INF800","html_url":"https:\/\/github.com\/INF800","followers_url":"https:\/\/api.github.com\/users\/INF800\/followers","following_url":"https:\/\/api.github.com\/users\/INF800\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/INF800\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/INF800\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/INF800\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/INF800\/orgs","repos_url":"https:\/\/api.github.com\/users\/INF800\/repos","events_url":"https:\/\/api.github.com\/users\/INF800\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/INF800\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can I use instructions present in below link for time series dataset as well? \r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md ","cc'ing @kashif and @NielsRogge for visibility!","@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ","Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/f\/data\/raw\r\n2. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/t\/data\/raw\r\n3. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/rdfn\/data\/raw\r\n4. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/irbt\/data\/raw\r\n5. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/hll\/data\/raw\r\n6. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/infy\/data\/raw\r\n7. https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/reli\/data\/raw\r\n8. 
https:\/\/github.com\/INF800\/marktech\/tree\/raw-data\/hdbk\/data\/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, every day we will see a new file added at the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure they fit HF dataset standards. (P.S. I am very new to HF datasets)\r\n\r\nThe dataset above can be converted into a univariate regression \/ multivariate regression \/ sequence-to-sequence generation dataset, etc. So, do we have some kind of transformation module that will read the dataset as a generic type (`GenericTimeData`) and convert it to other possible datasets relating to a specific ML task? **By having this kind of transformation module, I only have to add data once** and can use the transformation module whenever necessary.\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ","thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format.","Referencing https:\/\/github.com\/qingsongedu\/time-series-transformers-review","@INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare\/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n\r\nIn any case, regarding your dataset: at the moment it seems from looking at the csv files that it is a mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data, so I would need your help in disambiguating the dataset you have collected, perhaps starting with just the numerical data... \r\n\r\nDo you think you can make a version with just numerical data?","> @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare\/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n> \r\n> In any case, regarding your dataset: at the moment it seems from looking at the csv files that it is a mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data, so I would need your help in disambiguating the dataset you have collected, perhaps starting with just the numerical data...\r\n> \r\n> Do you think you can make a version with just numerical data?\r\n\r\nWill share the numeric data and conversion script by the end of this week.\r\n\r\nI am on a business trip currently - it is on my desktop."],"created_at":1649224018000,"updated_at":1649668030000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Time Series Dataset\r\n- **Name:** 2min ticker data for stock market \r\n- **Description:** 8 stocks' data collected for 1 month post Ukraine-Russia war. 4 NSE stocks and 4 NASDAQ stocks. 
Along with technical indicators (additional features), as shown in the image below.\r\n- **Data:** Collected by myself from investing.com\r\n- **Motivation:** Test the applicability of transformer-based models on stock market \/ time series problems\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/45640029\/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4104\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4103","id":1193987104,"node_id":"PR_kwDODunzps41s3T4","number":4103,"title":"Add the `GSM8K` dataset","user":{"login":"jon-tow","id":41410219,"node_id":"MDQ6VXNlcjQxNDEwMjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41410219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jon-tow","html_url":"https:\/\/github.com\/jon-tow","followers_url":"https:\/\/api.github.com\/users\/jon-tow\/followers","following_url":"https:\/\/api.github.com\/users\/jon-tow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jon-tow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jon-tow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jon-tow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jon-tow\/orgs","repos_url":"https:\/\/api.github.com\/users\/jon-tow\/repos","events_url":"https:\/\/api.github.com\/users\/jon-tow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jon-tow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","The CI is failing because it's outdated, but the task tags are updated on `master`, merging :)"],"created_at":1649218072000,"updated_at":1649777908000,"closed_at":1649758876000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4103\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4103","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4103","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4103.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4103.patch","merged_at":1649758876000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4102","id":1193616722,"node_id":"PR_kwDODunzps41roGx","number":4102,"title":"[hub] Fix `api.create_repo` call?","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4102). All of your documentation changes will be reflected on that endpoint.","Closing in favor of https:\/\/github.com\/huggingface\/datasets\/pull\/4106"],"created_at":1649186512000,"updated_at":1649752906000,"closed_at":1649752906000,"author_association":"MEMBER","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4102\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4102","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4102","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4102.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4102.patch","merged_at":null},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4101","id":1193399204,"node_id":"I_kwDODunzps5HIdOk","number":4101,"title":"How can I download only the train and test split for full numbers using load_dataset()? 
","user":{"login":"Nakkhatra","id":64383902,"node_id":"MDQ6VXNlcjY0MzgzOTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/64383902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nakkhatra","html_url":"https:\/\/github.com\/Nakkhatra","followers_url":"https:\/\/api.github.com\/users\/Nakkhatra\/followers","following_url":"https:\/\/api.github.com\/users\/Nakkhatra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nakkhatra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nakkhatra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nakkhatra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nakkhatra\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nakkhatra\/repos","events_url":"https:\/\/api.github.com\/users\/Nakkhatra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nakkhatra\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https:\/\/huggingface.co\/datasets\/svhn\/blob\/main\/svhn.py`), remove [this code](https:\/\/huggingface.co\/datasets\/svhn\/blob\/main\/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path\/to\/your\/local\/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab."],"created_at":1649174415000,"updated_at":1649250541000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"body":"How can I download only the train and test split for full numbers using load_dataset()? \r\n\r\nI do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. 
Please help.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4101\/timeline","performed_via_github_app":null,"state_reason":null,"draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4100","id":1193393959,"node_id":"PR_kwDODunzps41q4ce","number":4100,"title":"Improve RedCaps dataset card","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","I find this preprocessing a bit too specific to add it as a method to `datasets` as it's only useful in the context of CV (and we support multiple modalities). However, I agree it would be great to move this code to another lib to avoid code duplication. 
Maybe we should create a package with preprocessing functions\/transforms for this purpose?"],"created_at":1649174234000,"updated_at":1649858934000,"closed_at":1649858546000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR modifies the RedCaps card to:\r\n* fix the formatting of the Point of Contact fields on the Hub\r\n* speed up the image fetching logic (aligns it with the [img2dataset](https:\/\/github.com\/rom1504\/img2dataset) tool) and make it more robust (return None if **any** exception is thrown)","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4100\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4100","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4100","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4100.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4100.patch","merged_at":1649858546000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4099","id":1193253768,"node_id":"I_kwDODunzps5HH5uI","number":4099,"title":"UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)","user":{"login":"andreybond","id":20210017,"node_id":"MDQ6VXNlcjIwMjEwMDE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20210017?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andreybond","html_url":"https:\/\/github.com\/andreybond","followers_url":"https:\/\/api.github.com\/users\/andreybond\/followers","following_url":"https:\/\/api.github.com\/users\/andreybond\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andreybond\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andreybond\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andreybond\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andreybond\/orgs","repos_url":"https:\/\/api.github.com\/users\/andreybond\/repos","events_url":"https:\/\/api.github.com\/users\/andreybond\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andreybond\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr\/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. 
Maybe you have changed your machine's default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```","I opened a PR in the original dataset loading script:\r\n- microsoft\/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https:\/\/huggingface.co\/datasets\/nielsr\/XFUN\/commit\/73ba5e026621e05fb756ae0f267eb49971f70ebd","import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned the master branch - your fix works! Thank you!"],"created_at":1649169758000,"updated_at":1649227064000,"closed_at":1649226954000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nError \"UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)\" is thrown when downloading the dataset.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset \r\ndatasets = load_dataset(\"nielsr\/XFUN\", \"xfun.ja\")\r\n```\r\n\r\n## Expected results\r\nDataset should be downloaded without exceptions\r\n\r\n## Actual results\r\nStack trace (for the second-time execution):\r\nDownloading and preparing dataset xfun\/xfun.ja to \/root\/.cache\/huggingface\/datasets\/nielsr___xfun\/xfun.ja\/0.0.0\/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...\r\nDownloading data files: 100%\r\n2\/2 [00:00<00:00, 88.48it\/s]\r\nExtracting data files: 100%\r\n2\/2 [00:00<00:00, 79.60it\/s]\r\n\r\nUnicodeDecodeErrorTraceback (most recent call last)\r\n in \r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 datasets = load_dataset(\"nielsr\/XFUN\", \"xfun.ja\")\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 )\r\n 605 \r\n--> 606 # By default, return all splits\r\n 607 if split is None:\r\n 608 split = {s: s for s in self.info.splits}\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 692 Args:\r\n 693 split: `datasets.Split` which subset of the data to read.\r\n--> 694 \r\n 695 Returns:\r\n 696 `Dataset`\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/tqdm\/notebook.py in __iter__(self)\r\n 252 if not self.disable:\r\n 253 self.display(check_delay=False)\r\n--> 254 \r\n 255 def __iter__(self):\r\n 256 try:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/tqdm\/std.py in __iter__(self)\r\n 1183 for obj in iterable:\r\n 1184 yield obj\r\n-> 1185 return\r\n 1186 \r\n 1187 mininterval = self.mininterval\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/nielsr--XFUN\/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477\/XFUN.py in _generate_examples(self, filepaths)\r\n 140 
logger.info(\"Generating examples from = %s\", filepath)\r\n 141 with open(filepath[0], \"r\") as f:\r\n--> 142 data = json.load(f)\r\n 143 \r\n 144 for doc in data[\"documents\"]:\r\n\r\n\/usr\/lib\/python3.6\/json\/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\r\n 294 \r\n 295 \"\"\"\r\n--> 296 return loads(fp.read(),\r\n 297 cls=cls, object_hook=object_hook,\r\n 298 parse_float=parse_float, parse_int=parse_int,\r\n\r\n\/usr\/lib\/python3.6\/encodings\/ascii.py in decode(self, input, final)\r\n 24 class IncrementalDecoder(codecs.IncrementalDecoder):\r\n 25 def decode(self, input, final=False):\r\n---> 26 return codecs.ascii_decode(input, self.errors)[0]\r\n 27 \r\n 28 class StreamWriter(Codec,codecs.StreamWriter):\r\n\r\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)\r\n\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 2.0.0 (but reproduced with many previous versions)\r\n- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU\/Linux ; Base docker image is : huggingface\/transformers-pytorch-cpu\r\n- Python version: 3.6.9\r\n- PyArrow version: 6.0.1\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4099\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4098","id":1193245522,"node_id":"PR_kwDODunzps41qXjo","number":4098,"title":"Proposing WikiSplit metric card","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go 
to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)"],"created_at":1649169394000,"updated_at":1649173717000,"closed_at":1649173348000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile:","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4098\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4098","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4098","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4098.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4098.patch","merged_at":1649173348000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4097","id":1193205751,"node_id":"PR_kwDODunzps41qPEu","number":4097,"title":"Updating FrugalScore metric card","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649167764000,"updated_at":1649171255000,"closed_at":1649170906000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"removing duplicate 
paragraph","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4097\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4097","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4097","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4097.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4097.patch","merged_at":1649170906000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4096","id":1193165229,"node_id":"I_kwDODunzps5HHkGt","number":4096,"title":"Add support for streaming Zarr stores for hosted datasets","user":{"login":"jacobbieker","id":7170359,"node_id":"MDQ6VXNlcjcxNzAzNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7170359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jacobbieker","html_url":"https:\/\/github.com\/jacobbieker","followers_url":"https:\/\/api.github.com\/users\/jacobbieker\/followers","following_url":"https:\/\/api.github.com\/users\/jacobbieker\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jacobbieker\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jacobbieker\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jacobbieker\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jacobbieker\/orgs","repos_url":"https:\/\/api.github.com\/users\/jacobbieker\/repos","events_url":"https:\/\/api.github.com\/users\/jacobbieker\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jacobbieker\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https:\/\/zarr.readthedocs.io\/en\/stable\/api\/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you.","Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot!","Hi @jacobbieker, does the Zarr ZipStore work for your use case?","Hi,\r\n\r\nYes, it seems to! I got it working for https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms thanks for the help! ","On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 
\ud83d\ude4f Zarr is a 100% open-source and community-driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.\r\n\r\nI think the solution of zipping the Zarr store is a reasonable way to balance the constraints of Git LFS with the structure of Zarr.\r\n\r\nIt would be amazing to get something on the [Hugging Face Datasets Docs](https:\/\/huggingface.co\/docs\/datasets\/index) about how to best work with Zarr. Let me know if there's a way I could help with that effort.","Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub \ud83d\ude80 !\r\n\r\n```python\r\nimport xarray as xr\r\nurl = \"https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms\/resolve\/main\/data\/2016_001.zarr.zip\"\r\nzip_url = 'zip:\/\/\/::' + url\r\nds = xr.open_dataset(zip_url, engine='zarr', chunks={})\r\n```\r\n\r\n[screenshot: repr of the lazily-opened xarray Dataset]\r\n","However, I wasn't able to get streaming working using the Datasets API:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"openclimatefix\/mrms\", streaming=True, split='train')\r\nitem = next(iter(ds))\r\n```\r\n\r\n<details>\r\n<summary>
\r\nFileNotFoundError traceback<\/summary>\r\n\r\n```\r\nNo config specified, defaulting to: mrms\/2021\r\nzip:\/\/::https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms\/resolve\/main\/data\/2016_001.zarr.zip\r\ndata\/2016_001.zarr.zip\r\nzip:\/\/2016_001.zarr.zip::https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms\/resolve\/main\/data\/2016_001.zarr.zip\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [1], in ()\r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"openclimatefix\/mrms\", streaming=True, split='train')\r\n----> 3 item = next(iter(ds))\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py:497, in IterableDataset.__iter__(self)\r\n 496 def __iter__(self):\r\n--> 497 for key, example in self._iter():\r\n 498 if self.features:\r\n 499 # we encode the example for ClassLabel feature types for example\r\n 500 encoded_example = self.features.encode_example(example)\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py:494, in IterableDataset._iter(self)\r\n 492 else:\r\n 493 ex_iterable = self._ex_iterable\r\n--> 494 yield from ex_iterable\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/datasets\/iterable_dataset.py:87, in ExamplesIterable.__iter__(self)\r\n 86 def __iter__(self):\r\n---> 87 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/openclimatefix--mrms\/2a6f697014d7eb3caf586ca137d47ca38785ae2fe36248611b021f8248b59936\/mrms.py:150, in MRMS._generate_examples(self, filepath, split)\r\n 147 filepath = \"[https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms\/resolve\/main\/data\/2016_001.zarr.zip](https:\/\/huggingface.co\/datasets\/openclimatefix\/mrms\/resolve\/main\/data\/2016_001.zarr.zip%3C\/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">\"\r\n 148 # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.\r\n 149 # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.\r\n--> 150 with zarr.storage.FSStore(fsspec.open(\"zip::\" + filepath, mode='r'), mode='r') as store:\r\n 151 data = xr.open_zarr(store)\r\n 152 for key, row in enumerate(data[\"time\"].values):\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/zarr\/storage.py:1120, in FSStore.__init__(self, url, normalize_keys, key_separator, mode, exceptions, dimension_separator, **storage_options)\r\n 1117 import fsspec\r\n 1118 self.normalize_keys = normalize_keys\r\n-> 1120 protocol, _ = fsspec.core.split_protocol(url)\r\n 1121 # set auto_mkdir to True for local file system\r\n 1122 if protocol in (None, \"file\") and not storage_options.get(\"auto_mkdir\"):\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/core.py:514, in split_protocol(urlpath)\r\n 512 def split_protocol(urlpath):\r\n 513 \"\"\"Return protocol, path pair\"\"\"\r\n--> 514 urlpath = stringify_path(urlpath)\r\n 515 if \":\/\/\" in urlpath:\r\n 516 protocol, path = urlpath.split(\":\/\/\", 1)\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/utils.py:315, in stringify_path(filepath)\r\n 313 return filepath\r\n 314 elif hasattr(filepath, \"__fspath__\"):\r\n--> 315 return filepath.__fspath__()\r\n 316 elif 
isinstance(filepath, pathlib.Path):\r\n 317 return str(filepath)\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/core.py:98, in OpenFile.__fspath__(self)\r\n 96 def __fspath__(self):\r\n 97 # may raise if cannot be resolved to local file\r\n---> 98 return self.open().__fspath__()\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/implementations\/zip.py:96, in ZipFileSystem._open(self, path, mode, block_size, autocommit, cache_options, **kwargs)\r\n 94 if mode != \"rb\":\r\n 95 raise NotImplementedError\r\n---> 96 info = self.info(path)\r\n 97 out = self.zip.open(path, \"r\")\r\n 98 out.size = info[\"size\"]\r\n\r\nFile \/opt\/miniconda3\/envs\/hugginface\/lib\/python3.9\/site-packages\/fsspec\/archive.py:42, in AbstractArchiveFileSystem.info(self, path, **kwargs)\r\n 40 return self.dir_cache[path + \"\/\"]\r\n 41 else:\r\n---> 42 raise FileNotFoundError(path)\r\n\r\nFileNotFoundError:\r\n```\r\n\r\n<\/details>\r\n\r\nIs this a bug? Or am I just doing it wrong...","I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get a proof of concept working originally. ","I've mostly finished rearranging the data now and uploading some more, so this works now:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset(\"openclimatefix\/mrms\", streaming=True, split=\"train\")\r\nitem = next(iter(ds))\r\nprint(item.keys())\r\nprint(item[\"timestamp\"])\r\n```\r\n\r\nThe MRMS data now goes most of 2016-2022, with quite a few gaps I'm working on filling in"],"created_at":1649165912000,"updated_at":1650873852000,"closed_at":1650528778000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nLots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. 
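The `FileNotFoundError` in the traceback above comes from handing `zarr.storage.FSStore` an `fsspec` `OpenFile` object where a URL string or mapping is expected. A sketch of one workaround, assuming the same illustrative `zarr.zip` URL as above, is to give xarray an `fsspec` mapper over the chained zip filesystem:

```python
import fsspec
import xarray as xr

# Build a key/value mapping over the remote ZIP's contents; zarr then reads
# individual chunk files through it, using HTTP range requests where supported.
url = "https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip"
mapper = fsspec.get_mapper(f"zip::{url}")
ds = xr.open_zarr(mapper, chunks={})
```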
Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, on the order of TBs or tens of TBs for a single dataset, it can be difficult for users to store the dataset locally. Just adding Zarr stores with HF git doesn't work well (see https:\/\/github.com\/huggingface\/datasets\/issues\/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading them as a single file, which seems to be working (see https:\/\/huggingface.co\/datasets\/openclimatefix\/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https:\/\/huggingface.co\/datasets\/openclimatefix\/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily. \r\n\r\n**Describe the solution you'd like**\r\nA way to upload Zarr stores for hosted datasets so that we can stream them with xarray and fsspec. \r\n\r\n**Describe alternatives you've considered**\r\nTarring each Zarr store individually and just extracting them in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for a lot of potential users.\r\nPre-preparing examples in a format like Parquet -> Downside: would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4096\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4095","id":1192573353,"node_id":"PR_kwDODunzps41oIFI","number":4095,"title":"fix typo in rename_column error 
message","user":{"login":"hunterlang","id":680821,"node_id":"MDQ6VXNlcjY4MDgyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/680821?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hunterlang","html_url":"https:\/\/github.com\/hunterlang","followers_url":"https:\/\/api.github.com\/users\/hunterlang\/followers","following_url":"https:\/\/api.github.com\/users\/hunterlang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hunterlang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hunterlang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hunterlang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hunterlang\/orgs","repos_url":"https:\/\/api.github.com\/users\/hunterlang\/repos","events_url":"https:\/\/api.github.com\/users\/hunterlang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hunterlang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The docs for this PR live [here](https:\/\/moon-ci-docs.huggingface.co\/docs\/datasets\/pr_4095). All of your documentation changes will be reflected on that endpoint."],"created_at":1649130956000,"updated_at":1649148886000,"closed_at":1649148353000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"I feel bad submitting such a tiny change as a PR but it confused me today \ud83d\ude04 ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4095\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4095","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4095","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4095.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4095.patch","merged_at":1649148353000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4094","id":1192534414,"node_id":"I_kwDODunzps5HFKGO","number":4094,"title":"Helo 
Mayfrends","user":{"login":"Budigming","id":102933353,"node_id":"U_kgDOBiKjaQ","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/102933353?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Budigming","html_url":"https:\/\/github.com\/Budigming","followers_url":"https:\/\/api.github.com\/users\/Budigming\/followers","following_url":"https:\/\/api.github.com\/users\/Budigming\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Budigming\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Budigming\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Budigming\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Budigming\/orgs","repos_url":"https:\/\/api.github.com\/users\/Budigming\/repos","events_url":"https:\/\/api.github.com\/users\/Budigming\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Budigming\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1649126577000,"updated_at":1649143002000,"closed_at":1649143002000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** *name of the dataset*\r\n- **Description:** *short description of the dataset (or link to social media or blog post)*\r\n- **Paper:** *link to the dataset paper if available*\r\n- **Data:** *link to the Github repository or current dataset location*\r\n- **Motivation:** *what are some good reasons to have this dataset*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4094\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4093","id":1192523161,"node_id":"I_kwDODunzps5HFHWZ","number":4093,"title":"elena-soare\/crawled-ecommerce: missing 
dataset","user":{"login":"seevaratnam","id":17519354,"node_id":"MDQ6VXNlcjE3NTE5MzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17519354?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/seevaratnam","html_url":"https:\/\/github.com\/seevaratnam","followers_url":"https:\/\/api.github.com\/users\/seevaratnam\/followers","following_url":"https:\/\/api.github.com\/users\/seevaratnam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/seevaratnam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/seevaratnam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/seevaratnam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/seevaratnam\/orgs","repos_url":"https:\/\/api.github.com\/users\/seevaratnam\/repos","events_url":"https:\/\/api.github.com\/users\/seevaratnam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/seevaratnam\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"assignees":[{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It's a bug! Thanks for reporting, I'm looking at it.","By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.","Fixed. 
See https:\/\/huggingface.co\/datasets\/elena-soare\/crawled-ecommerce\/viewer\/elena-soare--crawled-ecommerce\/train.\r\n\r\n[screenshot of the fixed dataset viewer]\r\n\r\nThanks for reporting!"],"created_at":1649125519000,"updated_at":1649756093000,"closed_at":1649756093000,"author_association":"NONE","active_lock_reason":null,"body":"elena-soare\/crawled-ecommerce\r\n\r\n**Link:** *link to the dataset viewer page*\r\n\r\n*short description of the issue*\r\n\r\nAm I the one who added this dataset ? Yes-No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4093\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4092","id":1192499903,"node_id":"PR_kwDODunzps41n40R","number":4092,"title":"Fix dataset `amazon_us_reviews` metadata - 4\/4\/2022","user":{"login":"trentonstrong","id":191985,"node_id":"MDQ6VXNlcjE5MTk4NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/191985?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/trentonstrong","html_url":"https:\/\/github.com\/trentonstrong","followers_url":"https:\/\/api.github.com\/users\/trentonstrong\/followers","following_url":"https:\/\/api.github.com\/users\/trentonstrong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/trentonstrong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/trentonstrong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/trentonstrong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/trentonstrong\/orgs","repos_url":"https:\/\/api.github.com\/users\/trentonstrong\/repos","events_url":"https:\/\/api.github.com\/users\/trentonstrong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/trentonstrong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","cc: @albertvillanova just FYI"],"created_at":1649122785000,"updated_at":1649421341000,"closed_at":1649420971000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Fixes #4048 by running `dataset-cli test` to reprocess data and regenerate metadata. 
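For reference, the metadata regeneration mentioned here is typically run with the `datasets` CLI as something like `datasets-cli test datasets/amazon_us_reviews --save_infos --all_configs` (the dataset path is an assumption); this rebuilds `dataset_infos.json` from the actual data files.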
Additionally I've updated the README to include up-to-date counts for the subsets.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4092\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4092","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4092","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4092.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4092.patch","merged_at":1649420970000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4091","id":1192023855,"node_id":"I_kwDODunzps5HDNcv","number":4091,"title":"Build a Dataset One Example at a Time Without Loading All Data Into Memory","user":{"login":"aravind-tonita","id":99340348,"node_id":"U_kgDOBevQPA","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99340348?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aravind-tonita","html_url":"https:\/\/github.com\/aravind-tonita","followers_url":"https:\/\/api.github.com\/users\/aravind-tonita\/followers","following_url":"https:\/\/api.github.com\/users\/aravind-tonita\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aravind-tonita\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aravind-tonita\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aravind-tonita\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aravind-tonita\/orgs","repos_url":"https:\/\/api.github.com\/users\/aravind-tonita\/repos","events_url":"https:\/\/api.github.com\/users\/aravind-tonita\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aravind-tonita\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! 
Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON\/CSV\/Parquet\/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path\/to\/save\/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"\/path\/to\/raw\/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks += 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks += 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(loaded_dsets)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ","This is really elegant, thank you @mariosasko! I will try this."],"created_at":1649089164000,"updated_at":1650465060000,"closed_at":1650465060000,"author_association":"NONE","active_lock_reason":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**\r\n\r\n**Describe the solution you'd like**\r\nI would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. 
If it helps, I will know the schema of my dataset beforehand.\r\n```\r\n\r\n# Initialize an empty Dataset, possibly from a known schema.\r\ndataset = Dataset()\r\n\r\n# Read in examples one by one using a custom data streamer.\r\nfor example_dict in custom_example_dict_streamer(\"\/path\/to\/raw\/data\"):\r\n\r\n # Add this example to the dataset but do not store it in memory.\r\n dataset.add_item(example_dict)\r\n\r\n# Save the final dataset to disk as an Arrow-backed dataset.\r\ndataset.save_to_disk(\"\/path\/to\/dataset\")\r\n\r\n...\r\n\r\n# I'd like to be able to later `load_from_disk` and use the loaded Dataset\r\n# just like any other memory-mapped pyarrow-backed HuggingFace dataset...\r\nloaded_dataset = Dataset.load_from_disk(\"\/path\/to\/dataset\")\r\nloaded_dataset.set_format(type=\"torch\", columns=[\"foo\", \"bar\", \"baz\"])\r\ndataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)\r\n...\r\n\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nI initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV\/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.\r\n\r\nDo you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? 
Thank you very much in advance!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4091\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4090","id":1191956734,"node_id":"PR_kwDODunzps41mEs5","number":4090,"title":"Avoid writing empty license files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649085817000,"updated_at":1649335605000,"closed_at":1649335243000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR avoids the creation of empty `LICENSE` files.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4090\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4090","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4090","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4090.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4090.patch","merged_at":1649335243000},"is_pull_request":true} 
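For completeness, here is a minimal sketch of the loading-script option from the comment on issue 4091 above. Names such as `custom_example_dict_streamer`, `my_reader`, the feature columns, and the data path are hypothetical placeholders; `load_dataset` writes the yielded examples to an Arrow file incrementally, so they are never all held in memory:

```python
# my_dataset.py -- a minimal loading-script sketch, not the library's official answer
import datasets

from my_reader import custom_example_dict_streamer  # hypothetical custom reader


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        # The schema is known beforehand, so it can be declared explicitly.
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"col1": datasets.Value("string"), "col2": datasets.Value("float32")}
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": "/path/to/raw/data"},
            )
        ]

    def _generate_examples(self, data_dir):
        # Yield (key, example) pairs one at a time.
        for idx, example_dict in enumerate(custom_example_dict_streamer(data_dir)):
            yield idx, example_dict
```

Loading it with `datasets.load_dataset("path/to/my_dataset.py")` then yields an Arrow-backed, memory-mapped dataset like any other.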
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4089","id":1191915196,"node_id":"PR_kwDODunzps41l7yd","number":4089,"title":"Create metric card for Frugal Score","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649084029000,"updated_at":1649168086000,"closed_at":1649167610000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing metric card for Frugal Score.\r\n\r\n@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know!","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4089\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4089","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4089","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4089.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4089.patch","merged_at":1649167610000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4088","id":1191901172,"node_id":"PR_kwDODunzps41l4yE","number":4088,"title":"Remove unused legacy Beam 
utils","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649083431000,"updated_at":1649172207000,"closed_at":1649171861000,"author_association":"MEMBER","active_lock_reason":null,"body":"This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0: \r\n- Patch PR: https:\/\/github.com\/apache\/beam\/pull\/11699\r\n- Issue: https:\/\/issues.apache.org\/jira\/browse\/BEAM-10022\r\n\r\nIn relation with:\r\n- #204","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4088\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4088","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4088","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4088.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4088.patch","merged_at":1649171861000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4087","id":1191819805,"node_id":"PR_kwDODunzps41lnfO","number":4087,"title":"Fix BeamWriter output Parquet 
file","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1649080010000,"updated_at":1649170840000,"closed_at":1649170488000,"author_association":"MEMBER","active_lock_reason":null,"body":"Since now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.\r\n\r\nThis PR:\r\n- writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in a smaller output file size.\r\n- fixes `parquet_to_arrow` function","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087\/reactions","total_count":1,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":1,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4087\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4087","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4087","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4087.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4087.patch","merged_at":1649170488000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4086","id":1191373374,"node_id":"I_kwDODunzps5HAuo-","number":4086,"title":"Dataset viewer issue for 
McGill-NLP\/feedbackQA","user":{"login":"cslizc","id":54827718,"node_id":"MDQ6VXNlcjU0ODI3NzE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54827718?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cslizc","html_url":"https:\/\/github.com\/cslizc","followers_url":"https:\/\/api.github.com\/users\/cslizc\/followers","following_url":"https:\/\/api.github.com\/users\/cslizc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cslizc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cslizc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cslizc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cslizc\/orgs","repos_url":"https:\/\/api.github.com\/users\/cslizc\/repos","events_url":"https:\/\/api.github.com\/users\/cslizc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cslizc\/received_events","type":"User","site_admin":false},"labels":[{"id":3470211881,"node_id":"LA_kwDODunzps7O1zsp","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset-viewer","name":"dataset-viewer","color":"E5583E","default":false,"description":"Related to the dataset viewer on huggingface.co"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is 
working now.","thank you so much"],"created_at":1649057240000,"updated_at":1649111393000,"closed_at":1649059305000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*McGill-NLP\/feedbackQA*'\r\n\r\n**Link:** *[link to the dataset viewer page](https:\/\/huggingface.co\/datasets\/McGill-NLP\/feedbackQA)*\r\n\r\n*short description of the issue*\r\nThe dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:\r\n\r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: Not found. Maybe the cache is missing, or maybe the dataset does not exist.\r\n```\r\n\r\nAm I the one who added this dataset ? Yes\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4086\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4085","id":1190621345,"node_id":"I_kwDODunzps5G93Ch","number":4085,"title":"datasets.set_progress_bar_enabled(False) not working in datasets v2","user":{"login":"virilo","id":3381112,"node_id":"MDQ6VXNlcjMzODExMTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3381112?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/virilo","html_url":"https:\/\/github.com\/virilo","followers_url":"https:\/\/api.github.com\/users\/virilo\/followers","following_url":"https:\/\/api.github.com\/users\/virilo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/virilo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/virilo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/virilo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/virilo\/orgs","repos_url":"https:\/\/api.github.com\/users\/virilo\/repos","events_url":"https:\/\/api.github.com\/users\/virilo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/virilo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted","Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/logging_methods)","One important thing for beginner like me is: from datasets.utils.logging import disable_progress_bar\r\nDo not forget the 'utils' or you will waste a long time like me...."],"created_at":1648903210000,"updated_at":1663381083000,"closed_at":1649054674000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\n\r\ndatasets.set_progress_bar_enabled(False) not working in datasets v2\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndatasets.set_progress_bar_enabled(False)\r\n```\r\n\r\n## Expected results\r\ndatasets not using any progress bar\r\n\r\n## Actual results\r\n\r\nAttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled\r\n\r\n## Environment info\r\n\r\ndatasets version 
2\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4085\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4084","id":1190060415,"node_id":"I_kwDODunzps5G7uF_","number":4084,"title":"Errors in `Train with Datasets` Tensorflow code section on Huggingface.co","user":{"login":"blackhat-coder","id":57095771,"node_id":"MDQ6VXNlcjU3MDk1Nzcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57095771?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/blackhat-coder","html_url":"https:\/\/github.com\/blackhat-coder","followers_url":"https:\/\/api.github.com\/users\/blackhat-coder\/followers","following_url":"https:\/\/api.github.com\/users\/blackhat-coder\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/blackhat-coder\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/blackhat-coder\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/blackhat-coder\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/blackhat-coder\/orgs","repos_url":"https:\/\/api.github.com\/users\/blackhat-coder\/repos","events_url":"https:\/\/api.github.com\/users\/blackhat-coder\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/blackhat-coder\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface\/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```"],"created_at":1648832567000,"updated_at":1649057077000,"closed_at":1649056891000,"author_association":"NONE","active_lock_reason":null,"body":"## Describe the bug\r\nHi\r\n### Error 1\r\nRunning the Tensforlow code on [Huggingface](https:\/\/huggingface.co\/docs\/datasets\/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' \r\n### Error 2\r\n`DataCollatorWithPadding` isn't imported\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport tensorflow as tf\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\r\ndataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, 
padding='max_length'), batched=True)\r\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"tf\")\r\ntrain_dataset = dataset[\"train\"].to_tf_dataset(\r\n columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],\r\n shuffle=True,\r\n batch_size=16,\r\n collate_fn=data_collator,\r\n)\r\n```\r\nThis is the same code on Huggingface.co\r\n\r\n## Actual results\r\nTypeError: __init__() got an unexpected keyword argument 'return_tensors'\r\n\r\n## Environment info\r\n- `datasets` version: 2.0.0\r\n- Platform: Windows-10-10.0.19044-SP0\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.0\r\n- Pandas version: 1.4.1\r\n> ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4084\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4083","id":1190025878,"node_id":"PR_kwDODunzps41gEbu","number":4083,"title":"Add SacreBLEU Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or 
merged._"],"created_at":1648830296000,"updated_at":1649796300000,"closed_at":1649795920000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4083\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4083","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4083","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4083.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4083.patch","merged_at":1649795920000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4082","id":1189965845,"node_id":"PR_kwDODunzps41f3fb","number":4082,"title":"Add chrF(++) Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648827132000,"updated_at":1649796235000,"closed_at":1649795886000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":null,"reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4082\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4082","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4082","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4082.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4082.patch","merged_at":1649795886000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4081","id":1189916472,"node_id":"PR_kwDODunzps41fsxW","number":4081,"title":"Close parquet writer properly in `push_to_hub`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._","@lhoestq \/ @albertvillanova \/ @mariosasko \r\nI am facing the same scenario. Let me explain the situation point. I have a glue ETL job\r\n\r\n1--> My files are in parquet format and stored in AWS s3.\r\n2--> I am iterating a loop for a data set where the same file name can occur with diffrent other data.\r\n3--> I read the parquet and saved it in a pandas data frame.\r\n4--> Done some operation on that data frame\r\n5--> upload the updated data frame into the S3 parquet file. 
Below is the code snippet I am using to save the updated data frame in Parquet format and upload it to S3:\r\n```python\r\nheader_name_column_list = dict(data_frame)\r\nheader_list = []\r\nfor col_id, col_type in header_name_column_list.items():\r\n    header_list.append(pyarrow.field(col_id, pyarrow.string()))\r\ntable_schema = pyarrow.schema(header_list)\r\ntable = pyarrow.Table.from_pandas(data_frame, schema=table_schema, preserve_index=False)\r\nwriter = parquet.ParquetWriter(b_buffer, table.schema)\r\nwriter.write_table(table)\r\nwriter.close()\r\nb_buffer.seek(0)\r\n.....\r\n....\r\nself.s3_client.upload_fileobj(\r\n    b_buffer,\r\n    self.bucket,\r\n    file_key,\r\n    ExtraArgs=extra_args)\r\n```\r\n\r\nBut when I execute the Glue ETL job, it works properly the first time, but in the next iteration, when I try to open the same file, I get this error:\r\n\r\nINFO:Iot-dsip-de-duplication-job:Dataframe uploaded: s3:\/\/abc\/2022\/07\/12\/file1_ft_20220714122108.3065_12345.parquet INFO:Iot-dsip-de-duplication-job:Sleep for 60 sec\r\nINFO:Iot-dsip-de-duplication-job:start after sleep\r\n.......................\r\n..........................\r\n..........................\r\nERROR:Iot-dsip-de-duplication-job:Failed to read data from parquet file s3:\/\/abc\/2022\/07\/12\/file1_ft_20220714122108.3065_12345.parquet, error is : Invalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\nINFO:Iot-dsip-de-duplication-job:Empty dataframe found\r\n\r\nAny clue will be really helpful. I am stuck with this problem."],"created_at":1648825130000,"updated_at":1657826526000,"closed_at":1648829779000,"author_association":"MEMBER","active_lock_reason":null,"body":"We don\u2019t call writer.close(), which causes https:\/\/github.com\/huggingface\/datasets\/issues\/4077. 
It can happen that we upload the file before the writer is garbage collected and writes the footer.\r\n\r\nI fixed this by explicitly closing the parquet writer.\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/4077.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4081\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4081","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4081","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4081.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4081.patch","merged_at":1648829779000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4080","id":1189667296,"node_id":"I_kwDODunzps5G6OHg","number":4080,"title":"NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https:\/\/github.com\/huggingface\/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. 
\r\n\r\nDuplicate of:\r\n- #4031"],"created_at":1648812868000,"updated_at":1648821550000,"closed_at":1648821550000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Steps to reproduce the bug\r\n```python\r\ndatasets.load_dataset(\"conll2012_ontonotesv5\", \"english_v12\")\r\n```\r\n\r\n## Actual results\r\n```\r\nDownloading builder script: 32.2kB [00:00, 9.72MB\/s]\r\nDownloading metadata: 20.0kB [00:00, 10.4MB\/s]\r\nDownloading and preparing dataset conll2012_ontonotesv5\/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ...\r\nTraceback (most recent call last):\r\n File \"\/home\/yisiang\/lgtn\/conll2012\/run.py\", line 86, in <module>\r\n train()\r\n File \"\/home\/yisiang\/lgtn\/conll2012\/run.py\", line 65, in train\r\n trainer.fit(model, datamodule=dm)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/trainer\/trainer.py\", line 740, in fit\r\n self._call_and_handle_interrupt(\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/trainer\/trainer.py\", line 685, in _call_and_handle_interrupt\r\n return trainer_fn(*args, **kwargs)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/trainer\/trainer.py\", line 777, in _fit_impl\r\n self._run(model, ckpt_path=ckpt_path)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/trainer\/trainer.py\", line 1131, in _run\r\n self._data_connector.prepare_data()\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/trainer\/connectors\/data_connector.py\", line 154, in prepare_data\r\n self.trainer.datamodule.prepare_data()\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/pytorch_lightning\/core\/datamodule.py\", line 474, in wrapped_fn\r\n fn(*args, **kwargs)\r\n File \"\/home\/yisiang\/lgtn\/_abstract_task\/data.py\", line 43, in prepare_data\r\n raw_dsets = datasets.load_dataset(**load_dataset_kwargs)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ai\/lib\/python3.9\/site-packages\/datasets\/utils\/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com\/zmycy7t9h9-1.zip']\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 
2.0.0","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4080\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4079","id":1189521576,"node_id":"PR_kwDODunzps41eYRC","number":4079,"title":"Increase max retries for GitHub datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648805643000,"updated_at":1648827160000,"closed_at":1648826831000,"author_association":"MEMBER","active_lock_reason":null,"body":"As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:\r\n- #4063\r\n\r\nNote that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:\r\n- #4059\r\n\r\nFix #2048\r\n\r\nRelated to:\r\n- #4051 \r\n- #3210\r\n- #2787 \r\n- #2075\r\n- #2036\r\n\r\nCC: @lhoestq 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4079\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4079","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4079","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4079.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4079.patch","merged_at":1648826830000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4078","id":1189513572,"node_id":"PR_kwDODunzps41eWnl","number":4078,"title":"Fix GithubMetricModuleFactory instantiation with None download_config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648805218000,"updated_at":1648824291000,"closed_at":1648823967000,"author_association":"MEMBER","active_lock_reason":null,"body":"Recent PR:\r\n- #4063\r\n\r\nintroduced a potential bug if `GithubMetricModuleFactory` is instantiated with None `download_config`.\r\n\r\nThis PR add instantiation tests and fix that potential issue.\r\n\r\nCC: @lhoestq 
","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4078\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4078","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4078","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4078.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4078.patch","merged_at":1648823967000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4077","id":1189467585,"node_id":"I_kwDODunzps5G5dXB","number":4077,"title":"ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1648802953000,"updated_at":1648829779000,"closed_at":1648829779000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"## Describe the bug\r\n\r\nWhen uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine.\r\n\r\nBasically, I do:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\", data_files=\"path_to_my_files\")\r\n\r\ndataset.push_to_hub(\"dataset_name\") # works fine, no errors\r\n\r\nreloaded_dataset = load_dataset(\"dataset_name\")\r\n```\r\n\r\nand it returns:\r\n\r\n```\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Parquet magic bytes not found in footer. 
Either the file is corrupted or this is not a parquet file.\r\n```\r\n\r\nI created a Colab notebook to reproduce my error: https:\/\/colab.research.google.com\/drive\/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing\r\n\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4077\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4076","id":1188478867,"node_id":"PR_kwDODunzps41a1n2","number":4076,"title":"Add ROUGE Metric Card","user":{"login":"emibaylor","id":27527747,"node_id":"MDQ6VXNlcjI3NTI3NzQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27527747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emibaylor","html_url":"https:\/\/github.com\/emibaylor","followers_url":"https:\/\/api.github.com\/users\/emibaylor\/followers","following_url":"https:\/\/api.github.com\/users\/emibaylor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emibaylor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emibaylor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emibaylor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emibaylor\/orgs","repos_url":"https:\/\/api.github.com\/users\/emibaylor\/repos","events_url":"https:\/\/api.github.com\/users\/emibaylor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emibaylor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648751674000,"updated_at":1649796225000,"closed_at":1649795858000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Add ROUGE metric card.\r\n\r\nI've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! 
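\r\n\r\nFor reference, a minimal usage sketch of the metric this card documents, using the `datasets` metric API (this assumes the `rouge_score` backend package is installed):\r\n\r\n```\r\nfrom datasets import load_metric\r\n\r\nrouge = load_metric(\"rouge\")\r\nresults = rouge.compute(\r\n    predictions=[\"the cat sat on the mat\"],\r\n    references=[\"the cat was on the mat\"],\r\n)\r\n# results maps rouge1, rouge2, rougeL and rougeLsum to aggregate\r\n# (low\/mid\/high) precision\/recall\/f-measure scores\r\nprint(results[\"rouge1\"])\r\n```\r\n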
:) ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4076\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4076","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4076","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4076.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4076.patch","merged_at":1649795858000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4075","id":1188462162,"node_id":"I_kwDODunzps5G1n5S","number":4075,"title":"Add CCAgT dataset","user":{"login":"johnnv1","id":20444345,"node_id":"MDQ6VXNlcjIwNDQ0MzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20444345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnnv1","html_url":"https:\/\/github.com\/johnnv1","followers_url":"https:\/\/api.github.com\/users\/johnnv1\/followers","following_url":"https:\/\/api.github.com\/users\/johnnv1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnnv1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnnv1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnnv1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnnv1\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnnv1\/repos","events_url":"https:\/\/api.github.com\/users\/johnnv1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnnv1\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":3608941089,"node_id":"LA_kwDODunzps7XHBIh","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/vision","name":"vision","color":"bfdadc","default":false,"description":"Vision 
datasets"}],"state":"closed","locked":false,"assignee":{"login":"johnnv1","id":20444345,"node_id":"MDQ6VXNlcjIwNDQ0MzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20444345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnnv1","html_url":"https:\/\/github.com\/johnnv1","followers_url":"https:\/\/api.github.com\/users\/johnnv1\/followers","following_url":"https:\/\/api.github.com\/users\/johnnv1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnnv1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnnv1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnnv1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnnv1\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnnv1\/repos","events_url":"https:\/\/api.github.com\/users\/johnnv1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnnv1\/received_events","type":"User","site_admin":false},"assignees":[{"login":"johnnv1","id":20444345,"node_id":"MDQ6VXNlcjIwNDQ0MzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20444345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/johnnv1","html_url":"https:\/\/github.com\/johnnv1","followers_url":"https:\/\/api.github.com\/users\/johnnv1\/followers","following_url":"https:\/\/api.github.com\/users\/johnnv1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/johnnv1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/johnnv1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/johnnv1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/johnnv1\/orgs","repos_url":"https:\/\/api.github.com\/users\/johnnv1\/repos","events_url":"https:\/\/api.github.com\/users\/johnnv1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/johnnv1\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.","HI, I was waiting to come out in the second version to do the implementation.\r\n\r\n- Paper: https:\/\/dx.doi.org\/10.2139\/ssrn.4126881\r\n- Data: [Data mendelay](http:\/\/doi.org\/10.17632\/wg4bpm33hj.2)","Nice ! \ud83d\ude80 ","The link of CCAgT dataset is: https:\/\/huggingface.co\/datasets\/lapix\/CCAgT"],"created_at":1648750828000,"updated_at":1657134222000,"closed_at":1657134222000,"author_association":"NONE","active_lock_reason":null,"body":"## Adding a Dataset\r\n- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique\r\n- **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111\u03bcm\u00d70.111\u03bcm) from three different slides, having at least one nucleus per image. 
These images are from fields belonging to a sample cervical slide, stained with silver by a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR).\r\n- **Paper:** https:\/\/doi.org\/10.1109\/cbms49503.2020.00110\r\n- **Data:** https:\/\/arquivos.ufsc.br\/d\/373be2177a33426a9e6c\/ or https:\/\/drive.google.com\/drive\/u\/4\/folders\/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0\r\n- **Motivation:** This is a unique dataset (because of the stain) with real data for a major health problem, cervical cancer.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nHi, this is a public version of the dataset that I have been working on; we will have another version of this dataset soon. But until the new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible. ","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4075\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4074","id":1188449142,"node_id":"I_kwDODunzps5G1kt2","number":4074,"title":"Error in google\/xtreme_s dataset card","user":{"login":"wranai","id":1048544,"node_id":"MDQ6VXNlcjEwNDg1NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1048544?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wranai","html_url":"https:\/\/github.com\/wranai","followers_url":"https:\/\/api.github.com\/users\/wranai\/followers","following_url":"https:\/\/api.github.com\/users\/wranai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wranai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wranai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wranai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wranai\/orgs","repos_url":"https:\/\/api.github.com\/users\/wranai\/repos","events_url":"https:\/\/api.github.com\/users\/wranai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wranai\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi 
@wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken from the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https:\/\/arxiv.org\/abs\/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors to suggest a correction.\r\n\r\nJust note that the Hungarian language (contrary to its geographically surrounding neighbor languages) belongs to the Uralic language family, together with (among others) Finnish, Estonian, and some other languages in the northern regions of Scandinavia..."],"created_at":1648750065000,"updated_at":1648800776000,"closed_at":1648800776000,"author_association":"NONE","active_lock_reason":null,"body":"**Link:** https:\/\/huggingface.co\/datasets\/google\/xtreme_s\r\n\r\nNot a big deal, but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the west of Hungary, by the way).\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4074\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4073","id":1188364711,"node_id":"PR_kwDODunzps41adPA","number":4073,"title":"Create a metric card for Competition MATH","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648745339000,"updated_at":1648839759000,"closed_at":1648839433000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing metric card for Competition 
MATH","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4073\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4073","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4073","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4073.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4073.patch","merged_at":1648839432000},"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4072","id":1188266410,"node_id":"PR_kwDODunzps41aIUG","number":4072,"title":"Add installation instructions to image_process doc","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648740577000,"updated_at":1648746346000,"closed_at":1648746019000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"This PR adds the installation instructions for the Image feature to the image process doc.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4072\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4072","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4072","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4072.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4072.patch","merged_at":1648746019000},"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4071","id":1187587683,"node_id":"I_kwDODunzps5GySZj","number":4071,"title":"Loading issue for xuyeliu\/notebookCDG dataset","user":{"login":"Jun-jie-Huang","id":46160972,"node_id":"MDQ6VXNlcjQ2MTYwOTcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46160972?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jun-jie-Huang","html_url":"https:\/\/github.com\/Jun-jie-Huang","followers_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/followers","following_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/repos","events_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jun-jie-Huang\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported formats (listed in the error message above: CSV, JSON, Parquet, TXT,...)\r\n\r\nYou can find the details in our docs: \r\n- How to share a dataset: https:\/\/huggingface.co\/docs\/datasets\/share\r\n- How to create a dataset loading script: https:\/\/huggingface.co\/docs\/datasets\/dataset_script\r\n\r\nFeel free to re-open this issue and ping us if you need further assistance."],"created_at":1648708589000,"updated_at":1648714621000,"closed_at":1648714576000,"author_association":"NONE","active_lock_reason":null,"body":"## Dataset viewer issue for '*xuyeliu\/notebookCDG*'\r\n\r\n**Link:** *[link to the dataset viewer page](https:\/\/huggingface.co\/datasets\/xuyeliu\/notebookCDG)*\r\n\r\n*Couldn't load the xuyeliu\/notebookCDG with provided scripts: *\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"xuyeliu\/notebookCDG\/dataset_notebook.pkl\")\r\n```\r\nI get an error message as follows:\r\n\r\nFileNotFoundError: Couldn't find a dataset script at \/home\/code_documentation\/code\/xuyeliu\/notebookCDG\/notebookCDG.py or any data file in the same directory. 
Couldn't find 'xuyeliu\/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu\/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n\r\n\r\n\r\nAm I the one who added this dataset ? No\r\n","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4071\/timeline","performed_via_github_app":null,"state_reason":"completed","draft":null,"pull_request":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4070","id":1186810205,"node_id":"PR_kwDODunzps41VMYq","number":4070,"title":"Create metric card for seqeval","user":{"login":"sashavor","id":14205986,"node_id":"MDQ6VXNlcjE0MjA1OTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14205986?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sashavor","html_url":"https:\/\/github.com\/sashavor","followers_url":"https:\/\/api.github.com\/users\/sashavor\/followers","following_url":"https:\/\/api.github.com\/users\/sashavor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sashavor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sashavor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sashavor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sashavor\/orgs","repos_url":"https:\/\/api.github.com\/users\/sashavor\/repos","events_url":"https:\/\/api.github.com\/users\/sashavor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sashavor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["_The documentation is not available anymore as the PR was closed or merged._"],"created_at":1648663681000,"updated_at":1648839778000,"closed_at":1648839445000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"body":"Proposing metric card for seqeval. 
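\r\n\r\nFor reference, a minimal usage sketch of the metric being documented (this assumes the `seqeval` backend package is installed):\r\n\r\n```\r\nfrom datasets import load_metric\r\n\r\nseqeval = load_metric(\"seqeval\")\r\npredictions = [[\"O\", \"B-PER\", \"I-PER\", \"O\"]]\r\nreferences = [[\"O\", \"B-PER\", \"I-PER\", \"O\"]]\r\nresults = seqeval.compute(predictions=predictions, references=references)\r\n# results includes per-entity-type precision\/recall\/f1\/number plus\r\n# overall_precision, overall_recall, overall_f1 and overall_accuracy\r\n```\r\n\r\n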
Not sure which values to report for Popular papers though.","reactions":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070\/reactions","total_count":0,"+1":0,"-1":0,"laugh":0,"hooray":0,"confused":0,"heart":0,"rocket":0,"eyes":0},"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4070\/timeline","performed_via_github_app":null,"state_reason":null,"draft":false,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/4070","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4070","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4070.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/4070.patch","merged_at":1648839445000},"is_pull_request":true}