url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
2.87B
| node_id
stringlengths 18
32
| number
int64 1
7.42k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| comments_nb
int64 0
70
| created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 4
values | sub_issues_summary
dict | body
stringlengths 0
228k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 67
70
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes | comments
sequencelengths 0
30
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7418/comments | https://api.github.com/repos/huggingface/datasets/issues/7418/events | https://github.com/huggingface/datasets/issues/7418 | 2,868,701,471 | I_kwDODunzps6q_Okf | 7,418 | pyarrow.lib.arrowinvalid: cannot mix list and non-list, non-null values with map function | {
"login": "alexxchen",
"id": 15705569,
"node_id": "MDQ6VXNlcjE1NzA1NTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/15705569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexxchen",
"html_url": "https://github.com/alexxchen",
"followers_url": "https://api.github.com/users/alexxchen/followers",
"following_url": "https://api.github.com/users/alexxchen/following{/other_user}",
"gists_url": "https://api.github.com/users/alexxchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexxchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexxchen/subscriptions",
"organizations_url": "https://api.github.com/users/alexxchen/orgs",
"repos_url": "https://api.github.com/users/alexxchen/repos",
"events_url": "https://api.github.com/users/alexxchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexxchen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-02-21T10:58:06 | 2025-02-21T11:02:22 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Encounter pyarrow.lib.arrowinvalid error with map function in some example when loading the dataset
### Steps to reproduce the bug
```
from datasets import load_dataset
from PIL import Image, PngImagePlugin
dataset = load_dataset("leonardPKU/GEOQA_R1V_Train_8K")
system_prompt="You are a helpful AI Assistant"
def make_conversation(example):
prompt = []
prompt.append({"role": "system", "content": system_prompt})
prompt.append(
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": example["problem"]},
]
}
)
return {"prompt": prompt}
def check_data_types(example):
for key, value in example.items():
if key == 'image':
if not isinstance(value, PngImagePlugin.PngImageFile):
print(value)
if key == "problem" or key == "solution":
if not isinstance(value, str):
print(value)
return example
dataset = dataset.map(check_data_types)
dataset = dataset.map(make_conversation)
```
### Expected behavior
Successfully process the dataset with map
### Environment info
datasets==3.3.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7418/timeline | null | null | null | false | [
"@lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/7417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7417/comments | https://api.github.com/repos/huggingface/datasets/issues/7417/events | https://github.com/huggingface/datasets/pull/7417 | 2,866,868,922 | PR_kwDODunzps6L78k3 | 7,417 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-20T17:45:29 | 2025-02-20T17:47:50 | 2025-02-20T17:45:36 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7417",
"html_url": "https://github.com/huggingface/datasets/pull/7417",
"diff_url": "https://github.com/huggingface/datasets/pull/7417.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7417.patch",
"merged_at": "2025-02-20T17:45:36"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7417). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7416/comments | https://api.github.com/repos/huggingface/datasets/issues/7416/events | https://github.com/huggingface/datasets/pull/7416 | 2,866,862,143 | PR_kwDODunzps6L77G2 | 7,416 | Release: 3.3.2 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-20T17:42:11 | 2025-02-20T17:44:35 | 2025-02-20T17:43:28 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7416/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7416",
"html_url": "https://github.com/huggingface/datasets/pull/7416",
"diff_url": "https://github.com/huggingface/datasets/pull/7416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7416.patch",
"merged_at": "2025-02-20T17:43:28"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7416). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7415/comments | https://api.github.com/repos/huggingface/datasets/issues/7415/events | https://github.com/huggingface/datasets/issues/7415 | 2,865,774,546 | I_kwDODunzps6q0D_S | 7,415 | Shard Dataset at specific indices | {
"login": "nikonikolov",
"id": 11044035,
"node_id": "MDQ6VXNlcjExMDQ0MDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11044035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikonikolov",
"html_url": "https://github.com/nikonikolov",
"followers_url": "https://api.github.com/users/nikonikolov/followers",
"following_url": "https://api.github.com/users/nikonikolov/following{/other_user}",
"gists_url": "https://api.github.com/users/nikonikolov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikonikolov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikonikolov/subscriptions",
"organizations_url": "https://api.github.com/users/nikonikolov/orgs",
"repos_url": "https://api.github.com/users/nikonikolov/repos",
"events_url": "https://api.github.com/users/nikonikolov/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikonikolov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 2 | 2025-02-20T10:43:10 | 2025-02-20T15:24:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks?
I guess an alternative to this would be, given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards? | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7415/timeline | null | null | null | false | [
"Hi ! if it's an option I'd suggest to have one sequence per row instead.\n\nOtherwise you'd have to make your own save/load mechanism",
"Saving one sequence per row is very difficult and heavy and makes all the optimizations pointless. How would a custom save/load mechanism look like?"
] |
https://api.github.com/repos/huggingface/datasets/issues/7414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7414/comments | https://api.github.com/repos/huggingface/datasets/issues/7414/events | https://github.com/huggingface/datasets/pull/7414 | 2,863,798,756 | PR_kwDODunzps6LxjsH | 7,414 | Gracefully cancel async tasks | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-19T16:10:58 | 2025-02-20T14:12:26 | 2025-02-20T14:12:23 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7414",
"html_url": "https://github.com/huggingface/datasets/pull/7414",
"diff_url": "https://github.com/huggingface/datasets/pull/7414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7414.patch",
"merged_at": "2025-02-20T14:12:23"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7414). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7413/comments | https://api.github.com/repos/huggingface/datasets/issues/7413/events | https://github.com/huggingface/datasets/issues/7413 | 2,860,947,582 | I_kwDODunzps6qhph- | 7,413 | Documentation on multiple media files of the same type with WebDataset | {
"login": "DCNemesis",
"id": 3616964,
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DCNemesis",
"html_url": "https://github.com/DCNemesis",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-02-18T16:13:20 | 2025-02-20T14:17:54 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | The [current documentation](https://huggingface.co/docs/datasets/en/video_dataset) on a creating a video dataset includes only examples with one media file and one json. It would be useful to have examples where multiple files of the same type are included. For example, in a sign language dataset, you may have a base video and a video annotation of the extracted pose. According to the WebDataset documentation, this should be able to be done with period separated filenames. For example:
```e39871fd9fd74f55.base.mp4
e39871fd9fd74f55.pose.mp4
e39871fd9fd74f55.json
f18b91585c4d3f3e.base.mp4
f18b91585c4d3f3e.pose.mp4
f18b91585c4d3f3e.json
...
```
If you can confirm that this method of including multiple media files works with huggingface datasets and include an example in the documentation, I'd appreciate it. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7413/timeline | null | null | null | false | [
"Yes this is correct and it works with huggingface datasets as well ! Feel free to include an example here: https://github.com/huggingface/datasets/blob/main/docs/source/video_dataset.mdx"
] |
https://api.github.com/repos/huggingface/datasets/issues/7412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7412/comments | https://api.github.com/repos/huggingface/datasets/issues/7412/events | https://github.com/huggingface/datasets/issues/7412 | 2,859,433,710 | I_kwDODunzps6qb37u | 7,412 | Index Error Invalid Ket is out of bounds for size 0 for code-search-net/code_search_net dataset | {
"login": "harshakhmk",
"id": 56113657,
"node_id": "MDQ6VXNlcjU2MTEzNjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/56113657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshakhmk",
"html_url": "https://github.com/harshakhmk",
"followers_url": "https://api.github.com/users/harshakhmk/followers",
"following_url": "https://api.github.com/users/harshakhmk/following{/other_user}",
"gists_url": "https://api.github.com/users/harshakhmk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshakhmk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshakhmk/subscriptions",
"organizations_url": "https://api.github.com/users/harshakhmk/orgs",
"repos_url": "https://api.github.com/users/harshakhmk/repos",
"events_url": "https://api.github.com/users/harshakhmk/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshakhmk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-02-18T05:58:33 | 2025-02-18T06:42:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I am trying to do model pruning on sentence-transformers/all-mini-L6-v2 for the code-search-net/code_search_net dataset using INCTrainer class
However I am getting below error
```
raise IndexError(f"Invalid Key: {key is our of bounds for size {size}")
IndexError: Invalid key: 1840208 is out of bounds for size 0
```
### Steps to reproduce the bug
Model pruning on the above dataset using the below guide
https://huggingface.co/docs/optimum/en/intel/neural_compressor/optimization#pruning
### Expected behavior
The modsl should be successfully pruned
### Environment info
Torch version: 2.4.1
Python version: 3.8.10 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7412/timeline | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/7411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7411/comments | https://api.github.com/repos/huggingface/datasets/issues/7411/events | https://github.com/huggingface/datasets/pull/7411 | 2,858,993,390 | PR_kwDODunzps6LhV0Z | 7,411 | Attempt to fix multiprocessing hang by closing and joining the pool before termination | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 3 | 2025-02-17T23:58:03 | 2025-02-19T21:11:24 | 2025-02-19T13:40:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | https://github.com/huggingface/datasets/issues/6393 has plagued me on and off for a very long time. I have had various workarounds (one time combining two filter calls into one filter call removed the issue, another time making rank 0 go first resolved a cache race condition, one time i think upgrading the version of something resolved it). I don't know hf datasets well enough to fully understand the root cause, but I _think_ this PR fixes it.
Evidence: I have an LLM Foundry training yaml/script (datasets version 3.2.0) that results in a hang ~1/10 times (for a baseline for this testing, it was 2/36 runs that hung). I also reran with the latest datasets version (3.3.1) and got 4/36 hung. Installing datasets from this PR, I was able to successful run the script 144 times without a hang occurring. Assuming the base probability is 1/10, this should be more than enough times to have confidence it works.
After adding some logging, I could see that the code hung during the __exit__ of the mp pool context manager, after all shards had been processed, and the tqdm context manager had exited.
My best explanation: When multiprocessing pool __exit__ is called, it calls pool.terminate, which forcefully exits all the processes (and calls code related to this that I haven't looked at closely). I'm guessing this forceful termination has a bad interaction with some multithreading/multiprocessing that hf datasets does. If we instead call pool.close and pool.join before the pool.terminate happens, perhaps whatever that bad interaction is is able to complete gracefully, and then terminate call proceeds without issue.
If this PR seems good to you, I'd be very appreciative if you were able to do a patch release including it. Thank you!
@lhoestq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7411",
"html_url": "https://github.com/huggingface/datasets/pull/7411",
"diff_url": "https://github.com/huggingface/datasets/pull/7411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7411.patch",
"merged_at": "2025-02-19T13:40:32"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7411). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the fix! We have been affected by this a lot when we try to use LLM Foundry with larger multimodal ICL datasets. ",
"@lorabit110 are you able to test it out for your case as well? Would be great to get a second validation that it actually fixes the issue. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/7410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7410/comments | https://api.github.com/repos/huggingface/datasets/issues/7410/events | https://github.com/huggingface/datasets/pull/7410 | 2,858,085,707 | PR_kwDODunzps6LeQBF | 7,410 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-17T14:54:39 | 2025-02-17T14:56:58 | 2025-02-17T14:54:56 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7410",
"html_url": "https://github.com/huggingface/datasets/pull/7410",
"diff_url": "https://github.com/huggingface/datasets/pull/7410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7410.patch",
"merged_at": "2025-02-17T14:54:56"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7410). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7409/comments | https://api.github.com/repos/huggingface/datasets/issues/7409/events | https://github.com/huggingface/datasets/pull/7409 | 2,858,079,508 | PR_kwDODunzps6LeOpY | 7,409 | Release: 3.3.1 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-17T14:52:12 | 2025-02-17T14:54:32 | 2025-02-17T14:53:13 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7409",
"html_url": "https://github.com/huggingface/datasets/pull/7409",
"diff_url": "https://github.com/huggingface/datasets/pull/7409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7409.patch",
"merged_at": "2025-02-17T14:53:13"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7409). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7408/comments | https://api.github.com/repos/huggingface/datasets/issues/7408/events | https://github.com/huggingface/datasets/pull/7408 | 2,858,012,313 | PR_kwDODunzps6Ld_-m | 7,408 | Fix filter speed regression | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-17T14:25:32 | 2025-02-17T14:28:48 | 2025-02-17T14:28:46 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | close https://github.com/huggingface/datasets/issues/7404 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7408",
"html_url": "https://github.com/huggingface/datasets/pull/7408",
"diff_url": "https://github.com/huggingface/datasets/pull/7408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7408.patch",
"merged_at": "2025-02-17T14:28:46"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7408). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7407/comments | https://api.github.com/repos/huggingface/datasets/issues/7407/events | https://github.com/huggingface/datasets/pull/7407 | 2,856,517,442 | PR_kwDODunzps6LY7y5 | 7,407 | Update use_with_pandas.mdx: to_pandas() correction in last section | {
"login": "ibarrien",
"id": 7552335,
"node_id": "MDQ6VXNlcjc1NTIzMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7552335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibarrien",
"html_url": "https://github.com/ibarrien",
"followers_url": "https://api.github.com/users/ibarrien/followers",
"following_url": "https://api.github.com/users/ibarrien/following{/other_user}",
"gists_url": "https://api.github.com/users/ibarrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibarrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibarrien/subscriptions",
"organizations_url": "https://api.github.com/users/ibarrien/orgs",
"repos_url": "https://api.github.com/users/ibarrien/repos",
"events_url": "https://api.github.com/users/ibarrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibarrien/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 0 | 2025-02-17T01:53:31 | 2025-02-20T17:28:04 | 2025-02-20T17:28:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | last section ``to_pandas()" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7407",
"html_url": "https://github.com/huggingface/datasets/pull/7407",
"diff_url": "https://github.com/huggingface/datasets/pull/7407.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7407.patch",
"merged_at": "2025-02-20T17:28:04"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/7406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7406/comments | https://api.github.com/repos/huggingface/datasets/issues/7406/events | https://github.com/huggingface/datasets/issues/7406 | 2,856,441,206 | I_kwDODunzps6qQdV2 | 7,406 | Adding Core Maintainer List to CONTRIBUTING.md | {
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | 3 | 2025-02-17T00:32:40 | 2025-02-19T01:28:38 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
I propose adding a core maintainer list to the `CONTRIBUTING.md` file.
### Motivation
The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module.
However, the Datasets project doesn't have such a list.
### Your contribution
I have nothing to add here. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7406/timeline | null | null | null | false | [
"@lhoestq",
"there is no per-module maintainer and the list is me alone nowadays ^^'",
"@lhoestq \nOh... I feel for you. \nWhat are your criteria for choosing a core maintainer? \nIt seems like it's too much work for you to manage all this code by yourself.\n\nAlso, if you don't mind, can you check this PR for me?\n#7368 I'd like this to be added as soon as possible because I need it."
] |
https://api.github.com/repos/huggingface/datasets/issues/7405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7405/comments | https://api.github.com/repos/huggingface/datasets/issues/7405/events | https://github.com/huggingface/datasets/issues/7405 | 2,856,372,814 | I_kwDODunzps6qQMpO | 7,405 | Lazy loading of environment variables | {
"login": "nikvaessen",
"id": 7225987,
"node_id": "MDQ6VXNlcjcyMjU5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7225987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikvaessen",
"html_url": "https://github.com/nikvaessen",
"followers_url": "https://api.github.com/users/nikvaessen/followers",
"following_url": "https://api.github.com/users/nikvaessen/following{/other_user}",
"gists_url": "https://api.github.com/users/nikvaessen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikvaessen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikvaessen/subscriptions",
"organizations_url": "https://api.github.com/users/nikvaessen/orgs",
"repos_url": "https://api.github.com/users/nikvaessen/repos",
"events_url": "https://api.github.com/users/nikvaessen/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikvaessen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-02-16T22:31:41 | 2025-02-17T15:17:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Loading a `.env` file after an `import datasets` call does not correctly use the environment variables.
This is due the fact that environment variables are read at import time:
https://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/config.py#L155C1-L155C80
### Steps to reproduce the bug
```bash
# make tmp dir
mkdir -p /tmp/debug-env
# make .env file
echo HF_HOME=/tmp/debug-env/data > /tmp/debug-env/.env
# first load dotenv, downloads to /tmp/debug-env/data
uv run --with datasets,python-dotenv python3 -c \
'import dotenv; dotenv.load_dotenv("/tmp/debug-env/.env"); import datasets; datasets.load_dataset("Anthropic/hh-rlhf")'
# first import datasets, downloads to `~/.cache/huggingface`
uv run --with datasets,python-dotenv python3 -c \
'import datasets; import dotenv; dotenv.load_dotenv("/tmp/debug-env/.env"); datasets.load_dataset("Anthropic/hh-rlhf")'
```
### Expected behavior
I expect that setting environment variables with something like this:
```python3
if __name__ == "__main__":
load_dotenv()
main()
```
works correctly.
### Environment info
"datasets>=3.3.0",
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7405/timeline | null | null | null | false | [
"Many python packages out there, including `huggingface_hub`, do load the environment variables on import.\nYou should `load_dotenv()` before importing the libraries.\n\nFor example you can move all you imports inside your `main()` function"
] |
https://api.github.com/repos/huggingface/datasets/issues/7404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7404/comments | https://api.github.com/repos/huggingface/datasets/issues/7404/events | https://github.com/huggingface/datasets/issues/7404 | 2,856,366,207 | I_kwDODunzps6qQLB_ | 7,404 | Performance regression in `dataset.filter` | {
"login": "ttim",
"id": 82200,
"node_id": "MDQ6VXNlcjgyMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/82200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttim",
"html_url": "https://github.com/ttim",
"followers_url": "https://api.github.com/users/ttim/followers",
"following_url": "https://api.github.com/users/ttim/following{/other_user}",
"gists_url": "https://api.github.com/users/ttim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttim/subscriptions",
"organizations_url": "https://api.github.com/users/ttim/orgs",
"repos_url": "https://api.github.com/users/ttim/repos",
"events_url": "https://api.github.com/users/ttim/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | 3 | 2025-02-16T22:19:14 | 2025-02-17T17:46:06 | 2025-02-17T14:28:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
We're filtering dataset of ~1M (small-ish) records. At some point in the code we do `dataset.filter`, before (including 3.2.0) it was taking couple of seconds, and now it takes 4 hours.
We use 16 threads/workers, and stack trace at them look as follows:
```
Traceback (most recent call last):
File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/python/lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/python/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3511, in _map_single
for i, batch in iter_outputs(shard_iterable):
File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3461, in iter_outputs
yield i, apply_function(example, i, offset=offset)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3390, in apply_function
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 6416, in get_indices_from_mask_function
indices_array = indices_mapping.column(0).take(indices_array)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 1079, in pyarrow.lib.ChunkedArray.take
File "/python/lib/python3.12/site-packages/pyarrow/compute.py", line 458, in take
def take(data, indices, *, boundscheck=True, memory_pool=None):
```
### Steps to reproduce the bug
1. Save dataset of 1M records in arrow
2. Filter it with 16 threads
3. Watch it take too long
### Expected behavior
Filtering done fast
### Environment info
datasets 3.3.0, python 3.12 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7404/timeline | completed | null | null | false | [
"Thanks for reporting, I'll fix the regression today",
"I just released `datasets` 3.3.1 with a fix, let me know if it's good now :)",
"@lhoestq it fixed the issue.\n\nThis was (very) fast, thank you very much!"
] |
https://api.github.com/repos/huggingface/datasets/issues/7402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7402/comments | https://api.github.com/repos/huggingface/datasets/issues/7402/events | https://github.com/huggingface/datasets/pull/7402 | 2,855,880,858 | PR_kwDODunzps6LW8G3 | 7,402 | Fix a typo in arrow_dataset.py | {
"login": "jingedawang",
"id": 7996256,
"node_id": "MDQ6VXNlcjc5OTYyNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7996256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingedawang",
"html_url": "https://github.com/jingedawang",
"followers_url": "https://api.github.com/users/jingedawang/followers",
"following_url": "https://api.github.com/users/jingedawang/following{/other_user}",
"gists_url": "https://api.github.com/users/jingedawang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingedawang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingedawang/subscriptions",
"organizations_url": "https://api.github.com/users/jingedawang/orgs",
"repos_url": "https://api.github.com/users/jingedawang/repos",
"events_url": "https://api.github.com/users/jingedawang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingedawang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 0 | 2025-02-16T04:52:02 | 2025-02-20T17:29:28 | 2025-02-20T17:29:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | "in the feature" should be "in the future" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7402",
"html_url": "https://github.com/huggingface/datasets/pull/7402",
"diff_url": "https://github.com/huggingface/datasets/pull/7402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7402.patch",
"merged_at": "2025-02-20T17:29:28"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/7401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7401/comments | https://api.github.com/repos/huggingface/datasets/issues/7401/events | https://github.com/huggingface/datasets/pull/7401 | 2,853,260,869 | PR_kwDODunzps6LOMSo | 7,401 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-14T10:17:03 | 2025-02-14T10:19:20 | 2025-02-14T10:17:13 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7401",
"html_url": "https://github.com/huggingface/datasets/pull/7401",
"diff_url": "https://github.com/huggingface/datasets/pull/7401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7401.patch",
"merged_at": "2025-02-14T10:17:13"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7401). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7399/comments | https://api.github.com/repos/huggingface/datasets/issues/7399/events | https://github.com/huggingface/datasets/issues/7399 | 2,853,098,442 | I_kwDODunzps6qDtPK | 7,399 | Synchronize parameters for various datasets | {
"login": "grofte",
"id": 7976840,
"node_id": "MDQ6VXNlcjc5NzY4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7976840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grofte",
"html_url": "https://github.com/grofte",
"followers_url": "https://api.github.com/users/grofte/followers",
"following_url": "https://api.github.com/users/grofte/following{/other_user}",
"gists_url": "https://api.github.com/users/grofte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grofte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grofte/subscriptions",
"organizations_url": "https://api.github.com/users/grofte/orgs",
"repos_url": "https://api.github.com/users/grofte/repos",
"events_url": "https://api.github.com/users/grofte/events{/privacy}",
"received_events_url": "https://api.github.com/users/grofte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 2 | 2025-02-14T09:15:11 | 2025-02-19T11:50:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
[IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.Dataset.map).
There might be other parameters missing - I haven't checked.
### Steps to reproduce the bug
from datasets import Dataset, IterableDataset, IterableDatasetDict
ds = IterableDatasetDict({"train": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3),
"validate": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)})
for d in ds["train"]:
print(d)
ds = ds.map(lambda x: {k: v+1 for k, v in x.items()}, desc="increment")
for d in ds["train"]:
print(d)
### Expected behavior
The description parameter should be available for all datasets (or none).
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.28.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7399/timeline | null | null | null | false | [
"Hi ! the `desc` parameter is only available for Dataset / DatasetDict for the progress bar of `map()``\n\nSince IterableDataset only runs the map functions when you iterate over the dataset, there is no progress bar and `desc` is useless. We could still add the argument for parity but it wouldn't be used for anything",
"I think you should add it. It doesn't hurt. The reason I ran into it was because I re-wrote a pipeline to use either a stream or a fully loaded dataset. Of course I can simply remove it but it is nice to have on the memory loaded dataset. "
] |
https://api.github.com/repos/huggingface/datasets/issues/7398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7398/comments | https://api.github.com/repos/huggingface/datasets/issues/7398/events | https://github.com/huggingface/datasets/pull/7398 | 2,853,097,869 | PR_kwDODunzps6LNoDk | 7,398 | Release: 3.3.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-14T09:15:03 | 2025-02-14T09:57:39 | 2025-02-14T09:57:37 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7398",
"html_url": "https://github.com/huggingface/datasets/pull/7398",
"diff_url": "https://github.com/huggingface/datasets/pull/7398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7398.patch",
"merged_at": "2025-02-14T09:57:37"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7398). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7397/comments | https://api.github.com/repos/huggingface/datasets/issues/7397/events | https://github.com/huggingface/datasets/pull/7397 | 2,852,829,763 | PR_kwDODunzps6LMuQD | 7,397 | Kannada dataset(Conversations, Wikipedia etc) | {
"login": "Likhith2612",
"id": 146451281,
"node_id": "U_kgDOCLqrUQ",
"avatar_url": "https://avatars.githubusercontent.com/u/146451281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Likhith2612",
"html_url": "https://github.com/Likhith2612",
"followers_url": "https://api.github.com/users/Likhith2612/followers",
"following_url": "https://api.github.com/users/Likhith2612/following{/other_user}",
"gists_url": "https://api.github.com/users/Likhith2612/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Likhith2612/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Likhith2612/subscriptions",
"organizations_url": "https://api.github.com/users/Likhith2612/orgs",
"repos_url": "https://api.github.com/users/Likhith2612/repos",
"events_url": "https://api.github.com/users/Likhith2612/events{/privacy}",
"received_events_url": "https://api.github.com/users/Likhith2612/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-14T06:53:03 | 2025-02-20T17:28:54 | 2025-02-20T17:28:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7397/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7397",
"html_url": "https://github.com/huggingface/datasets/pull/7397",
"diff_url": "https://github.com/huggingface/datasets/pull/7397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7397.patch",
"merged_at": null
} | true | [
"Hi ! feel free to uplad the CSV on https://huggingface.co/datasets :)\r\n\r\nwe don't store the datasets' data in this github repository"
] |
https://api.github.com/repos/huggingface/datasets/issues/7400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7400/comments | https://api.github.com/repos/huggingface/datasets/issues/7400/events | https://github.com/huggingface/datasets/issues/7400 | 2,853,201,277 | I_kwDODunzps6qEGV9 | 7,400 | 504 Gateway Timeout when uploading large dataset to Hugging Face Hub | {
"login": "hotchpotch",
"id": 3500,
"node_id": "MDQ6VXNlcjM1MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hotchpotch",
"html_url": "https://github.com/hotchpotch",
"followers_url": "https://api.github.com/users/hotchpotch/followers",
"following_url": "https://api.github.com/users/hotchpotch/following{/other_user}",
"gists_url": "https://api.github.com/users/hotchpotch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hotchpotch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hotchpotch/subscriptions",
"organizations_url": "https://api.github.com/users/hotchpotch/orgs",
"repos_url": "https://api.github.com/users/hotchpotch/repos",
"events_url": "https://api.github.com/users/hotchpotch/events{/privacy}",
"received_events_url": "https://api.github.com/users/hotchpotch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 4 | 2025-02-14T02:18:35 | 2025-02-14T23:48:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Description
I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error.
I will continue trying to upload. While it might succeed in future attempts, I wanted to report this issue in the meantime.
### Reproduction
- I attempted the upload 3 times
- Each attempt resulted in the same 504 error during the upload process (not at the start, but in the middle of the upload)
- Using `dataset.push_to_hub()` method
### Environment Information
```
- huggingface_hub version: 0.28.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.11.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /home/hotchpotch/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: hotchpotch
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.5.1
- Jinja2: 3.1.5
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: 2.10.6
- aiohttp: 3.11.11
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/hotchpotch/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/hotchpotch/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/hotchpotch/.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /home/hotchpotch/.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
### Full Error Traceback
```python
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/create_edu_japanese_ds/upload_edu_japanese_ds.py", line 12, in <module>
ds.push_to_hub("hotchpotch/fineweb-2-edu-japanese", private=True)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/dataset_dict.py", line 1665, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 5301, in _push_parquet_shards_to_hub
api.preupload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files
_upload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 395, in _upload_lfs_files
batch_actions_chunk, batch_errors_chunk = post_lfs_batch_info(
^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/lfs.py", line 168, in post_lfs_batch_info
hf_raise_for_status(resp)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
```
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7400/timeline | null | null | null | false | [
"I transferred to the `datasets` repository. Is there any retry mechanism in `datasets` @lhoestq ?\n\nAnother solution @hotchpotch if you want to get your dataset pushed to the Hub in a robust way is to save it to a local folder first and then use `huggingface-cli upload-large-folder` (see https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-large-folder). It has better retry mechanism in case of failure.",
"There is no retry mechanism for `api.preupload_lfs_files` in `push_to_hub()` but we can definitely add one here\n\nhttps://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/arrow_dataset.py#L5372",
"@Wauplin \n\nThank you! I believe that to use load_dataset() to read data from Hugging Face, we need to first save the markdown metadata and parquet files in our local filesystem, then upload them using upload-large-folder. If you know how to do this, could you please let me know?\n\n",
"@lhoestq \n\nI see, so adding a retry mechanism there would solve it. If I continue to have issues, I'll consider implementing that kind of solution."
] |
https://api.github.com/repos/huggingface/datasets/issues/7396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7396/comments | https://api.github.com/repos/huggingface/datasets/issues/7396/events | https://github.com/huggingface/datasets/pull/7396 | 2,851,716,755 | PR_kwDODunzps6LJBmT | 7,396 | Update README.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-13T17:44:36 | 2025-02-13T17:46:57 | 2025-02-13T17:44:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7396",
"html_url": "https://github.com/huggingface/datasets/pull/7396",
"diff_url": "https://github.com/huggingface/datasets/pull/7396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7396.patch",
"merged_at": "2025-02-13T17:44:51"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7396). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7395/comments | https://api.github.com/repos/huggingface/datasets/issues/7395/events | https://github.com/huggingface/datasets/pull/7395 | 2,851,575,160 | PR_kwDODunzps6LIivQ | 7,395 | Update docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-13T16:43:15 | 2025-02-13T17:20:32 | 2025-02-13T17:20:30 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | - update min python version
- replace canonical dataset names with new names
- avoid examples with trust_remote_code | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7395/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7395",
"html_url": "https://github.com/huggingface/datasets/pull/7395",
"diff_url": "https://github.com/huggingface/datasets/pull/7395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7395.patch",
"merged_at": "2025-02-13T17:20:29"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7395). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7394/comments | https://api.github.com/repos/huggingface/datasets/issues/7394/events | https://github.com/huggingface/datasets/issues/7394 | 2,847,172,115 | I_kwDODunzps6ptGYT | 7,394 | Using load_dataset with data_files and split arguments yields an error | {
"login": "devon-research",
"id": 61103399,
"node_id": "MDQ6VXNlcjYxMTAzMzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/61103399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devon-research",
"html_url": "https://github.com/devon-research",
"followers_url": "https://api.github.com/users/devon-research/followers",
"following_url": "https://api.github.com/users/devon-research/following{/other_user}",
"gists_url": "https://api.github.com/users/devon-research/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devon-research/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devon-research/subscriptions",
"organizations_url": "https://api.github.com/users/devon-research/orgs",
"repos_url": "https://api.github.com/users/devon-research/repos",
"events_url": "https://api.github.com/users/devon-research/events{/privacy}",
"received_events_url": "https://api.github.com/users/devon-research/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-02-12T04:50:11 | 2025-02-12T04:50:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument.
If I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
then I get the error
```
ValueError: Unknown split "all_examples". Should be one of ['train'].
```
However, if I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="train", name="Expert")
```
then I get
```
ValueError: Unknown split "train". Should be one of ['all_examples'].
```
### Steps to reproduce the bug
Run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
### Expected behavior
No error.
### Environment info
Python = 3.12
datasets = 3.2.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7394/timeline | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/7393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7393/comments | https://api.github.com/repos/huggingface/datasets/issues/7393/events | https://github.com/huggingface/datasets/pull/7393 | 2,846,446,674 | PR_kwDODunzps6K3DiZ | 7,393 | Optimized sequence encoding for scalars | {
"login": "lukasgd",
"id": 38319063,
"node_id": "MDQ6VXNlcjM4MzE5MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/38319063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasgd",
"html_url": "https://github.com/lukasgd",
"followers_url": "https://api.github.com/users/lukasgd/followers",
"following_url": "https://api.github.com/users/lukasgd/following{/other_user}",
"gists_url": "https://api.github.com/users/lukasgd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukasgd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukasgd/subscriptions",
"organizations_url": "https://api.github.com/users/lukasgd/orgs",
"repos_url": "https://api.github.com/users/lukasgd/repos",
"events_url": "https://api.github.com/users/lukasgd/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukasgd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-02-11T20:30:44 | 2025-02-13T17:11:33 | 2025-02-13T17:11:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | The change in https://github.com/huggingface/datasets/pull/3197 introduced redundant list-comprehensions when `obj` is a long sequence of scalars. This becomes a noticeable overhead when loading data from an `IterableDataset` in the function `_apply_feature_types_on_example` and can be eliminated by adding a check for scalars in `encode_nested_example` proposed here.
In the following code example
```
import time
from datasets.features import Sequence, Value
from datasets.features.features import encode_nested_example
schema = Sequence(Value("int32"))
obj = list(range(100000))
start = time.perf_counter()
result = encode_nested_example(schema, obj)
stop = time.perf_counter()
print(f"Time spent is {stop-start} sec")
```
`encode_nested_example` becomes 492x faster (from 0.0769 to 0.0002 sec), respectively 322x (from 0.00814 to 0.00003 sec) for a list of length 10000, on a GH200 system, making it unnoticeable when loading data with tokenization.
Another change is made to avoid creating arrays from scalars and afterwards re-extracting them during casting to python (`obj == obj.__array__()[()]` in that case), which avoids a regression in the array write benchmarks. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7393",
"html_url": "https://github.com/huggingface/datasets/pull/7393",
"diff_url": "https://github.com/huggingface/datasets/pull/7393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7393.patch",
"merged_at": "2025-02-13T17:11:32"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7393). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7392/comments | https://api.github.com/repos/huggingface/datasets/issues/7392/events | https://github.com/huggingface/datasets/issues/7392 | 2,846,095,043 | I_kwDODunzps6po_bD | 7,392 | push_to_hub payload too large error when using large ClassLabel feature | {
"login": "DavidRConnell",
"id": 35470740,
"node_id": "MDQ6VXNlcjM1NDcwNzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/35470740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidRConnell",
"html_url": "https://github.com/DavidRConnell",
"followers_url": "https://api.github.com/users/DavidRConnell/followers",
"following_url": "https://api.github.com/users/DavidRConnell/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidRConnell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidRConnell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidRConnell/subscriptions",
"organizations_url": "https://api.github.com/users/DavidRConnell/orgs",
"repos_url": "https://api.github.com/users/DavidRConnell/repos",
"events_url": "https://api.github.com/users/DavidRConnell/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidRConnell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-02-11T17:51:34 | 2025-02-11T18:01:31 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small.
### Steps to reproduce the bug
``` python
import random
import sys
import datasets
random.seed(42)
def random_str(sz):
return "".join(chr(random.randint(ord("a"), ord("z"))) for _ in range(sz))
data = datasets.DatasetDict(
{
str(i): datasets.Dataset.from_dict(
{
"label": [list(range(3)) for _ in range(10)],
"abstract": [random_str(10_000) for _ in range(10)],
},
)
for i in range(3)
}
)
features = data["1"].features.copy()
features["label"] = datasets.Sequence(
datasets.ClassLabel(names=[str(i) for i in range(50_000)])
)
data = data.map(lambda examples: {}, features=features)
feat_size = sys.getsizeof(data["1"].features["label"].feature.names)
print(f"Size of ClassLabel names: {feat_size}")
# Size of ClassLabel names: 444376
data.push_to_hub("dconnell/pubtator3_test")
```
Note that this succeeds if `ClassLabel` has fewer names or if `ClassLabel` is replaced with `Value("int64")`
### Expected behavior
Should push the dataset to hub.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
- Python version: 3.12.8
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7392/timeline | null | null | null | false | [
"See also <https://discuss.huggingface.co/t/datasetdict-push-to-hub-failing-with-payload-to-large/140083/8>\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/7391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7391/comments | https://api.github.com/repos/huggingface/datasets/issues/7391/events | https://github.com/huggingface/datasets/issues/7391 | 2,845,184,764 | I_kwDODunzps6plhL8 | 7,391 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | {
"login": "LinXin04",
"id": 25193686,
"node_id": "MDQ6VXNlcjI1MTkzNjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/25193686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LinXin04",
"html_url": "https://github.com/LinXin04",
"followers_url": "https://api.github.com/users/LinXin04/followers",
"following_url": "https://api.github.com/users/LinXin04/following{/other_user}",
"gists_url": "https://api.github.com/users/LinXin04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LinXin04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinXin04/subscriptions",
"organizations_url": "https://api.github.com/users/LinXin04/orgs",
"repos_url": "https://api.github.com/users/LinXin04/repos",
"events_url": "https://api.github.com/users/LinXin04/events{/privacy}",
"received_events_url": "https://api.github.com/users/LinXin04/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-02-11T12:02:26 | 2025-02-11T12:02:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | pyarrow 尝试了若干个版本都不可以 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7391/timeline | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/7390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7390/comments | https://api.github.com/repos/huggingface/datasets/issues/7390/events | https://github.com/huggingface/datasets/issues/7390 | 2,843,813,365 | I_kwDODunzps6pgSX1 | 7,390 | Re-add py.typed | {
"login": "NeilGirdhar",
"id": 730137,
"node_id": "MDQ6VXNlcjczMDEzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeilGirdhar",
"html_url": "https://github.com/NeilGirdhar",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions",
"organizations_url": "https://api.github.com/users/NeilGirdhar/orgs",
"repos_url": "https://api.github.com/users/NeilGirdhar/repos",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeilGirdhar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | 0 | 2025-02-10T22:12:52 | 2025-02-10T22:12:52 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here?
### Motivation
MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be good to be PEP 561 compliant as long as it's not too onerous.
### Your contribution
I can re-add py.typed, but I don't know how to make sur all of the `__all__` files are provided (although you may not need to with modern PyRight). | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7390/timeline | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/7389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7389/comments | https://api.github.com/repos/huggingface/datasets/issues/7389/events | https://github.com/huggingface/datasets/issues/7389 | 2,843,592,606 | I_kwDODunzps6pfcee | 7,389 | Getting statistics about filtered examples | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 2 | 2025-02-10T20:48:29 | 2025-02-11T20:44:15 | 2025-02-11T20:44:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | @lhoestq wondering if the team has thought about this and if there are any recommendations?
Currently when processing datasets some examples are bound to get filtered out, whether it's due to bad format, or length is too long, or any other custom filters that might be getting applied. Let's just focus on the filter by length for now, since that would be something that gets applied dynamically for each training run. Say we want to show a graph in W&B with the running total of the number of filtered examples so far.
What would be a good way to go about hooking this up? Because the map/filter operations happen before the DataLoader batches are created, at training time if we're just grabbing batches from the DataLoader then we won't know how many things have been filtered already. But there's not really a good way to include a 'num_filtered' key into the dataset itself either because dataset map/filter process examples independently and don't have a way to track a running sum.
The only approach I can kind of think of is having a 'is_filtered' key in the dataset, and then creating a custom batcher/collator that reads that and tracks the metric? | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7389/timeline | completed | null | null | false | [
"You can actually track a running sum in map() or filter() :)\n\n```python\nnum_filtered = 0\n\ndef f(x):\n global num_filtered\n condition = len(x[\"text\"]) < 1000\n if not condition:\n num_filtered += 1\n return condition\n\nds = ds.filter(f)\nprint(num_filtered)\n```\n\nand if you want to use multiprocessing, make sure to use a variable that is shared across processes\n\n\n```python\nfrom multiprocess import Manager\n\nmanager = Manager()\nnum_filtered = manager.Value('i', 0)\n\ndef f(x):\n global num_filtered\n condition = len(x[\"text\"]) < 1000\n if not condition:\n num_filtered.value += 1\n return condition\n\nds = ds.filter(f, num_proc=4)\nprint(num_filtered.value)\n```\n\nPS: `datasets` uses `multiprocess` instead of the `multiprocessing` package to support lambda functions in map() and filter()",
"Oh that's great to know!\n\nI guess this value would not be exactly synced with the batch in cases of pre-fetch and shuffle buffers and so on, but that's probably fine. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/7388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7388/comments | https://api.github.com/repos/huggingface/datasets/issues/7388/events | https://github.com/huggingface/datasets/issues/7388 | 2,843,188,499 | I_kwDODunzps6pd50T | 7,388 | OSError: [Errno 22] Invalid argument forbidden character | {
"login": "langflogit",
"id": 124634542,
"node_id": "U_kgDOB23Frg",
"avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langflogit",
"html_url": "https://github.com/langflogit",
"followers_url": "https://api.github.com/users/langflogit/followers",
"following_url": "https://api.github.com/users/langflogit/following{/other_user}",
"gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langflogit/subscriptions",
"organizations_url": "https://api.github.com/users/langflogit/orgs",
"repos_url": "https://api.github.com/users/langflogit/repos",
"events_url": "https://api.github.com/users/langflogit/events{/privacy}",
"received_events_url": "https://api.github.com/users/langflogit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 2 | 2025-02-10T17:46:31 | 2025-02-11T13:42:32 | 2025-02-11T13:42:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
I'm on Windows and i'm trying to load a datasets but i'm having title error because files in the repository are named with charactere like < >which can't be in a name file. Could it be possible to load this datasets but removing those charactere ?
### Steps to reproduce the bug
load_dataset("CATMuS/medieval") on Windows
### Expected behavior
Making the function to erase the forbidden character to allow loading the datasets who have those characters.
### Environment info
- `datasets` version: 3.2.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.12.2
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | {
"login": "langflogit",
"id": 124634542,
"node_id": "U_kgDOB23Frg",
"avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langflogit",
"html_url": "https://github.com/langflogit",
"followers_url": "https://api.github.com/users/langflogit/followers",
"following_url": "https://api.github.com/users/langflogit/following{/other_user}",
"gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langflogit/subscriptions",
"organizations_url": "https://api.github.com/users/langflogit/orgs",
"repos_url": "https://api.github.com/users/langflogit/repos",
"events_url": "https://api.github.com/users/langflogit/events{/privacy}",
"received_events_url": "https://api.github.com/users/langflogit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7388/timeline | completed | null | null | false | [
"You can probably copy the dataset in your HF account and rename the files (without having to download them to your disk). Or alternatively feel free to open a Pull Request to this dataset with the renamed file",
"Thank you, that will help me work around this problem"
] |
https://api.github.com/repos/huggingface/datasets/issues/7387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7387/comments | https://api.github.com/repos/huggingface/datasets/issues/7387/events | https://github.com/huggingface/datasets/issues/7387 | 2,841,228,048 | I_kwDODunzps6pWbMQ | 7,387 | Dynamic adjusting dataloader sampling weight | {
"login": "whc688",
"id": 72799643,
"node_id": "MDQ6VXNlcjcyNzk5NjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/72799643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whc688",
"html_url": "https://github.com/whc688",
"followers_url": "https://api.github.com/users/whc688/followers",
"following_url": "https://api.github.com/users/whc688/following{/other_user}",
"gists_url": "https://api.github.com/users/whc688/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whc688/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whc688/subscriptions",
"organizations_url": "https://api.github.com/users/whc688/orgs",
"repos_url": "https://api.github.com/users/whc688/repos",
"events_url": "https://api.github.com/users/whc688/events{/privacy}",
"received_events_url": "https://api.github.com/users/whc688/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 3 | 2025-02-10T03:18:47 | 2025-02-11T13:24:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Hi,
Thanks for your wonderful work! I'm wondering is there a way to dynamically adjust the sampling weight of each data in the dataset during training? Looking forward to your reply, thanks again. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7387/timeline | null | null | null | false | [
"You mean based on a condition that has to be checked on-the-fly during training ? Otherwise if you know in advance after how many samples you need to change the sampling you can simply concatenate the two mixes",
"Yes, like during training, if one data sample's prediction is consistently wrong, its sampling weight gets higher and higher, and if one data sample's prediction is already correct, then we rarely sample it",
"it's not possible to use `interleave_datasets()` and modify the probabilities while iterating on the dataset at the moment, so you'd have to implement your own `IterableDataset` to implement this logic"
] |
https://api.github.com/repos/huggingface/datasets/issues/7386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7386/comments | https://api.github.com/repos/huggingface/datasets/issues/7386/events | https://github.com/huggingface/datasets/issues/7386 | 2,840,032,524 | I_kwDODunzps6pR3UM | 7,386 | Add bookfolder Dataset Builder for Digital Book Formats | {
"login": "shikanime",
"id": 22115108,
"node_id": "MDQ6VXNlcjIyMTE1MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/22115108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shikanime",
"html_url": "https://github.com/shikanime",
"followers_url": "https://api.github.com/users/shikanime/followers",
"following_url": "https://api.github.com/users/shikanime/following{/other_user}",
"gists_url": "https://api.github.com/users/shikanime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shikanime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shikanime/subscriptions",
"organizations_url": "https://api.github.com/users/shikanime/orgs",
"repos_url": "https://api.github.com/users/shikanime/repos",
"events_url": "https://api.github.com/users/shikanime/events{/privacy}",
"received_events_url": "https://api.github.com/users/shikanime/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | 1 | 2025-02-08T14:27:55 | 2025-02-08T14:30:10 | 2025-02-08T14:30:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would allow users to easily load datasets consisting of various digital book formats, including: AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF.
### Motivation
Currently, loading datasets of these digital book files requires manual effort. This would also lower the barrier to entry for working with these formats, enabling more diverse and interesting datasets to be used within the Hugging Face ecosystem.
### Your contribution
This feature is rather simple as it will be based on the folder-based builder, similar to imagefolder. I'm willing to contribute to this feature by submitting a PR | {
"login": "shikanime",
"id": 22115108,
"node_id": "MDQ6VXNlcjIyMTE1MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/22115108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shikanime",
"html_url": "https://github.com/shikanime",
"followers_url": "https://api.github.com/users/shikanime/followers",
"following_url": "https://api.github.com/users/shikanime/following{/other_user}",
"gists_url": "https://api.github.com/users/shikanime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shikanime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shikanime/subscriptions",
"organizations_url": "https://api.github.com/users/shikanime/orgs",
"repos_url": "https://api.github.com/users/shikanime/repos",
"events_url": "https://api.github.com/users/shikanime/events{/privacy}",
"received_events_url": "https://api.github.com/users/shikanime/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7386/timeline | completed | null | null | false | [
"On second thought, probably not a good idea."
] |
https://api.github.com/repos/huggingface/datasets/issues/7385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7385/comments | https://api.github.com/repos/huggingface/datasets/issues/7385/events | https://github.com/huggingface/datasets/pull/7385 | 2,830,664,522 | PR_kwDODunzps6KBO6i | 7,385 | Make IterableDataset (optionally) resumable | {
"login": "yzhangcs",
"id": 18402347,
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhangcs",
"html_url": "https://github.com/yzhangcs",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-02-04T15:55:33 | 2025-02-06T07:40:19 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### What does this PR do?
This PR introduces a new `stateful` option to the `dataset.shuffle` method, which defaults to `False`.
When enabled, this option allows for resumable shuffling of `IterableDataset` instances, albeit with some additional memory overhead.
Key points:
* All tests have passed
* Docstrings have been updated to reflect the new functionality
I'm very looking forward to receiving feedback on this implementation! @lhoestq | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7385",
"html_url": "https://github.com/huggingface/datasets/pull/7385",
"diff_url": "https://github.com/huggingface/datasets/pull/7385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7385.patch",
"merged_at": null
} | true | [
"@lhoestq Hi again~ Just circling back on this\r\nWondering if there’s anything I can do to help move this forward. 🤗 \r\nThanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/7384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7384/comments | https://api.github.com/repos/huggingface/datasets/issues/7384/events | https://github.com/huggingface/datasets/pull/7384 | 2,828,208,828 | PR_kwDODunzps6J4wVi | 7,384 | Support async functions in map() | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 2 | 2025-02-03T18:18:40 | 2025-02-13T14:01:13 | 2025-02-13T14:00:06 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | e.g. to download images or call an inference API like HF or vLLM
```python
import asyncio
import random
from datasets import Dataset
async def f(x):
await asyncio.sleep(random.random())
ds = Dataset.from_dict({"data": range(100)})
ds.map(f)
# Map: 100%|█████████████████████████████| 100/100 [00:01<00:00, 99.81 examples/s]
```
TODO
- [x] clean code (right now it's a big copy paste)
- [x] batched
- [x] Dataset.map()
- [x] IterableDataset.map()
- [x] Dataset.filter()
- [x] IterableDataset.filter()
- [x] test
- [x] docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7384/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7384/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7384",
"html_url": "https://github.com/huggingface/datasets/pull/7384",
"diff_url": "https://github.com/huggingface/datasets/pull/7384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7384.patch",
"merged_at": "2025-02-13T14:00:06"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7384). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"example of what you can do with it:\r\n\r\n```python\r\nimport aiohttp\r\nfrom huggingface_hub import get_token\r\n\r\nfrom datasets import Dataset\r\n\r\n\r\nAPI_URL = \"https://api-inference.huggingface.co/models/microsoft/Phi-3-mini-4k-instruct/v1/chat/completions\"\r\nPROMPT = \"What is this text mainly about ? Here is the text:\\n\\n```\\n{Problem}\\n```\\n\\nReply in one or two words.\"\r\n\r\nasync def query(example):\r\n headers = {\"Authorization\": f\"Bearer {get_token()}\", \"Content-Type\": \"application/json\"}\r\n json = {\"messages\": [{\"role\": \"user\", \"content\": PROMPT.format(Problem=example[\"Problem\"])}], \"max_tokens\": 20, \"seed\": 42}\r\n async with aiohttp.ClientSession() as session, session.post(API_URL, headers=headers, json=json) as response:\r\n output = await response.json()\r\n return {\"output\": output[\"choices\"][0][\"message\"][\"content\"]}\r\n\r\nds = Dataset.from_dict({\"Problem\": [\"1 + 1\"] * 10})\r\nds = ds.map(query)\r\nprint(ds[0])\r\n# {'Problem': '1 + 1', 'output': 'Arithmetic'}\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/7382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7382/comments | https://api.github.com/repos/huggingface/datasets/issues/7382/events | https://github.com/huggingface/datasets/pull/7382 | 2,823,480,924 | PR_kwDODunzps6Jo69f | 7,382 | Add Pandas, PyArrow and Polars docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 1 | 2025-01-31T13:22:59 | 2025-01-31T16:30:59 | 2025-01-31T16:30:57 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | (also added the missing numpy docs and fixed a small bug in pyarrow formatting) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7382",
"html_url": "https://github.com/huggingface/datasets/pull/7382",
"diff_url": "https://github.com/huggingface/datasets/pull/7382.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7382.patch",
"merged_at": "2025-01-31T16:30:57"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7382). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/7381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7381/comments | https://api.github.com/repos/huggingface/datasets/issues/7381/events | https://github.com/huggingface/datasets/issues/7381 | 2,815,649,092 | I_kwDODunzps6n02VE | 7,381 | Iterating over values of a column in the IterableDataset | {
"login": "TopCoder2K",
"id": 47208659,
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TopCoder2K",
"html_url": "https://github.com/TopCoder2K",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | 2 | 2025-01-28T13:17:36 | 2025-02-18T17:15:51 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
I would like to be able to iterate (and re-iterate if needed) over a column of an `IterableDataset` instance. The following example shows the supposed API:
```
def gen():
yield {"text": "Good", "label": 0}
yield {"text": "Bad", "label": 1}
ds = IterableDataset.from_generator(gen)
texts = ds["text"]
for v in texts:
print(v) # Prints "Good" and "Bad"
for v in texts:
print(v) # Prints "Good" and "Bad" again
```
### Motivation
In the real world problems, huge NNs like Transformer are not always the best option, so there is a need to conduct experiments with different methods. While 🤗Datasets is perfectly adapted to 🤗Transformers, it may be inconvenient when being used with other libraries. The ability to retrieve a particular column is the case (e.g., gensim's FastText [requires](https://radimrehurek.com/gensim/models/fasttext.html#gensim.models.fasttext.FastText.train) only lists of strings, not dictionaries).
While there are ways to achieve the desired functionality, they are not good ([forum](https://discuss.huggingface.co/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649)). It would be great if there was a built-in solution.
### Your contribution
Theoretically, I can submit a PR, but I have very little knowledge of the internal structure of 🤗Datasets, so some help may be needed.
Moreover, I can only work on weekends, since I have a full-time job. However, the feature does not seem to be popular, so there is no need to implement it as fast as possible. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7381/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7381/timeline | null | null | null | false | [
"I'd be in favor of that ! I saw many people implementing their own iterables that wrap a dataset just to iterate on a single column, that would make things more practical.\n\nKinda related: https://github.com/huggingface/datasets/issues/5847",
"(For anyone's information, I'm going on vacation for the next 3 weeks, so the work is postponed. If anyone can implement this feature within the next 4 weeks, go ahead :) )"
] |
https://api.github.com/repos/huggingface/datasets/issues/7380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7380/comments | https://api.github.com/repos/huggingface/datasets/issues/7380/events | https://github.com/huggingface/datasets/pull/7380 | 2,811,566,116 | PR_kwDODunzps6JAkj5 | 7,380 | fix: dill default for version bigger 0.3.8 | {
"login": "sam-hey",
"id": 40773225,
"node_id": "MDQ6VXNlcjQwNzczMjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/40773225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-hey",
"html_url": "https://github.com/sam-hey",
"followers_url": "https://api.github.com/users/sam-hey/followers",
"following_url": "https://api.github.com/users/sam-hey/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-hey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-hey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-hey/subscriptions",
"organizations_url": "https://api.github.com/users/sam-hey/orgs",
"repos_url": "https://api.github.com/users/sam-hey/repos",
"events_url": "https://api.github.com/users/sam-hey/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-hey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-01-26T13:37:16 | 2025-01-26T13:37:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Fixes def log for dill version >= 0.3.9 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7380/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7380",
"html_url": "https://github.com/huggingface/datasets/pull/7380",
"diff_url": "https://github.com/huggingface/datasets/pull/7380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7380.patch",
"merged_at": null
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/7378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7378/comments | https://api.github.com/repos/huggingface/datasets/issues/7378/events | https://github.com/huggingface/datasets/issues/7378 | 2,802,957,388 | I_kwDODunzps6nEbxM | 7,378 | Allow pushing config version to hub | {
"login": "momeara",
"id": 129072,
"node_id": "MDQ6VXNlcjEyOTA3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/129072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/momeara",
"html_url": "https://github.com/momeara",
"followers_url": "https://api.github.com/users/momeara/followers",
"following_url": "https://api.github.com/users/momeara/following{/other_user}",
"gists_url": "https://api.github.com/users/momeara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/momeara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/momeara/subscriptions",
"organizations_url": "https://api.github.com/users/momeara/orgs",
"repos_url": "https://api.github.com/users/momeara/repos",
"events_url": "https://api.github.com/users/momeara/events{/privacy}",
"received_events_url": "https://api.github.com/users/momeara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | 1 | 2025-01-21T22:35:07 | 2025-01-30T13:56:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
Currently, when datasets are created, they can be versioned by passing the `version` argument to `load_dataset(...)`. For example creating `outcomes.csv` on the command line
```
echo "id,value\n1,0\n2,0\n3,1\n4,1\n" > outcomes.csv
```
and creating it
```
import datasets
dataset = datasets.load_dataset(
"csv",
data_files ="outcomes.csv",
keep_in_memory = True,
version = '1.0.0')
```
The version info is stored in the `info` and can be accessed e.g. by `next(iter(dataset.values())).info.version`
This dataset can be uploaded to the hub with `dataset.push_to_hub(repo_id = "maomlab/example_dataset")`. This will create a dataset on the hub with the following in the `README.md`, but it doesn't upload the version information:
```
---
dataset_info:
features:
- name: id
dtype: int64
- name: value
dtype: int64
splits:
- name: train
num_bytes: 64
num_examples: 4
download_size: 1332
dataset_size: 64
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
```
However, when I download from the hub, the version information is missing:
```
dataset_from_hub_no_version = datasets.load_dataset("maomlab/example_dataset")
next(iter(dataset.values())).info.version
```
I can add the version information manually to the hub, by appending it to the end of config section:
```
...
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
version: 1.0.0
---
```
And then when I download it, the version information is correct.
### Motivation
### Why adding version information for each config makes sense
1. The version information is already recorded in the dataset config info data structure and is able to parse it correctly, so it makes sense to sync it with `push_to_hub`.
2. Keeping the version info in at the config level is different from version info at the branch level. As the former relates to the version of the specific dataset the config refers to rather than the version of the dataset curation itself.
## A explanation for the current behavior:
In [datasets/src/datasets/info.py:159](https://github.com/huggingface/datasets/blob/fb91fd3c9ea91a818681a777faf8d0c46f14c680/src/datasets/info.py#L159C1-L160C1
), the `_INCLUDED_INFO_IN_YAML` variable doesn't include `"version"`.
If my reading of the code is right, adding `"version"` to `_INCLUDED_INFO_IN_YAML`, would allow the version information to be uploaded to the hub.
### Your contribution
Request: add `"version"` to `_INCLUDE_INFO_IN_YAML` in [datasets/src/datasets/info.py:159](https://github.com/huggingface/datasets/blob/fb91fd3c9ea91a818681a777faf8d0c46f14c680/src/datasets/info.py#L159C1-L160C1
)
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7378/timeline | null | null | null | false | [
"Hi ! This sounds reasonable to me, feel free to open a PR :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/7377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7377/comments | https://api.github.com/repos/huggingface/datasets/issues/7377/events | https://github.com/huggingface/datasets/issues/7377 | 2,802,723,285 | I_kwDODunzps6nDinV | 7,377 | Support for sparse arrays with the Arrow Sparse Tensor format? | {
"login": "JulesGM",
"id": 3231217,
"node_id": "MDQ6VXNlcjMyMzEyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JulesGM",
"html_url": "https://github.com/JulesGM",
"followers_url": "https://api.github.com/users/JulesGM/followers",
"following_url": "https://api.github.com/users/JulesGM/following{/other_user}",
"gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions",
"organizations_url": "https://api.github.com/users/JulesGM/orgs",
"repos_url": "https://api.github.com/users/JulesGM/repos",
"events_url": "https://api.github.com/users/JulesGM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JulesGM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | 1 | 2025-01-21T20:14:35 | 2025-01-30T14:06:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Feature request
AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively.
### Motivation
This is important for example in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very benefitial, it would make just using Hugging Face Dataset objects a lot more straightforward and clean.
### Your contribution
We can discuss this further once the team comments of what they think about the feature, and if there were previous attempts at making it work, and understanding their evaluation of how hard it would be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7377/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/7377/timeline | null | null | null | false | [
"Hi ! Unfortunately the Sparse Tensor structure in Arrow is not part of the Arrow format (yes it's confusing...), so it's not possible to use it in `datasets`. It's a separate structure that doesn't correspond to any type or extension type in Arrow.\n\nThe Arrow community recently added an extension type for fixed shape tensors at https://arrow.apache.org/docs/format/CanonicalExtensions.html#fixed-shape-tensor, it should be possible to contribute an extension type for sparse tensors as well."
] |
https://api.github.com/repos/huggingface/datasets/issues/7376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7376/comments | https://api.github.com/repos/huggingface/datasets/issues/7376/events | https://github.com/huggingface/datasets/pull/7376 | 2,802,621,104 | PR_kwDODunzps6IiO9j | 7,376 | [docs] uv install | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-01-21T19:15:48 | 2025-01-21T19:39:29 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Proposes adding uv to installation docs (see Slack thread [here](https://huggingface.slack.com/archives/C01N44FJDHT/p1737377177709279) for more context) if you're interested! | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7376/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7376",
"html_url": "https://github.com/huggingface/datasets/pull/7376",
"diff_url": "https://github.com/huggingface/datasets/pull/7376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7376.patch",
"merged_at": null
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/7375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7375/comments | https://api.github.com/repos/huggingface/datasets/issues/7375/events | https://github.com/huggingface/datasets/issues/7375 | 2,800,609,218 | I_kwDODunzps6m7efC | 7,375 | vllm批量推理报错 | {
"login": "YuShengzuishuai",
"id": 51228154,
"node_id": "MDQ6VXNlcjUxMjI4MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/51228154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuShengzuishuai",
"html_url": "https://github.com/YuShengzuishuai",
"followers_url": "https://api.github.com/users/YuShengzuishuai/followers",
"following_url": "https://api.github.com/users/YuShengzuishuai/following{/other_user}",
"gists_url": "https://api.github.com/users/YuShengzuishuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuShengzuishuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuShengzuishuai/subscriptions",
"organizations_url": "https://api.github.com/users/YuShengzuishuai/orgs",
"repos_url": "https://api.github.com/users/YuShengzuishuai/repos",
"events_url": "https://api.github.com/users/YuShengzuishuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuShengzuishuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-01-21T03:22:23 | 2025-01-30T14:02:40 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug

### Steps to reproduce the bug

### Expected behavior

### Environment info
 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7375/timeline | null | null | null | false | [
"Make sure you have installed a recent version of `soundfile`"
] |
https://api.github.com/repos/huggingface/datasets/issues/7374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7374/comments | https://api.github.com/repos/huggingface/datasets/issues/7374/events | https://github.com/huggingface/datasets/pull/7374 | 2,793,442,320 | PR_kwDODunzps6IC66n | 7,374 | Remove .h5 from imagefolder extensions | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 0 | 2025-01-16T18:17:24 | 2025-01-16T18:26:40 | 2025-01-16T18:26:38 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | the format is not relevant for imagefolder, and makes the viewer fail to process datasets on HF (so many that the viewer takes more time to process new datasets) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7374",
"html_url": "https://github.com/huggingface/datasets/pull/7374",
"diff_url": "https://github.com/huggingface/datasets/pull/7374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7374.patch",
"merged_at": "2025-01-16T18:26:38"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/7373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7373/comments | https://api.github.com/repos/huggingface/datasets/issues/7373/events | https://github.com/huggingface/datasets/issues/7373 | 2,793,237,139 | I_kwDODunzps6mfWqT | 7,373 | Excessive RAM Usage After Dataset Concatenation concatenate_datasets | {
"login": "sam-hey",
"id": 40773225,
"node_id": "MDQ6VXNlcjQwNzczMjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/40773225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-hey",
"html_url": "https://github.com/sam-hey",
"followers_url": "https://api.github.com/users/sam-hey/followers",
"following_url": "https://api.github.com/users/sam-hey/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-hey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-hey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-hey/subscriptions",
"organizations_url": "https://api.github.com/users/sam-hey/orgs",
"repos_url": "https://api.github.com/users/sam-hey/repos",
"events_url": "https://api.github.com/users/sam-hey/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-hey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-01-16T16:33:10 | 2025-01-17T08:05:22 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
When loading a dataset from disk, concatenating it, and starting the training process, the RAM usage progressively increases until the kernel terminates the process due to excessive memory consumption.
https://github.com/huggingface/datasets/issues/2276
### Steps to reproduce the bug
```
rom datasets import DatasetDict, concatenate_datasets
dataset = DatasetDict.load_from_disk("data")
...
...
combined_dataset = concatenate_datasets(
[dataset[split] for split in dataset]
)
#start SentenceTransformer training
```
### Expected behavior
I would not expect RAM utilization to increase after concatenation. Removing the concatenation step resolves the issue
### Environment info
sentence-transformers==3.1.1
datasets==3.2.0
python3.10 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7373/timeline | null | null | null | false | [
"\n\n\n\nAdding a img from memray\nhttps://gist.github.com/sam-hey/00c958f13fb0f7b54d17197fe353002f"
] |
https://api.github.com/repos/huggingface/datasets/issues/7372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7372/comments | https://api.github.com/repos/huggingface/datasets/issues/7372/events | https://github.com/huggingface/datasets/issues/7372 | 2,791,760,968 | I_kwDODunzps6mZuRI | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | {
"login": "gaohongkui",
"id": 38203359,
"node_id": "MDQ6VXNlcjM4MjAzMzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38203359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaohongkui",
"html_url": "https://github.com/gaohongkui",
"followers_url": "https://api.github.com/users/gaohongkui/followers",
"following_url": "https://api.github.com/users/gaohongkui/following{/other_user}",
"gists_url": "https://api.github.com/users/gaohongkui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaohongkui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaohongkui/subscriptions",
"organizations_url": "https://api.github.com/users/gaohongkui/orgs",
"repos_url": "https://api.github.com/users/gaohongkui/repos",
"events_url": "https://api.github.com/users/gaohongkui/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaohongkui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 0 | 2025-01-16T05:47:20 | 2025-01-16T05:47:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7372/timeline | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/7371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7371/comments | https://api.github.com/repos/huggingface/datasets/issues/7371/events | https://github.com/huggingface/datasets/issues/7371 | 2,790,549,889 | I_kwDODunzps6mVGmB | 7,371 | 500 Server error with pushing a dataset | {
"login": "martinmatak",
"id": 7677814,
"node_id": "MDQ6VXNlcjc2Nzc4MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7677814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martinmatak",
"html_url": "https://github.com/martinmatak",
"followers_url": "https://api.github.com/users/martinmatak/followers",
"following_url": "https://api.github.com/users/martinmatak/following{/other_user}",
"gists_url": "https://api.github.com/users/martinmatak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martinmatak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martinmatak/subscriptions",
"organizations_url": "https://api.github.com/users/martinmatak/orgs",
"repos_url": "https://api.github.com/users/martinmatak/repos",
"events_url": "https://api.github.com/users/martinmatak/events{/privacy}",
"received_events_url": "https://api.github.com/users/martinmatak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-01-15T18:23:02 | 2025-01-15T20:06:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
Suddenly, I started getting this error message saying it was an internal error.
`Error creating/pushing dataset: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-65f5-4741-a353-2eacc47a3928)
Internal Error - We're working hard to fix this as soon as possible!
Traceback (most recent call last):
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/uufs/chpc.utah.edu/common/home/u1295595/grasp_dataset_converter/src/grasp_dataset_converter/main.py", line 142, in main
subset_train.push_to_hub(dataset_name, split='train')
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5624, in push_to_hub
commit_info = api.create_commit(
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1518, in _inner
return fn(self, *args, **kwargs)
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4087, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-65f5-4741-a353-2eacc47a3928)
Internal Error - We're working hard to fix this as soon as possible!`
### Steps to reproduce the bug
I am pushing a Dataset in a loop via push_to_hub API
### Expected behavior
It worked fine until it stopped working suddenly.
Expected behavior: It should start working again
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.0
- `huggingface_hub` version: 0.27.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7371/timeline | null | null | null | false | [
"EDIT: seems to be all good now. I'll add a comment if the error happens again within the next 48 hours. If it doesn't, I'll just close the topic."
] |
https://api.github.com/repos/huggingface/datasets/issues/7370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7370/comments | https://api.github.com/repos/huggingface/datasets/issues/7370/events | https://github.com/huggingface/datasets/pull/7370 | 2,787,972,786 | PR_kwDODunzps6HwAu7 | 7,370 | Support faster processing using pandas or polars functions in `IterableDataset.map()` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | 2 | 2025-01-14T18:14:13 | 2025-01-31T11:08:15 | 2025-01-30T13:30:57 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | Following the polars integration :)
Allow super fast processing using pandas or polars functions in `IterableDataset.map()` by adding support to pandas and polars formatting in `IterableDataset`
```python
import polars as pl
from datasets import Dataset
ds = Dataset.from_dict({"i": range(10)}).to_iterable_dataset()
ds = ds.with_format("polars")
ds = ds.map(lambda df: df.with_columns(pl.col("i").add(1).alias("i+1")), batched=True)
ds = ds.with_format(None)
print(next(iter(ds)))
# {'i': 0, 'i+1': 1}
```
It leverages arrow's zero-copy features from/to pandas and polars.
related to https://github.com/huggingface/datasets/issues/3444 #6762 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7370",
"html_url": "https://github.com/huggingface/datasets/pull/7370",
"diff_url": "https://github.com/huggingface/datasets/pull/7370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7370.patch",
"merged_at": "2025-01-30T13:30:57"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7370). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"merging this and will make some docs and communications around using polars for optimizing data processing :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/7369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7369/comments | https://api.github.com/repos/huggingface/datasets/issues/7369/events | https://github.com/huggingface/datasets/issues/7369 | 2,787,193,238 | I_kwDODunzps6mITGW | 7,369 | Importing dataset gives unhelpful error message when filenames in metadata.csv are not found in the directory | {
"login": "svencornetsdegroot",
"id": 38278139,
"node_id": "MDQ6VXNlcjM4Mjc4MTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/38278139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svencornetsdegroot",
"html_url": "https://github.com/svencornetsdegroot",
"followers_url": "https://api.github.com/users/svencornetsdegroot/followers",
"following_url": "https://api.github.com/users/svencornetsdegroot/following{/other_user}",
"gists_url": "https://api.github.com/users/svencornetsdegroot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svencornetsdegroot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svencornetsdegroot/subscriptions",
"organizations_url": "https://api.github.com/users/svencornetsdegroot/orgs",
"repos_url": "https://api.github.com/users/svencornetsdegroot/repos",
"events_url": "https://api.github.com/users/svencornetsdegroot/events{/privacy}",
"received_events_url": "https://api.github.com/users/svencornetsdegroot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 1 | 2025-01-14T13:53:21 | 2025-01-14T15:05:51 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | ### Describe the bug
While importing an audiofolder dataset, where the names of the audiofiles don't correspond to the filenames in the metadata.csv, we get an unclear error message that is not helpful for the debugging, i.e.
```
ValueError: Instruction "train" corresponds to no data!
```
### Steps to reproduce the bug
Assume an audiofolder with audiofiles, filename1.mp3, filename2.mp3 etc and a file metadata.csv which contains the columns file_name and sentence. The file_names are formatted like filename1.mp3, filename2.mp3 etc.
Load the audio
```
from datasets import load_dataset
load_dataset("audiofolder", data_dir='/path/to/audiofolder')
```
When the file_names in the csv are not in sync with the filenames in the audiofolder, then we get an Error message:
```
File /opt/conda/lib/python3.12/site-packages/datasets/arrow_reader.py:251, in BaseReader.read(self, name, instructions, split_infos, in_memory)
249 if not files:
250 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 251 raise ValueError(msg)
252 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
ValueError: Instruction "train" corresponds to no data!
```
load_dataset has a default value for the argument split = 'train'.
### Expected behavior
It would be better to get an error report something like:
```
The metadata.csv file has different filenames than the files in the datadirectory.
```
It would have saved me 4 hours of debugging.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-427.40.1.el9_4.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.8
- `huggingface_hub` version: 0.27.0
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7369/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7369/timeline | null | null | null | false | [
"I'd prefer even more verbose errors; like `\"file123.mp3\" is referenced in metadata.csv, but not found in the data directory '/path/to/audiofolder' ! (and 100+ more missing files)` Or something along those lines."
] |
https://api.github.com/repos/huggingface/datasets/issues/7368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7368/comments | https://api.github.com/repos/huggingface/datasets/issues/7368/events | https://github.com/huggingface/datasets/pull/7368 | 2,784,272,477 | PR_kwDODunzps6HjE97 | 7,368 | Add with_split to DatasetDict.map | {
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | 5 | 2025-01-13T15:09:56 | 2025-02-21T07:50:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | #7356 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7368",
"html_url": "https://github.com/huggingface/datasets/pull/7368",
"diff_url": "https://github.com/huggingface/datasets/pull/7368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7368.patch",
"merged_at": null
} | true | [
"Can you check this out, @lhoestq?",
"cc @lhoestq @albertvillanova ",
"@lhoestq\r\n",
"@lhoestq\r\n",
"@lhoestq"
] |
End of preview. Expand
in Dataset Viewer.
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]
- Downloads last month
- 3