The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type timestamp[s] to null
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2223, in cast_table_to_schema
                  arrays = [
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2224, in <listcomp>
                  cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2001, in cast_array_to_feature
                  arrays = [
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2002, in <listcomp>
                  _c(array.field(name) if name in array_fields else null_array, subfeature)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2086, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1948, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type timestamp[s] to null
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns (name: type):

url: string
repository_url: string
labels_url: string
comments_url: string
events_url: string
html_url: string
id: int64
node_id: string
number: int64
title: string
user: dict
labels: list
state: string
locked: bool
assignee: null
assignees: list
milestone: null
comments: list
created_at: int64
updated_at: int64
closed_at: null
author_association: string
type: null
active_lock_reason: null
sub_issues_summary: dict
issue_dependencies_summary: dict
body: string
closed_by: null
reactions: dict
timeline_url: string
performed_via_github_app: null
state_reason: null
draft: float64
pull_request: dict
is_pull_request: bool
https://api.github.com/repos/huggingface/datasets/issues/7757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7757/comments
https://api.github.com/repos/huggingface/datasets/issues/7757/events
https://github.com/huggingface/datasets/issues/7757
3,389,535,011
I_kwDODunzps7KCDMj
7,757
Add support for `.conll` file format in datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4", "events_url": "https://api.github.com/users/namesarnav/events{/privacy}", "followers_url": "https://api.github.com/users/namesarnav/followers", "following_url": "https://api.github.com/users/namesarnav/following{/other_user}", "gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/namesarnav", "id": 88763593, "login": "namesarnav", "node_id": "MDQ6VXNlcjg4NzYzNTkz", "organizations_url": "https://api.github.com/users/namesarnav/orgs", "received_events_url": "https://api.github.com/users/namesarnav/received_events", "repos_url": "https://api.github.com/users/namesarnav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions", "type": "User", "url": "https://api.github.com/users/namesarnav", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
1,757,143,539,000
1,757,143,539,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request

I'd like to request native support in the Hugging Face `datasets` library for reading `.conll` files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.

Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into `datasets`. Having built-in support would save time and make workflows smoother for researchers and practitioners.

I propose adding a `conll` dataset builder or file parser to `datasets` that can:

- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
  - `tokens`: list of strings
  - `tags` (or similar): list of labels aligned with tokens

Given a `.conll` snippet like:

```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```

The dataset should load as:

```
{
  "tokens": ["EU", "rejects", "German", "call", "."],
  "tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```

### Motivation

- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll` files.
- Right now you have to write your own parsing scripts. Built-in support would unify this process and would be much more convenient.

### Your contribution

I'd be happy to contribute by implementing this feature. My plan is to:

- Add a new dataset script (`conll.py`) to handle `.conll` files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
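The parsing logic described above can be sketched in plain Python. This is a minimal illustration only, not the proposed `conll.py` builder: the `parse_conll` name, the whitespace delimiter default, and the choice of taking the last column as the tag are all assumptions made for this example.

```python
def parse_conll(text, delimiter=None):
    """Split CoNLL-style text into examples of aligned tokens and tags.

    Sentence boundaries are assumed to be blank lines; each non-blank line
    holds the token in its first column and the label in its last column
    (intermediate columns, e.g. POS tags, are ignored here).
    """
    examples, tokens, tags = [], [], []
    for line in text.splitlines():
        if not line.strip():  # blank line ends the current sentence
            if tokens:
                examples.append({"tokens": tokens, "tags": tags})
                tokens, tags = [], []
            continue
        cols = line.split(delimiter)
        tokens.append(cols[0])
        tags.append(cols[-1])
    if tokens:  # flush a trailing sentence with no final blank line
        examples.append({"tokens": tokens, "tags": tags})
    return examples

snippet = "EU NNP B-ORG\nrejects VBZ O\nGerman JJ B-MISC\ncall NN O\n. . O\n"
```

On the snippet from the issue, `parse_conll(snippet)` yields exactly the `tokens`/`tags` example shown above.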
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7757/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7756/comments
https://api.github.com/repos/huggingface/datasets/issues/7756/events
https://github.com/huggingface/datasets/issues/7756
3,387,076,693
I_kwDODunzps7J4rBV
7,756
datasets.map(f, num_proc=N) hangs with N>1 when run on import
{ "avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4", "events_url": "https://api.github.com/users/arjunguha/events{/privacy}", "followers_url": "https://api.github.com/users/arjunguha/followers", "following_url": "https://api.github.com/users/arjunguha/following{/other_user}", "gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arjunguha", "id": 20065, "login": "arjunguha", "node_id": "MDQ6VXNlcjIwMDY1", "organizations_url": "https://api.github.com/users/arjunguha/orgs", "received_events_url": "https://api.github.com/users/arjunguha/received_events", "repos_url": "https://api.github.com/users/arjunguha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions", "type": "User", "url": "https://api.github.com/users/arjunguha", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,757,068,321,000
1,757,068,321,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top level, Python hangs.

### Steps to reproduce the bug

1. Create a file that runs `datasets.map` at the top level:

```bash
cat <<EOF > import_me.py
import datasets

the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```

2. Start a Python REPL:

```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```

3. Import the file:

```python
import import_me
```

Observe the hang.

### Expected behavior

Ideally it would not hang, or it would fall back to `num_proc=1` with a warning.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7756/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7755/comments
https://api.github.com/repos/huggingface/datasets/issues/7755/events
https://github.com/huggingface/datasets/pull/7755
3,386,079,181
PR_kwDODunzps6m-MTU
7,755
Support pathlib.Path for feature input
{ "avatar_url": "https://avatars.githubusercontent.com/u/5422226?v=4", "events_url": "https://api.github.com/users/Joshua-Chin/events{/privacy}", "followers_url": "https://api.github.com/users/Joshua-Chin/followers", "following_url": "https://api.github.com/users/Joshua-Chin/following{/other_user}", "gists_url": "https://api.github.com/users/Joshua-Chin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Joshua-Chin", "id": 5422226, "login": "Joshua-Chin", "node_id": "MDQ6VXNlcjU0MjIyMjY=", "organizations_url": "https://api.github.com/users/Joshua-Chin/orgs", "received_events_url": "https://api.github.com/users/Joshua-Chin/received_events", "repos_url": "https://api.github.com/users/Joshua-Chin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Joshua-Chin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Joshua-Chin/subscriptions", "type": "User", "url": "https://api.github.com/users/Joshua-Chin", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,757,039,887,000
1,757,039,887,000
null
NONE
null
null
null
null
This PR adds support for specifying image, video, audio, and pdf features using `pathlib.Path`.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7755/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7755.diff", "html_url": "https://github.com/huggingface/datasets/pull/7755", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7755.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7755" }
true
https://api.github.com/repos/huggingface/datasets/issues/7754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7754/comments
https://api.github.com/repos/huggingface/datasets/issues/7754/events
https://github.com/huggingface/datasets/pull/7754
3,384,883,008
PR_kwDODunzps6m6qRo
7,754
Add columns support to JSON loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,757,010,086,000
1,757,010,086,000
null
CONTRIBUTOR
null
null
null
null
New fix to #7594

This PR adds support for the `columns` argument in the JSON dataset builder.

- Added `columns` parameter to `JsonConfig`.
- Applied column filtering after table creation, filling missing columns with `None`.
- Extended tests to cover:
  - Selecting a subset of columns
  - Handling missing requested columns
  - Column selection on the list-of-strings case
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7754/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7754.diff", "html_url": "https://github.com/huggingface/datasets/pull/7754", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7754" }
true
https://api.github.com/repos/huggingface/datasets/issues/7753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7753/comments
https://api.github.com/repos/huggingface/datasets/issues/7753/events
https://github.com/huggingface/datasets/issues/7753
3,381,831,487
I_kwDODunzps7Jkqc_
7,753
datasets massively slows data reads, even in memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4", "events_url": "https://api.github.com/users/lrast/events{/privacy}", "followers_url": "https://api.github.com/users/lrast/followers", "following_url": "https://api.github.com/users/lrast/following{/other_user}", "gists_url": "https://api.github.com/users/lrast/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lrast", "id": 1191040, "login": "lrast", "node_id": "MDQ6VXNlcjExOTEwNDA=", "organizations_url": "https://api.github.com/users/lrast/orgs", "received_events_url": "https://api.github.com/users/lrast/received_events", "repos_url": "https://api.github.com/users/lrast/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lrast/subscriptions", "type": "User", "url": "https://api.github.com/users/lrast", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,756,950,324,000
1,756,950,324,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

Loading image data in a Hugging Face dataset results in very slow read speeds, approximately 1000 times slower than reading the same data from a PyTorch dataset. This applies even when the dataset is loaded into RAM with `keep_in_memory=True`. The following script reproduces the result with random data, but it applies equally to datasets loaded from the Hub.

### Steps to reproduce the bug

```python
import torch
import time
from datasets import Dataset

images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)

pt_dataset = torch.utils.data.TensorDataset(images, labels)

hf_dataset = Dataset.from_dict({'image': images, 'label': labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)

# measure access speeds
def time_access(dataset, img_col):
    start_time = time.time()
    for i in range(1000):
        _ = dataset[i][img_col].shape
    end_time = time.time()
    return end_time - start_time

print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```

### Expected behavior

For me, the above script produces:

```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```

I think that this difference is larger than expected.

### Environment info

- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7753/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7752/comments
https://api.github.com/repos/huggingface/datasets/issues/7752/events
https://github.com/huggingface/datasets/pull/7752
3,358,374,882
PR_kwDODunzps6ljQLy
7,752
Fix: Update Dill Version in Setup py
{ "avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4", "events_url": "https://api.github.com/users/Navanit-git/events{/privacy}", "followers_url": "https://api.github.com/users/Navanit-git/followers", "following_url": "https://api.github.com/users/Navanit-git/following{/other_user}", "gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Navanit-git", "id": 98005188, "login": "Navanit-git", "node_id": "U_kgDOBddwxA", "organizations_url": "https://api.github.com/users/Navanit-git/orgs", "received_events_url": "https://api.github.com/users/Navanit-git/received_events", "repos_url": "https://api.github.com/users/Navanit-git/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions", "type": "User", "url": "https://api.github.com/users/Navanit-git", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "https://github.com/huggingface/datasets/issues/7751" ]
1,756,280,391,000
1,756,280,861,000
null
NONE
null
null
null
null
Currently the dill version is pinned below 0.3.9, while major libraries like multiprocess and gepa now require dill 0.4.0, which causes an installation conflict. This small PR updates the dill pin.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7752/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7752.diff", "html_url": "https://github.com/huggingface/datasets/pull/7752", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7752.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7752" }
true
https://api.github.com/repos/huggingface/datasets/issues/7751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7751/comments
https://api.github.com/repos/huggingface/datasets/issues/7751/events
https://github.com/huggingface/datasets/issues/7751
3,358,369,976
I_kwDODunzps7ILKi4
7,751
Dill version update
{ "avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4", "events_url": "https://api.github.com/users/Navanit-git/events{/privacy}", "followers_url": "https://api.github.com/users/Navanit-git/followers", "following_url": "https://api.github.com/users/Navanit-git/following{/other_user}", "gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Navanit-git", "id": 98005188, "login": "Navanit-git", "node_id": "U_kgDOBddwxA", "organizations_url": "https://api.github.com/users/Navanit-git/orgs", "received_events_url": "https://api.github.com/users/Navanit-git/received_events", "repos_url": "https://api.github.com/users/Navanit-git/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions", "type": "User", "url": "https://api.github.com/users/Navanit-git", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "#7752 " ]
1,756,280,310,000
1,756,280,460,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

Why is `datasets` not updating dill? I just want to know what the repercussions would be if I updated the dill version. For now I have to update the pin in multiple places; for example, multiprocess requires dill 0.4.0, so why not `datasets`? Adding a PR too.

### Steps to reproduce the bug

.

### Expected behavior

.

### Environment info

.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7751/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7750/comments
https://api.github.com/repos/huggingface/datasets/issues/7750/events
https://github.com/huggingface/datasets/pull/7750
3,357,275,291
PR_kwDODunzps6lfwcx
7,750
Refactor: use unpacking in load.py for time and memory improvement
{ "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4", "events_url": "https://api.github.com/users/brchristian/events{/privacy}", "followers_url": "https://api.github.com/users/brchristian/followers", "following_url": "https://api.github.com/users/brchristian/following{/other_user}", "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brchristian", "id": 2460418, "login": "brchristian", "node_id": "MDQ6VXNlcjI0NjA0MTg=", "organizations_url": "https://api.github.com/users/brchristian/orgs", "received_events_url": "https://api.github.com/users/brchristian/received_events", "repos_url": "https://api.github.com/users/brchristian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions", "type": "User", "url": "https://api.github.com/users/brchristian", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,756,246,391,000
1,756,246,391,000
null
CONTRIBUTOR
null
null
null
null
In `src/datasets/load.py`, we can use unpacking rather than concatenating two lists for improved time and memory performance. It’s a small improvement in absolute terms, but a consistent and measurable one:

```diff
- ALL_ALLOWED_EXTENSIONS = list(_EXTENSION_TO_MODULE.keys()) + [".zip"]
+ ALL_ALLOWED_EXTENSIONS = [*_EXTENSION_TO_MODULE.keys(), ".zip"]
```

Benchmarking shows approximately a 32.3% time improvement and a 30.6% memory improvement. Example benchmarking script:

```python
#!/usr/bin/env python3
"""
Benchmark script to test performance of list(_EXTENSION_TO_MODULE.keys())
vs [*_EXTENSION_TO_MODULE.keys()]
"""
import time
import tracemalloc
from statistics import mean, stdev

# Simulate _EXTENSION_TO_MODULE - based on actual size from datasets
_EXTENSION_TO_MODULE = {
    f".ext{i}": f"module{i}" for i in range(20)  # Realistic size
}

def method_old():
    """Current implementation using list()"""
    return list(_EXTENSION_TO_MODULE.keys()) + [".zip"]

def method_new():
    """Proposed implementation using unpacking"""
    return [*_EXTENSION_TO_MODULE.keys(), ".zip"]

def benchmark_time(func, iterations=100000):
    """Benchmark execution time"""
    times = []
    for _ in range(10):  # Multiple runs for accuracy
        start = time.perf_counter()
        for _ in range(iterations):
            func()
        end = time.perf_counter()
        times.append((end - start) / iterations * 1_000_000)  # microseconds
    return mean(times), stdev(times)

def benchmark_memory(func):
    """Benchmark peak memory usage"""
    tracemalloc.start()
    func()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

if __name__ == "__main__":
    print("Benchmarking list() vs unpacking performance...\n")

    # Time benchmarks
    old_time, old_std = benchmark_time(method_old)
    new_time, new_std = benchmark_time(method_new)

    print("Time Performance (µs per operation):")
    print(f"  list() approach:    {old_time:.3f} ± {old_std:.3f}")
    print(f"  unpacking approach: {new_time:.3f} ± {new_std:.3f}")
    print(f"  Improvement: {((old_time - new_time) / old_time * 100):.1f}% faster")

    # Memory benchmarks
    old_mem = benchmark_memory(method_old)
    new_mem = benchmark_memory(method_new)

    print("\nMemory Usage (bytes):")
    print(f"  list() approach:    {old_mem}")
    print(f"  unpacking approach: {new_mem}")
    print(f"  Reduction: {old_mem - new_mem} bytes ({((old_mem - new_mem) / old_mem * 100):.1f}% less)")

    # Verify identical results
    assert method_old() == method_new(), "Results should be identical!"
    print("\n✓ Both methods produce identical results")
```

Results:

```
Benchmarking list() vs unpacking performance...

Time Performance (µs per operation):
  list() approach:    0.213 ± 0.020
  unpacking approach: 0.144 ± 0.002
  Improvement: 32.3% faster

Memory Usage (bytes):
  list() approach:    392
  unpacking approach: 272
  Reduction: 120 bytes (30.6% less)

✓ Both methods produce identical results
```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7750/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7750.diff", "html_url": "https://github.com/huggingface/datasets/pull/7750", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7750.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7750" }
true
https://api.github.com/repos/huggingface/datasets/issues/7749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7749/comments
https://api.github.com/repos/huggingface/datasets/issues/7749/events
https://github.com/huggingface/datasets/pull/7749
3,356,567,923
PR_kwDODunzps6lddDW
7,749
Fix typo in error message for cache directory deletion
{ "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4", "events_url": "https://api.github.com/users/brchristian/events{/privacy}", "followers_url": "https://api.github.com/users/brchristian/followers", "following_url": "https://api.github.com/users/brchristian/following{/other_user}", "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brchristian", "id": 2460418, "login": "brchristian", "node_id": "MDQ6VXNlcjI0NjA0MTg=", "organizations_url": "https://api.github.com/users/brchristian/orgs", "received_events_url": "https://api.github.com/users/brchristian/received_events", "repos_url": "https://api.github.com/users/brchristian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions", "type": "User", "url": "https://api.github.com/users/brchristian", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,756,230,442,000
1,756,230,442,000
null
CONTRIBUTOR
null
null
null
null
This PR fixes a small typo in an error message in `src/datasets/fingerprint.py`: https://github.com/huggingface/datasets/blob/910fab20606893f69b4fccac5fcc883dddf5a14d/src/datasets/fingerprint.py#L63

```diff
- occured
+ occurred
```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7749/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7749.diff", "html_url": "https://github.com/huggingface/datasets/pull/7749", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7749.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7749" }
true
https://api.github.com/repos/huggingface/datasets/issues/7748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7748/comments
https://api.github.com/repos/huggingface/datasets/issues/7748/events
https://github.com/huggingface/datasets/pull/7748
3,347,137,663
PR_kwDODunzps6k-adX
7,748
docs: Streaming best practices
{ "avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4", "events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}", "followers_url": "https://api.github.com/users/Abdul-Omira/followers", "following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}", "gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Abdul-Omira", "id": 32625230, "login": "Abdul-Omira", "node_id": "MDQ6VXNlcjMyNjI1MjMw", "organizations_url": "https://api.github.com/users/Abdul-Omira/orgs", "received_events_url": "https://api.github.com/users/Abdul-Omira/received_events", "repos_url": "https://api.github.com/users/Abdul-Omira/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions", "type": "User", "url": "https://api.github.com/users/Abdul-Omira", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,755,908,323,000
1,757,212,416,000
null
NONE
null
null
null
null
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
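The multi-worker sharding and deterministic per-epoch shuffling that the page covers can be sketched in plain Python (illustrative only — `IterableDataset` implements this internally; the helper names below are made up for the sketch):

```python
import random

def worker_shard(num_shards, worker_id, num_workers):
    # Each dataloader worker takes every num_workers-th shard (round-robin),
    # mirroring how shards are distributed across workers so that no example
    # is read twice and none is skipped.
    return list(range(num_shards))[worker_id::num_workers]

def epoch_shuffle(items, seed, epoch):
    # Deterministic shuffling: the effective seed combines a base seed with
    # the epoch number, so every worker shuffles identically within an epoch
    # but differently across epochs (the pattern set_epoch enables).
    rng = random.Random(seed + epoch)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

# 8 shards over 3 workers: every shard is assigned exactly once
assignments = [worker_shard(8, w, 3) for w in range(3)]
print(assignments)  # [[0, 3, 6], [1, 4, 7], [2, 5]]
```

The round-robin assignment is why the docs recommend keeping the number of shards a multiple of the number of workers: otherwise some workers (like worker 2 above) receive fewer shards.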
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7748/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7748.diff", "html_url": "https://github.com/huggingface/datasets/pull/7748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7748" }
true
https://api.github.com/repos/huggingface/datasets/issues/7747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7747/comments
https://api.github.com/repos/huggingface/datasets/issues/7747/events
https://github.com/huggingface/datasets/pull/7747
3,347,098,038
PR_kwDODunzps6k-Rtd
7,747
Add wikipedia-2023-redirects dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4", "events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}", "followers_url": "https://api.github.com/users/Abdul-Omira/followers", "following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}", "gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Abdul-Omira", "id": 32625230, "login": "Abdul-Omira", "node_id": "MDQ6VXNlcjMyNjI1MjMw", "organizations_url": "https://api.github.com/users/Abdul-Omira/orgs", "received_events_url": "https://api.github.com/users/Abdul-Omira/received_events", "repos_url": "https://api.github.com/users/Abdul-Omira/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions", "type": "User", "url": "https://api.github.com/users/Abdul-Omira", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,755,906,593,000
1,757,212,438,000
null
NONE
null
null
null
null
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)

Summary
- New dataset loader: `wikipedia_2023_redirects`
- Canonical Wikipedia pages enriched with:
  - redirects (aliases pointing to the page)
  - 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming

Motivation

RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews

This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.

Features
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string

Licensing
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain

The PR docs mention both, and the module docstring cites sources.

Notes
- The URLs in `_get_urls_for_config` are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
  - XML page dumps: https://dumps.wikimedia.org/
  - Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is a per-title sum across 2023.

Testing
- `make style && make quality`
- `pytest -q tests/test_dataset_wikipedia_2023_redirects.py`

Example
```python
from datasets import load_dataset

ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```

Acknowledgements
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7747/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7747.diff", "html_url": "https://github.com/huggingface/datasets/pull/7747", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7747" }
true
https://api.github.com/repos/huggingface/datasets/issues/7746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
https://api.github.com/repos/huggingface/datasets/issues/7746/events
https://github.com/huggingface/datasets/issues/7746
3,345,391,211
I_kwDODunzps7HZp5r
7,746
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
{ "avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4", "events_url": "https://api.github.com/users/Awesome075/events{/privacy}", "followers_url": "https://api.github.com/users/Awesome075/followers", "following_url": "https://api.github.com/users/Awesome075/following{/other_user}", "gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Awesome075", "id": 187888489, "login": "Awesome075", "node_id": "U_kgDOCzLzaQ", "organizations_url": "https://api.github.com/users/Awesome075/orgs", "received_events_url": "https://api.github.com/users/Awesome075/received_events", "repos_url": "https://api.github.com/users/Awesome075/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions", "type": "User", "url": "https://api.github.com/users/Awesome075", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊" ]
1,755,867,123,000
1,756,326,215,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
Hi,

The canonical `multi_news` dataset is currently broken and fails to load. It points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter. The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.

I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly: **[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**

Could the maintainers update the official `multi_news` dataset to use this working Parquet version, or guide me in doing so? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository, which would fix the dataset for all users and ensure its continued availability.

Thank you!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
https://api.github.com/repos/huggingface/datasets/issues/7745/events
https://github.com/huggingface/datasets/issues/7745
3,345,286,773
I_kwDODunzps7HZQZ1
7,745
Audio mono argument no longer supported, despite class documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4", "events_url": "https://api.github.com/users/jheitz/events{/privacy}", "followers_url": "https://api.github.com/users/jheitz/followers", "following_url": "https://api.github.com/users/jheitz/following{/other_user}", "gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jheitz", "id": 5666041, "login": "jheitz", "node_id": "MDQ6VXNlcjU2NjYwNDE=", "organizations_url": "https://api.github.com/users/jheitz/orgs", "received_events_url": "https://api.github.com/users/jheitz/received_events", "repos_url": "https://api.github.com/users/jheitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jheitz/subscriptions", "type": "User", "url": "https://api.github.com/users/jheitz", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?" ]
1,755,864,941,000
1,756,059,761,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

Either update the documentation, or re-introduce the `mono` flag (and the corresponding logic to convert the audio to mono).

### Steps to reproduce the bug

```python
Audio(sampling_rate=16000, mono=True)
```

raises:

```
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
```

However, the class documentation says:

```
Args:
    sampling_rate (`int`, *optional*):
        Target sampling rate. If `None`, the native sampling rate is used.
    mono (`bool`, defaults to `True`):
        Whether to convert the audio signal to mono by averaging samples across channels.
    [...]
```

### Expected behavior

The above call should either work, or the documentation within the `Audio` class should be updated.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
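Until the flag is re-added (or the docstring corrected), channel averaging can be done by hand, e.g. inside a `map`. A minimal sketch of the averaging itself, assuming a channels-first layout of plain Python lists (real decoded audio is typically a NumPy array and may be shaped differently):

```python
def to_mono(samples):
    # samples: list of per-channel sample lists, e.g. [left, right]
    # (channels-first layout is an assumption for this sketch)
    num_channels = len(samples)
    length = len(samples[0])
    # average across channels, sample by sample — what the documented
    # mono=True behavior describes
    return [sum(ch[i] for ch in samples) / num_channels for i in range(length)]

stereo = [[1.0, 0.0, 1.0],   # left channel
          [0.0, 1.0, 1.0]]   # right channel
print(to_mono(stereo))  # [0.5, 0.5, 1.0]
```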
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
https://api.github.com/repos/huggingface/datasets/issues/7744/events
https://github.com/huggingface/datasets/issues/7744
3,343,510,686
I_kwDODunzps7HSeye
7,744
dtype: ClassLabel is not parsed correctly in `features.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4", "events_url": "https://api.github.com/users/cmatKhan/events{/privacy}", "followers_url": "https://api.github.com/users/cmatKhan/followers", "following_url": "https://api.github.com/users/cmatKhan/following{/other_user}", "gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cmatKhan", "id": 43553003, "login": "cmatKhan", "node_id": "MDQ6VXNlcjQzNTUzMDAz", "organizations_url": "https://api.github.com/users/cmatKhan/orgs", "received_events_url": "https://api.github.com/users/cmatKhan/received_events", "repos_url": "https://api.github.com/users/cmatKhan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions", "type": "User", "url": "https://api.github.com/users/cmatKhan", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,755,818,930,000
1,755,818,930,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
`dtype: ClassLabel` in the README.md YAML metadata is parsed incorrectly and causes the data viewer to fail.

This YAML in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I have since changed `ClassLabel` to `string` to use a different dtype and avoid the error):

```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
  features:
  - name: start
    dtype: int32
    description: Start coordinate (1-based, **inclusive**)
  - name: end
    dtype: int32
    description: End coordinate (1-based, **inclusive**)
  - name: strand
    dtype: ClassLabel
...
```

produces the following error in the data viewer:

```
Error code:   ConfigNamesError
Exception:    ValueError
Message:      Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
                  config_names = get_dataset_config_names(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
                  dataset_module = dataset_module_factory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
                  raise e1 from None
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
                  return HubDatasetModuleFactory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
                  dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
                  dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
                  yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
                  return cls.from_dict(from_yaml_inner(yaml_data))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
                  obj = generate_from_dict(dic)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
                  return {key: generate_from_dict(value) for key, value in obj.items()}
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
                  return {key: generate_from_dict(value) for key, value in obj.items()}
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
                  raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
              ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```

I think this is caused by this line:
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013

Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py):

```python
import itertools
import re

_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"

def snakecase_to_camelcase(name):
    """Convert snake-case string to camel-case string."""
    name = _single_underscore_re.split(name)
    name = [_multiple_underscores_re.split(n) for n in name]
    return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")

snakecase_to_camelcase("ClassLabel")
```

Result:

```raw
'Classlabel'
```
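One possible fix (a sketch only, not the project's actual patch) is to leave names that already contain uppercase letters untouched, on the assumption that inputs are either snake_case or already-correct CamelCase:

```python
def snakecase_to_camelcase_fixed(name):
    # Assumption: names are either snake_case (all lowercase, underscores)
    # or already-correct CamelCase. Only transform the former; passing
    # "ClassLabel" through unchanged avoids mangling it into "Classlabel".
    if any(c.isupper() for c in name):
        return name
    return "".join(part.capitalize() for part in name.split("_") if part)

print(snakecase_to_camelcase_fixed("class_label"))  # ClassLabel
print(snakecase_to_camelcase_fixed("ClassLabel"))   # ClassLabel
```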
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7743/comments
https://api.github.com/repos/huggingface/datasets/issues/7743/events
https://github.com/huggingface/datasets/pull/7743
3,342,611,297
PR_kwDODunzps6ku8Jw
7,743
Refactor HDF5 and preserve tree structure
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq this is ready for you now!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7743). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,755,797,297,000
1,756,222,085,000
null
CONTRIBUTOR
null
null
null
null
Closes #7741. Follow-up to #7690.

- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s; it used to be converted to two `float64`s)
- Support for ndim complex, compound, and more field types for compound (due to reusing the main parser, compound types are treated like groups)
- Cleaned-up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` in config. You have to give `Features` (i.e., must specify types) if filtering
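The tree-preserving inference described in the first bullet can be pictured with a plain-Python sketch (nested dicts stand in for h5py groups; this is illustrative only, not the PR's actual code):

```python
def infer_tree(node):
    # Groups (dicts here) recurse into their children, so the nested
    # structure of the file survives into the inferred features; leaves map
    # to a stand-in "feature" string instead of a real dtype.
    if isinstance(node, dict):
        return {name: infer_tree(child) for name, child in node.items()}
    return f"value<{type(node).__name__}>"

# mimics an HDF5 file with a group "images" containing a dataset and a subgroup
h5_like = {"images": {"raw": [1, 2, 3], "meta": {"scale": 0.5}}}
print(infer_tree(h5_like))
# {'images': {'raw': 'value<list>', 'meta': {'scale': 'value<float>'}}}
```

Flattening (the old behavior) would instead have produced keys like `"images/raw"`; the recursion keeps `meta` nested under `images` the way the file stores it.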
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7743/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7743.diff", "html_url": "https://github.com/huggingface/datasets/pull/7743", "merged_at": "2025-08-26T15:28:05", "patch_url": "https://github.com/huggingface/datasets/pull/7743.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7743" }
true
https://api.github.com/repos/huggingface/datasets/issues/7742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
https://api.github.com/repos/huggingface/datasets/issues/7742/events
https://github.com/huggingface/datasets/issues/7742
3,336,704,928
I_kwDODunzps7G4hOg
7,742
module 'pyarrow' has no attribute 'PyExtensionType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4", "events_url": "https://api.github.com/users/mnedelko/events{/privacy}", "followers_url": "https://api.github.com/users/mnedelko/followers", "following_url": "https://api.github.com/users/mnedelko/following{/other_user}", "gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mnedelko", "id": 6106392, "login": "mnedelko", "node_id": "MDQ6VXNlcjYxMDYzOTI=", "organizations_url": "https://api.github.com/users/mnedelko/orgs", "received_events_url": "https://api.github.com/users/mnedelko/received_events", "repos_url": "https://api.github.com/users/mnedelko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions", "type": "User", "url": "https://api.github.com/users/mnedelko", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Just checked out the files and this had already been addressed" ]
1,755,670,473,000
1,755,671,027,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

When importing certain libraries, users encounter the following error, which can be traced back to the `datasets` library:

```
module 'pyarrow' has no attribute 'PyExtensionType'
```

Example issue: https://github.com/explodinggradients/ragas/issues/2170

The issue occurs for the following reason. I will proceed to submit a PR with the fix below.

**Issue Reason**

PyArrow version 21.0.0 doesn't have `PyExtensionType`. The `PyExtensionType` class was deprecated in favor of `ExtensionType` in PyArrow 13.0.0 and removed in later releases.

**Issue Solution**

Making the following changes to the installed library files temporarily resolves the issue. I will submit a PR to the `datasets` library in the meantime.

`env_name/lib/python3.10/site-packages/datasets/features/features.py`:

```diff
     self.shape = tuple(shape)
     self.value_type = dtype
     self.storage_dtype = self._generate_dtype(self.value_type)
-    pa.PyExtensionType.__init__(self, self.storage_dtype)
+    pa.ExtensionType.__init__(self, self.storage_dtype)

 def __reduce__(self):
     return self.__class__, (
```

and, in the same file:

```diff
 _type: str = field(default="Array5D", init=False, repr=False)


-class _ArrayXDExtensionType(pa.PyExtensionType):
+class _ArrayXDExtensionType(pa.ExtensionType):
     ndims: Optional[int] = None

     def __init__(self, shape: tuple, dtype: str):
```

### Steps to reproduce the bug

Ragas version: 0.3.1
Python version: 3.11

**Code to Reproduce** (in a notebook):

```python
!pip install ragas
from ragas import evaluate
```

### Expected behavior

The required package installs without issue.

### Environment info

Jupyter Notebook, venv.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
https://api.github.com/repos/huggingface/datasets/issues/7741/events
https://github.com/huggingface/datasets/issues/7741
3,334,848,656
I_kwDODunzps7GxcCQ
7,741
Preserve tree structure when loading HDF5
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
1,755,618,125,000
1,756,222,086,000
null
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374 ### Motivation `datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user. ### Your contribution I'll open a PR (#7743)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7740/comments
https://api.github.com/repos/huggingface/datasets/issues/7740/events
https://github.com/huggingface/datasets/pull/7740
3,334,693,293
PR_kwDODunzps6kUMKM
7,740
Document HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@lhoestq any guidance on what else to add, or feedback on what's there now? It seems a bit minimal, but I don't think HDF5 is worth an entire page?" ]
1,755,615,184,000
1,756,918,519,000
null
CONTRIBUTOR
null
null
null
null
I think these are at least the main places where we should put content. Ideally the final version does not just repeat it (ref #7690).

- [x] Wait for #7743 to land
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7740/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/7740.diff", "html_url": "https://github.com/huggingface/datasets/pull/7740", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7740" }
true
https://api.github.com/repos/huggingface/datasets/issues/7739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
https://api.github.com/repos/huggingface/datasets/issues/7739/events
https://github.com/huggingface/datasets/issues/7739
3,331,537,762
I_kwDODunzps7Gkzti
7,739
Replacement of "Sequence" feature with "List" breaks backward compatibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4", "events_url": "https://api.github.com/users/evmaki/events{/privacy}", "followers_url": "https://api.github.com/users/evmaki/followers", "following_url": "https://api.github.com/users/evmaki/following{/other_user}", "gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/evmaki", "id": 15764776, "login": "evmaki", "node_id": "MDQ6VXNlcjE1NzY0Nzc2", "organizations_url": "https://api.github.com/users/evmaki/orgs", "received_events_url": "https://api.github.com/users/evmaki/received_events", "repos_url": "https://api.github.com/users/evmaki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/evmaki/subscriptions", "type": "User", "url": "https://api.github.com/users/evmaki", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,755,538,118,000
1,755,538,118,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with 4.0.0 that use that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility. Why is this a problem? I have a complex preprocessing and training pipeline that depends on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, it becomes unusable, and we have no way of "fixing" it. I can load it in 4.0.0, but I can't re-save it with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons. Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
https://api.github.com/repos/huggingface/datasets/issues/7738/events
https://github.com/huggingface/datasets/issues/7738
3,328,948,690
I_kwDODunzps7Ga7nS
7,738
Allow saving multi-dimensional ndarray with dynamic shapes
{ "avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4", "events_url": "https://api.github.com/users/ryan-minato/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-minato/followers", "following_url": "https://api.github.com/users/ryan-minato/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-minato", "id": 82735346, "login": "ryan-minato", "node_id": "MDQ6VXNlcjgyNzM1MzQ2", "organizations_url": "https://api.github.com/users/ryan-minato/orgs", "received_events_url": "https://api.github.com/users/ryan-minato/received_events", "repos_url": "https://api.github.com/users/ryan-minato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-minato", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalExtensions.html#variable-shape-tensor) this so it's a good time to re-evaluate!", "Happy to help with this, maybe we can think of adding a new type `Tensor` (instead of Array2D, 3D etc. which imply a fixed number of dims - we can keep them for backward compat anyways) that uses VariableShapeTensor (or FixedShapeTensor if the shape is provided maybe ? happy to discuss this)" ]
1,755,483,831,000
1,756,221,902,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request

I propose adding a dedicated feature to the datasets library that allows efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed. A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example:

```python
{
    "shape": (5, 224, 224),
    "dtype": "uint8",
    "data": [...]
}
```

This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.

### Motivation

I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable. The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.

https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614

A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).

### Your contribution

I am willing to create a PR to help implement this feature if the proposal is accepted.
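Until a native dynamic-shape feature exists, the struct layout proposed above can be emulated today with a plain numpy round-trip. A small sketch; the `pack`/`unpack` helper names are hypothetical, not a datasets API:

```python
# Sketch of the proposed {"shape", "dtype", "data"} record: flatten the array
# to bytes for storage and rebuild it with numpy on read.
import numpy as np

def pack(arr: np.ndarray) -> dict:
    # Store shape and dtype alongside the raw buffer so any shape round-trips.
    return {"shape": list(arr.shape), "dtype": str(arr.dtype), "data": arr.tobytes()}

def unpack(rec: dict) -> np.ndarray:
    return np.frombuffer(rec["data"], dtype=rec["dtype"]).reshape(rec["shape"])

a = np.arange(24, dtype=np.uint16).reshape(2, 3, 4)  # e.g. a (5, H, W) SDSS cube works the same
b = unpack(pack(a))
assert b.shape == (2, 3, 4) and (a == b).all()
```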
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7737/comments
https://api.github.com/repos/huggingface/datasets/issues/7737/events
https://github.com/huggingface/datasets/pull/7737
3,318,670,801
PR_kwDODunzps6jf5io
7,737
docs: Add column overwrite example to batch mapping guide
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, just a gentle follow-up on this PR." ]
1,755,094,819,000
1,756,984,297,000
null
CONTRIBUTOR
null
null
null
null
This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations. ### Proposed Change The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping. This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps. **New Example:** > ```python > >>> from datasets import Dataset > >>> dataset = Dataset.from_dict({"a": [0, 1, 2]}) > # Overwrite "a" directly to duplicate each value > >>> duplicated_dataset = dataset.map( > ... lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]}, > ... batched=True > ... ) > >>> duplicated_dataset > Dataset({ > features: ['a'], > num_rows: 6 > }) > >>> duplicated_dataset["a"] > [0, 0, 1, 1, 2, 2] > ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7737/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7737.diff", "html_url": "https://github.com/huggingface/datasets/pull/7737", "merged_at": "2025-09-04T11:11:37", "patch_url": "https://github.com/huggingface/datasets/pull/7737.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7737" }
true
https://api.github.com/repos/huggingface/datasets/issues/7736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7736/comments
https://api.github.com/repos/huggingface/datasets/issues/7736/events
https://github.com/huggingface/datasets/pull/7736
3,311,618,096
PR_kwDODunzps6jIWQ3
7,736
Fix type hint `train_test_split`
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7736). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,754,945,213,000
1,755,090,830,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7736/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7736.diff", "html_url": "https://github.com/huggingface/datasets/pull/7736", "merged_at": "2025-08-13T13:13:48", "patch_url": "https://github.com/huggingface/datasets/pull/7736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7736" }
true
https://api.github.com/repos/huggingface/datasets/issues/7735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7735/comments
https://api.github.com/repos/huggingface/datasets/issues/7735/events
https://github.com/huggingface/datasets/pull/7735
3,310,514,828
PR_kwDODunzps6jEq5w
7,735
fix largelist repr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,754,925,462,000
1,754,926,796,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7735/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7735.diff", "html_url": "https://github.com/huggingface/datasets/pull/7735", "merged_at": "2025-08-11T15:39:54", "patch_url": "https://github.com/huggingface/datasets/pull/7735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7735" }
true
https://api.github.com/repos/huggingface/datasets/issues/7734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7734/comments
https://api.github.com/repos/huggingface/datasets/issues/7734/events
https://github.com/huggingface/datasets/pull/7734
3,306,519,239
PR_kwDODunzps6i4pmA
7,734
Fixing __getitem__ of datasets which behaves inconsistently with the documentation when setting _format_type to None
{ "avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4", "events_url": "https://api.github.com/users/awagen/events{/privacy}", "followers_url": "https://api.github.com/users/awagen/followers", "following_url": "https://api.github.com/users/awagen/following{/other_user}", "gists_url": "https://api.github.com/users/awagen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/awagen", "id": 40367113, "login": "awagen", "node_id": "MDQ6VXNlcjQwMzY3MTEz", "organizations_url": "https://api.github.com/users/awagen/orgs", "received_events_url": "https://api.github.com/users/awagen/received_events", "repos_url": "https://api.github.com/users/awagen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awagen/subscriptions", "type": "User", "url": "https://api.github.com/users/awagen", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "this breaking change is actually expected, happy to help with a fix in sentencetransformers to account for this", "Thank you for the context. I thought this was a mismatch to the documentation. Good to know it was intentional. No worries, I can add a PR to sentence transformers." ]
1,754,754,774,000
1,755,415,380,000
null
NONE
null
null
null
null
Setting _format_type to None should return plain Python objects, but as of 4.0.0 it returns Column. This breaks libraries such as sentencetransformers (e.g. during hard-negative generation) that expect plain Python values.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7734/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7734.diff", "html_url": "https://github.com/huggingface/datasets/pull/7734", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7734" }
true
https://api.github.com/repos/huggingface/datasets/issues/7733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
https://api.github.com/repos/huggingface/datasets/issues/7733/events
https://github.com/huggingface/datasets/issues/7733
3,304,979,299
I_kwDODunzps7E_ftj
7,733
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
{ "avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4", "events_url": "https://api.github.com/users/dennys246/events{/privacy}", "followers_url": "https://api.github.com/users/dennys246/followers", "following_url": "https://api.github.com/users/dennys246/following{/other_user}", "gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dennys246", "id": 27898715, "login": "dennys246", "node_id": "MDQ6VXNlcjI3ODk4NzE1", "organizations_url": "https://api.github.com/users/dennys246/orgs", "received_events_url": "https://api.github.com/users/dennys246/received_events", "repos_url": "https://api.github.com/users/dennys246/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennys246/subscriptions", "type": "User", "url": "https://api.github.com/users/dennys246", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "These are the download issues I run into; about every other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />" ]
1,754,680,258,000
1,754,960,098,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

I'm not sure if this is a bug or a feature and I just don't fully understand how dataset loading is meant to work, but there appears to be a bug in how locally stored Image() columns are accessed. I've uploaded a new dataset to the Hugging Face Hub (rmdig/rocky_mountain_snowpack), but I've run into a lot of trouble getting the images handled properly (at least in the way I'd expect them to be handled). I find that I cannot use relative paths for loading images, either remotely from the Hugging Face repo or from a local copy. Any time I do, the library simply prepends my current working directory to the relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I can't imagine is intended. So I have to use URLs, since an absolute path on my system obviously wouldn't work for others. The URLs work OK, but despite having the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often run into HTTPS errors for over-requesting the data). Or maybe relative image paths aren't intended to be loaded directly through the datasets library as images, and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.

### Steps to reproduce the bug

1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or however you prefer.
2. Alter the README.md YAML so that file_path (the relative path to each image) has type Image instead of type string:
```
---
dataset_info:
  features:
  - name: image
    dtype: Image
  - name: file_path
    dtype: Image
```
3. Initialize the dataset locally, making sure your working directory is not the dataset directory root:
```python
dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')
```
4. Access one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it simply looks in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
    image = PIL.Image.open(path)
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```

### Expected behavior

I expect datasets and Image() to load the locally hosted data using path/to/local/rocky_mountain_snowpack/ (which I pass to datasets.load_dataset(), or which is handled on the backend) + the relative path. Instead it appears to load from my current working directory + the relative path.

### Environment info

Tested on Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
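The resolution mismatch in the report above can be shown without datasets at all. A stdlib sketch; the relative path and working directory come from the traceback, while the dataset root location is an assumption:

```python
# Sketch of the two path resolutions at play: what the decoder is handed
# (relative path joined onto the current working directory) versus what the
# report expects (relative path joined onto the dataset root). posixpath
# keeps the demo deterministic across platforms.
import posixpath

dataset_root = "/data/rocky_mountain_snowpack"       # assumed local checkout
rel_path = "preprocessed/cores/image_1.png"
cwd = "/Users/dennyschaedig/Datasets"                # cwd from the traceback

resolved_from_cwd = posixpath.join(cwd, rel_path)            # what happens
resolved_from_root = posixpath.join(dataset_root, rel_path)  # what is expected

assert resolved_from_cwd == "/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png"
assert resolved_from_root == "/data/rocky_mountain_snowpack/preprocessed/cores/image_1.png"
```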
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
https://api.github.com/repos/huggingface/datasets/issues/7732/events
https://github.com/huggingface/datasets/issues/7732
3,304,673,383
I_kwDODunzps7E-VBn
7,732
webdataset: key errors when `field_name` has upper case characters
{ "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4", "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}", "followers_url": "https://api.github.com/users/YassineYousfi/followers", "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}", "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YassineYousfi", "id": 29985433, "login": "YassineYousfi", "node_id": "MDQ6VXNlcjI5OTg1NDMz", "organizations_url": "https://api.github.com/users/YassineYousfi/orgs", "received_events_url": "https://api.github.com/users/YassineYousfi/received_events", "repos_url": "https://api.github.com/users/YassineYousfi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions", "type": "User", "url": "https://api.github.com/users/YassineYousfi", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,672,202,000
1,754,672,202,000
null
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

When using a webdataset, each sample can be a collection of different "fields", like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field name contains upper-case characters, the HF webdataset integration throws a KeyError when trying to load the dataset, e.g. from a dataset (since updated so that it no longer throws this error):
```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[1], line 2
      1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)

File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
   1409     return builder_instance.as_streaming_dataset(split=split)
   1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
   1413     download_config=download_config,
   1414     download_mode=download_mode,
   1415     verification_mode=verification_mode,
   1416     num_proc=num_proc,
   1417     storage_options=storage_options,
   1418 )
   1420 # Build dataset for splits
   1421 keep_in_memory = (
   1422     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1423 )

File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
    892 if num_proc is not None:
    893     prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
    895     dl_manager=dl_manager,
    896     verification_mode=verification_mode,
    897     **prepare_split_kwargs,
    898     **download_and_prepare_kwargs,
    899 )
    900 # Sync info
    901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
   1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609     super()._download_and_prepare(
   1610         dl_manager,
   1611         verification_mode,
   1612         check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
   1613         or verification_mode == VerificationMode.ALL_CHECKS,
   1614         **prepare_splits_kwargs,
   1615     )

File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
    946 split_dict = SplitDict(dataset_name=self.dataset_name)
    947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    950 # Checksums verification
    951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:

File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
     78 if not self.info.features:
     79     # Get one example to get the feature types
     80     pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
     82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
     83     raise ValueError(
     84         "The TAR archives of the dataset should be in WebDataset format, "
     85         "but the files in the archive don't share the same prefix or the same types."
     86     )

File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
     53 data_extension = field_name.split(".")[-1]
     54 if data_extension in cls.DECODERS:
---> 55     current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
     56 if current_example:
     57     yield current_example

KeyError: 'processed_log_IMU_magnetometer_value.npy'
```

### Steps to reproduce the bug

A unit test was added in https://github.com/huggingface/datasets/pull/7726; it fails without the fix proposed in the same PR.

### Expected behavior

No KeyError is thrown.

### Environment info

```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
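The key derivation behind the error can be sketched in a few lines of plain Python. A case-normalizing step is the assumed culprit here; the real logic lives in datasets/packaged_modules/webdataset/webdataset.py:

```python
# WebDataset groups files by "<prefix>.<field_name>"; the example dict stores
# fields under their original casing, so any case-normalized lookup misses.
filename = "images17/image194.processed_log_IMU_magnetometer_value.npy"
base = filename.split("/")[-1]
prefix, _, field_name = base.partition(".")

current_example = {field_name: b"raw bytes"}      # stored with original casing
assert prefix == "image194"
assert field_name == "processed_log_IMU_magnetometer_value.npy"
assert field_name.lower() not in current_example  # lower-cased key -> KeyError
```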
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
https://api.github.com/repos/huggingface/datasets/issues/7731/events
https://github.com/huggingface/datasets/issues/7731
3,303,637,075
I_kwDODunzps7E6YBT
7,731
Add the possibility of a backend for audio decoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4", "events_url": "https://api.github.com/users/intexcor/events{/privacy}", "followers_url": "https://api.github.com/users/intexcor/followers", "following_url": "https://api.github.com/users/intexcor/following{/other_user}", "gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/intexcor", "id": 142020129, "login": "intexcor", "node_id": "U_kgDOCHcOIQ", "organizations_url": "https://api.github.com/users/intexcor/orgs", "received_events_url": "https://api.github.com/users/intexcor/received_events", "repos_url": "https://api.github.com/users/intexcor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/intexcor/subscriptions", "type": "User", "url": "https://api.github.com/users/intexcor", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "is there a work around im stuck", "never mind just downgraded" ]
1,754,651,336,000
1,755,707,373,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request Add the possibility of choosing a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install, for example on Colab. Therefore, I suggest adding a decoder selection option when loading the dataset. ### Motivation I use a service for training models in which ffmpeg cannot be installed. ### Your contribution I use a service for training models in which ffmpeg cannot be installed.
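As a stopgap until such a backend option exists, WAV audio can be decoded with nothing but the standard library. This is a hedged sketch (the function name and the 16-bit-PCM-mono assumption are mine, not a `datasets` API):

```python
import io
import struct
import wave

def decode_wav_bytes(data: bytes):
    # Stdlib-only WAV decoder sketch; assumes 16-bit PCM mono.
    # Real decoding code would return a numpy array instead of a list.
    with wave.open(io.BytesIO(data)) as f:
        rate = f.getframerate()
        frames = f.readframes(f.getnframes())
    samples = list(struct.unpack(f"<{len(frames) // 2}h", frames))
    return {"array": samples, "sampling_rate": rate}

# Build a tiny one-second silent WAV in memory to exercise the decoder
buf = io.BytesIO()
with wave.open(buf, "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(8000)
    f.writeframes(struct.pack("<8000h", *([0] * 8000)))

decoded = decode_wav_bytes(buf.getvalue())
```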
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7730/comments
https://api.github.com/repos/huggingface/datasets/issues/7730/events
https://github.com/huggingface/datasets/pull/7730
3,301,907,242
PR_kwDODunzps6iqTZI
7,730
Grammar fix: correct "showed" to "shown" in fingerprint.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4", "events_url": "https://api.github.com/users/brchristian/events{/privacy}", "followers_url": "https://api.github.com/users/brchristian/followers", "following_url": "https://api.github.com/users/brchristian/following{/other_user}", "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brchristian", "id": 2460418, "login": "brchristian", "node_id": "MDQ6VXNlcjI0NjA0MTg=", "organizations_url": "https://api.github.com/users/brchristian/orgs", "received_events_url": "https://api.github.com/users/brchristian/received_events", "repos_url": "https://api.github.com/users/brchristian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions", "type": "User", "url": "https://api.github.com/users/brchristian", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
1,754,601,776,000
1,755,110,070,000
null
CONTRIBUTOR
null
null
null
null
This PR corrects a small grammatical issue in the outputs of fingerprint.py: ```diff - "This warning is only showed once. Subsequent hashing failures won't be showed." + "This warning is only shown once. Subsequent hashing failures won't be shown." ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7730/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7730.diff", "html_url": "https://github.com/huggingface/datasets/pull/7730", "merged_at": "2025-08-13T13:12:56", "patch_url": "https://github.com/huggingface/datasets/pull/7730.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7730" }
true
https://api.github.com/repos/huggingface/datasets/issues/7729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
https://api.github.com/repos/huggingface/datasets/issues/7729/events
https://github.com/huggingface/datasets/issues/7729
3,300,672,954
I_kwDODunzps7EvEW6
7,729
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4", "events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}", "followers_url": "https://api.github.com/users/SaleemMalikAI/followers", "following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}", "gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaleemMalikAI", "id": 115183904, "login": "SaleemMalikAI", "node_id": "U_kgDOBt2RIA", "organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs", "received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events", "repos_url": "https://api.github.com/users/SaleemMalikAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions", "type": "User", "url": "https://api.github.com/users/SaleemMalikAI", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,575,643,000
1,754,575,643,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
> Hi, is there any solution for that error? Installing `pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html` works fine, but how do I install a PyTorch build that is suited for GPU?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
https://api.github.com/repos/huggingface/datasets/issues/7728/events
https://github.com/huggingface/datasets/issues/7728
3,298,854,904
I_kwDODunzps7EoIf4
7,728
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
{ "avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4", "events_url": "https://api.github.com/users/efsotr/events{/privacy}", "followers_url": "https://api.github.com/users/efsotr/followers", "following_url": "https://api.github.com/users/efsotr/following{/other_user}", "gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/efsotr", "id": 104755879, "login": "efsotr", "node_id": "U_kgDOBj5ypw", "organizations_url": "https://api.github.com/users/efsotr/orgs", "received_events_url": "https://api.github.com/users/efsotr/received_events", "repos_url": "https://api.github.com/users/efsotr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/efsotr/subscriptions", "type": "User", "url": "https://api.github.com/users/efsotr", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,539,490,000
1,754,551,907,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug When loading dataset, the info specified by `data_files` did not overwrite the original info. ### Steps to reproduce the bug ```python from datasets import load_dataset traindata = load_dataset( "allenai/c4", "en", data_files={"train": "en/c4-train.00000-of-01024.json.gz", "validation": "en/c4-validation.00000-of-00008.json.gz"}, ) ``` ```log NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}] ``` ```python from datasets import load_dataset traindata = load_dataset( "allenai/c4", "en", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train" ) ``` ```log ExpectedMoreSplitsError: {'validation'} ``` ### Expected behavior No error ### Environment info datasets 4.0.0
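The error comes from a size-verification step: the split sizes recorded while loading the selected shards are compared against the repo metadata, which still describes the full dataset. A simplified stand-in for that check, with numbers copied from the log above:

```python
# Simplified stand-in for the split-size verification that raises
# NonMatchingSplitsSizesError; "expected" comes from the repo metadata,
# "recorded" from the shards actually selected via data_files.
expected = {"train": 364868892, "validation": 364608}
recorded = {"train": 356317, "validation": 45576}

mismatched = sorted(name for name in expected if expected[name] != recorded.get(name))
```

As a workaround, passing `verification_mode="no_checks"` to `load_dataset` skips this check, but arguably the splits selected via `data_files` should simply take precedence over the recorded metadata.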
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
https://api.github.com/repos/huggingface/datasets/issues/7727/events
https://github.com/huggingface/datasets/issues/7727
3,295,718,578
I_kwDODunzps7EcKyy
7,727
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
{ "avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4", "events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}", "followers_url": "https://api.github.com/users/doctorpangloss/followers", "following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}", "gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/doctorpangloss", "id": 2229300, "login": "doctorpangloss", "node_id": "MDQ6VXNlcjIyMjkzMDA=", "organizations_url": "https://api.github.com/users/doctorpangloss/orgs", "received_events_url": "https://api.github.com/users/doctorpangloss/received_events", "repos_url": "https://api.github.com/users/doctorpangloss/repos", "site_admin": false, "starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions", "type": "User", "url": "https://api.github.com/users/doctorpangloss", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,468,497,000
1,754,468,497,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug ``` - config_name: some_config data_files: - split: train path: - images/xyz/*.jpg ``` will correctly download but ``` - config_name: some_config data_files: - split: train path: - ./images/xyz/*.jpg ``` will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine. ### Steps to reproduce the bug 1. Create a README.md with front matter of the form ``` - config_name: some_config data_files: - split: train path: - ./images/xyz/*.jpg ``` 2. `touch ./images/xyz/1.jpg` 3. Observe that this directory loads correctly with `load_dataset("filesystem_path", "some_config")`. 4. Observe exceptions when you load it with `load_dataset("repoid/filesystem_path", "some_config")` ### Expected behavior The `./` prefix should be interpreted correctly ### Environment info Reproduces with datasets 4.0.0 and datasets 3.4.0
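A plausible reconstruction of the joining difference (the base URL is illustrative; the actual joining happens inside the `hf://` filesystem resolution):

```python
import posixpath

base = "hf://datasets/repoid/filesystem_path"
pattern = "./images/xyz/*.jpg"

# Naive concatenation keeps the "./" segment in the URL, which the remote
# filesystem then fails to match (hypothetical reconstruction of the bug):
naive = base + "/" + pattern

# Normalizing the relative prefix first yields a resolvable pattern:
fixed = posixpath.join(base, posixpath.normpath(pattern))
```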
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7726/comments
https://api.github.com/repos/huggingface/datasets/issues/7726/events
https://github.com/huggingface/datasets/pull/7726
3,293,789,832
PR_kwDODunzps6iO_oF
7,726
fix(webdataset): don't .lower() field_name
{ "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4", "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}", "followers_url": "https://api.github.com/users/YassineYousfi/followers", "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}", "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YassineYousfi", "id": 29985433, "login": "YassineYousfi", "node_id": "MDQ6VXNlcjI5OTg1NDMz", "organizations_url": "https://api.github.com/users/YassineYousfi/orgs", "received_events_url": "https://api.github.com/users/YassineYousfi/received_events", "repos_url": "https://api.github.com/users/YassineYousfi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions", "type": "User", "url": "https://api.github.com/users/YassineYousfi", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "fixes: https://github.com/huggingface/datasets/issues/7732", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7726). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "CI failures are unrelated, merging :)" ]
1,754,413,029,000
1,755,707,755,000
null
CONTRIBUTOR
null
null
null
null
This fixes cases where keys contain upper-case identifiers.
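A minimal sketch of the behavior the fix preserves (the helper name is mine; the real logic lives in `webdataset.py`): the field name after the example prefix must keep its original case so that the later `current_example[field_name]` lookup succeeds.

```python
# Sketch: split a TAR member name into example prefix and field name.
# The field name is NOT lower-cased, so upper-case identifiers such as
# "processed_log_IMU_magnetometer_value.npy" round-trip intact.
def split_member_name(name: str):
    prefix, _, field_name = name.partition(".")
    data_extension = field_name.split(".")[-1]  # used for decoder dispatch
    return prefix, field_name, data_extension

prefix, field, ext = split_member_name("sample0.processed_log_IMU_magnetometer_value.npy")
```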
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7726/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7726.diff", "html_url": "https://github.com/huggingface/datasets/pull/7726", "merged_at": "2025-08-20T16:35:55", "patch_url": "https://github.com/huggingface/datasets/pull/7726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7726" }
true
https://api.github.com/repos/huggingface/datasets/issues/7724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
https://api.github.com/repos/huggingface/datasets/issues/7724/events
https://github.com/huggingface/datasets/issues/7724
3,292,315,241
I_kwDODunzps7EPL5p
7,724
Can not stepinto load_dataset.py?
{ "avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4", "events_url": "https://api.github.com/users/micklexqg/events{/privacy}", "followers_url": "https://api.github.com/users/micklexqg/followers", "following_url": "https://api.github.com/users/micklexqg/following{/other_user}", "gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/micklexqg", "id": 13776012, "login": "micklexqg", "node_id": "MDQ6VXNlcjEzNzc2MDEy", "organizations_url": "https://api.github.com/users/micklexqg/orgs", "received_events_url": "https://api.github.com/users/micklexqg/received_events", "repos_url": "https://api.github.com/users/micklexqg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions", "type": "User", "url": "https://api.github.com/users/micklexqg", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,386,131,000
1,754,386,131,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but execution never stops at any breakpoint. Can "load_dataset.py" not be stepped into?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
https://api.github.com/repos/huggingface/datasets/issues/7723/events
https://github.com/huggingface/datasets/issues/7723
3,289,943,261
I_kwDODunzps7EGIzd
7,723
Don't remove `trust_remote_code` arg!!!
{ "avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4", "events_url": "https://api.github.com/users/autosquid/events{/privacy}", "followers_url": "https://api.github.com/users/autosquid/followers", "following_url": "https://api.github.com/users/autosquid/following{/other_user}", "gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/autosquid", "id": 758925, "login": "autosquid", "node_id": "MDQ6VXNlcjc1ODkyNQ==", "organizations_url": "https://api.github.com/users/autosquid/orgs", "received_events_url": "https://api.github.com/users/autosquid/received_events", "repos_url": "https://api.github.com/users/autosquid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/autosquid/subscriptions", "type": "User", "url": "https://api.github.com/users/autosquid", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
1,754,322,127,000
1,754,322,127,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios. Please add the `trust_remote_code` arg back! ### Motivation Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios. ### Your contribution Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
https://api.github.com/repos/huggingface/datasets/issues/7722/events
https://github.com/huggingface/datasets/issues/7722
3,289,741,064
I_kwDODunzps7EFXcI
7,722
Out of memory even though using load_dataset(..., streaming=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
1,754,318,515,000
1,754,318,515,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, memory usage increases over time and I eventually run into an OOM. ### Steps to reproduce the bug ``` ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True) for i,sample in enumerate(tqdm(ds)): target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav') try: sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate']) except Exception as e: print(f"Could not write audio {i} in ds: {e}") ``` ### Expected behavior I'd expect a small memory footprint, with memory freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the file-writing logic and just printing the sample, but the issue remains the same. ### Environment info Python 3.12.11 Ubuntu 24 datasets 4.0.0 and 3.6.0
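One way to check the expectation independently of `datasets` is to measure peak memory over a generator-based stand-in for the stream (this is a test-harness sketch, not the library's code):

```python
import tracemalloc

def fake_stream(n):
    # Stand-in for an iterable streaming dataset: yields one sample at a time,
    # so only a single sample should ever be alive.
    for _ in range(n):
        yield {"audio": {"array": [0.0] * 1000, "sampling_rate": 16000}}

tracemalloc.start()
for sample in fake_stream(1000):
    pass  # each sample is dropped before the next one is produced
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

If the same loop over the real streaming dataset shows a monotonically growing footprint, the leak is somewhere in the streaming pipeline rather than in the loop itself.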
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
https://api.github.com/repos/huggingface/datasets/issues/7721/events
https://github.com/huggingface/datasets/issues/7721
3,289,426,104
I_kwDODunzps7EEKi4
7,721
Bad split error message when using percentages
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I'd like to work on this: add clearer validation/messages for percent-based splits + tests", "The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n" ]
1,754,313,625,000
1,755,182,544,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I slice it in 10% steps as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits). When doing so, the library returns this error: raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}") ValueError: Bad split: train[0%:10%]. Available splits: ['train'] Edit: The same happens with a split like _train[:90000]_ ### Steps to reproduce the bug ``` for split in range(10): split_str = f"train[{split*10}%:{(split+1)*10}%]" print(f"Processing split {split_str}...") ds = load_dataset("user/dataset", split=split_str, streaming=True) ``` ### Expected behavior I'd expect the library to slice my dataset in 10% steps. ### Environment info python 3.12.11 ubuntu 24 datasets 4.0.0
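For context, this is roughly how a percent slice maps to row indices when slicing is supported; the helper below is illustrative, not the library's code, and assumes the documented default of rounding boundaries to the closest integer row:

```python
# Illustrative mapping of a percent slice like train[10%:20%] to row indices.
def percent_slice(num_examples: int, start_pct: int, stop_pct: int):
    start = round(num_examples * start_pct / 100)
    stop = round(num_examples * stop_pct / 100)
    return start, stop

bounds = percent_slice(356317, 10, 20)
```

Note that slice syntax is not supported together with `streaming=True`; with an `IterableDataset` the closest equivalent is chaining `ds.skip(start).take(stop - start)`.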
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
https://api.github.com/repos/huggingface/datasets/issues/7720/events
https://github.com/huggingface/datasets/issues/7720
3,287,150,513
I_kwDODunzps7D7e-x
7,720
Datasets 4.0 map function causing column not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4", "events_url": "https://api.github.com/users/Darejkal/events{/privacy}", "followers_url": "https://api.github.com/users/Darejkal/followers", "following_url": "https://api.github.com/users/Darejkal/following{/other_user}", "gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darejkal", "id": 55143337, "login": "Darejkal", "node_id": "MDQ6VXNlcjU1MTQzMzM3", "organizations_url": "https://api.github.com/users/Darejkal/orgs", "received_events_url": "https://api.github.com/users/Darejkal/received_events", "repos_url": "https://api.github.com/users/Darejkal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions", "type": "User", "url": "https://api.github.com/users/Darejkal", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.", "Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.", "I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too." ]
1,754,225,554,000
1,754,594,614,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug A column added by `map` is not found in the returned dataset instance. ### Steps to reproduce the bug Code for reproduction: after running `get_total_audio_length`, it errors out because `data` has no `duration` column. ``` def compute_duration(x): return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]} def get_total_audio_length(dataset): data = dataset.map(compute_duration, num_proc=NUM_PROC) print(data) durations=data["duration"] total_seconds = sum(durations) return total_seconds ``` ### Expected behavior The new `datasets.Dataset` instance should have the new column attached. ### Environment info - `datasets` version: 4.0.0 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.33.2 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2023.12.2
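The expected behavior can be stated with a plain-Python stand-in for `map` (dict rows instead of a `Dataset`, purely illustrative):

```python
# One row with two seconds of silent audio at 16 kHz
rows = [{"audio": {"array": [0.0] * 32000, "sampling_rate": 16000}}]

def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

# map-like transform: the returned dict is merged into each row, so the
# result must expose the new "duration" column
mapped = [{**r, **compute_duration(r)} for r in rows]
total_seconds = sum(r["duration"] for r in mapped)
```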
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
https://api.github.com/repos/huggingface/datasets/issues/7719/events
https://github.com/huggingface/datasets/issues/7719
3,285,928,491
I_kwDODunzps7D20or
7,719
Specify dataset columns types in typehint
{ "avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4", "events_url": "https://api.github.com/users/Samoed/events{/privacy}", "followers_url": "https://api.github.com/users/Samoed/followers", "following_url": "https://api.github.com/users/Samoed/following{/other_user}", "gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Samoed", "id": 36135455, "login": "Samoed", "node_id": "MDQ6VXNlcjM2MTM1NDU1", "organizations_url": "https://api.github.com/users/Samoed/orgs", "received_events_url": "https://api.github.com/users/Samoed/received_events", "repos_url": "https://api.github.com/users/Samoed/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Samoed/subscriptions", "type": "User", "url": "https://api.github.com/users/Samoed", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
1,754,140,951,000
1,754,140,951,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Feature request

Make `Dataset` optionally generic, so it can be used with type annotations the way `torch.utils.data.DataLoader` is: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131

### Motivation

In MTEB we use a lot of dataset objects, but their type hints are rather limited. E.g. we can specify this for a dataloader:

```python
from typing import TypedDict
from torch.utils.data import DataLoader

class CorpusInput(TypedDict):
    title: list[str]
    body: list[str]

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

def queries_loader() -> DataLoader[QueryInput]: ...

def corpus_loader() -> DataLoader[CorpusInput]: ...
```

But for `datasets` we can only document the columns in comments:

```python
from datasets import Dataset

QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str`"""
```

### Your contribution

I can create a draft implementation.
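A rough sketch of what the requested typing could look like, mirroring the `DataLoader[T]` pattern. Since `datasets.Dataset` is not generic today, the `TypedRows` class below is purely a hypothetical stand-in to illustrate the idea:

```python
from typing import Generic, TypedDict, TypeVar

RowT = TypeVar("RowT")

class TypedRows(Generic[RowT]):
    """Hypothetical stand-in for a generic dataset, mirroring DataLoader[T]."""

    def __init__(self, rows: list[RowT]) -> None:
        self.rows = rows

    def __getitem__(self, i: int) -> RowT:
        return self.rows[i]

class QueryRow(TypedDict):
    query: str
    instruction: str

def queries_loader() -> TypedRows[QueryRow]:
    # A type checker can now verify column names and value types at call sites.
    return TypedRows([{"query": "what is mteb?", "instruction": "retrieve"}])
```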
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
https://api.github.com/repos/huggingface/datasets/issues/7718/events
https://github.com/huggingface/datasets/pull/7718
3,284,221,177
PR_kwDODunzps6hvJ6R
7,718
add support for pyarrow string view in features
{ "avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4", "events_url": "https://api.github.com/users/onursatici/events{/privacy}", "followers_url": "https://api.github.com/users/onursatici/followers", "following_url": "https://api.github.com/users/onursatici/following{/other_user}", "gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/onursatici", "id": 5051569, "login": "onursatici", "node_id": "MDQ6VXNlcjUwNTE1Njk=", "organizations_url": "https://api.github.com/users/onursatici/orgs", "received_events_url": "https://api.github.com/users/onursatici/received_events", "repos_url": "https://api.github.com/users/onursatici/repos", "site_admin": false, "starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onursatici/subscriptions", "type": "User", "url": "https://api.github.com/users/onursatici", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@lhoestq who do you think would be the best to have a look at this? Any pointers would be appreciated, thanks!", "Hi ! what's the rationale for supporting string view ? I'm afraid it can complexify the typing logic without much value" ]
1,754,060,319,000
1,756,984,157,000
null
NONE
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7718.diff", "html_url": "https://github.com/huggingface/datasets/pull/7718", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7718" }
true
https://api.github.com/repos/huggingface/datasets/issues/7717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7717/comments
https://api.github.com/repos/huggingface/datasets/issues/7717/events
https://github.com/huggingface/datasets/issues/7717
3,282,855,127
I_kwDODunzps7DrGTX
7,717
Cached dataset is not used when explicitly passing the cache_dir parameter
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` triggers a full re-download and re-processing of the dataset, ignoring the existing cache.\n\n**2. Investigation:**\nI traced the `cache_dir` parameter from `load_dataset` down to the `DatasetBuilder` class in `src/datasets/builder.py`. The root cause seems to be a mismatch between the cache path structure created by `snapshot_download` and the path structure expected by the `DatasetBuilder`.\n\nSpecifically, the `_relative_data_dir` method in `DatasetBuilder` constructs a path using `namespace___dataset_name` (with three underscores), while the cache from `snapshot_download` appears to use a `repo_id` based format like `datasets--namespace--dataset_name` (with double hyphens).\n\n**3. Attempted Fix & Result:**\nI attempted a fix by modifying the `_relative_data_dir` method to replace the path separator \"/\" in `self.repo_id` with \"--\", to align it with the `snapshot_download` structure.\n\nThis partially worked: `load_dataset` no longer re-downloads the files. However, it still re-processes them every time (triggering \"Generating train split...\", etc.) instead of loading the already processed Arrow files from the cache.\n\nThis suggests the issue is deeper than just the directory name and might be related to how the builder verifies the integrity or presence of the processed cache files.\n\nI hope these findings are helpful for whoever picks up this issue." ]
1,754,032,361,000
1,754,421,576,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

Hi, we are pre-downloading a dataset using `snapshot_download()`. When loading this exact dataset with `load_dataset()`, the cached snapshot is not used. In both calls, I provide the `cache_dir` parameter.

### Steps to reproduce the bug

```
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download

def download_ds(name: str):
    snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")

def prepare_ds():
    audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(audio_ds.features)

if __name__ == '__main__':
    download_ds("openslr/librispeech_asr")
    prepare_ds()
```

### Expected behavior

I'd expect the cached version of the dataset to be used. Instead, the same dataset is downloaded again to the default cache directory.

### Environment info

Windows 11
datasets==4.0.0
Python 3.12.11
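One possible workaround, stated here as an assumption rather than a confirmed fix: `snapshot_download` populates a hub-style cache while `load_dataset` keeps its own datasets cache, so routing both under one root via the `HF_HOME` environment variable, instead of passing `cache_dir` to each call separately, may let the two calls share files:

```python
import os

# Hypothetical workaround: point both the hub cache and the datasets cache
# at the same root. HF_HOME must be set before the libraries are imported.
os.environ["HF_HOME"] = "G:/Datasets/cache"

def download_and_load(name: str):
    # Import after HF_HOME is set so both libraries pick it up.
    from huggingface_hub import snapshot_download
    from datasets import load_dataset

    # Both calls now resolve their caches under HF_HOME.
    snapshot_download(repo_id=name, repo_type="dataset")
    return load_dataset(name, num_proc=4)
```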
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7717/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7716/comments
https://api.github.com/repos/huggingface/datasets/issues/7716/events
https://github.com/huggingface/datasets/pull/7716
3,281,204,362
PR_kwDODunzps6hk4Mq
7,716
typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7716). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,982,085,000
1,753,982,235,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7716/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7716.diff", "html_url": "https://github.com/huggingface/datasets/pull/7716", "merged_at": "2025-07-31T17:14:51", "patch_url": "https://github.com/huggingface/datasets/pull/7716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7716" }
true
https://api.github.com/repos/huggingface/datasets/issues/7715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7715/comments
https://api.github.com/repos/huggingface/datasets/issues/7715/events
https://github.com/huggingface/datasets/pull/7715
3,281,189,955
PR_kwDODunzps6hk1CK
7,715
Docs: Use Image(mode="F") for PNG/JPEG depth maps
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,981,789,000
1,753,981,943,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7715/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7715.diff", "html_url": "https://github.com/huggingface/datasets/pull/7715", "merged_at": "2025-07-31T17:10:10", "patch_url": "https://github.com/huggingface/datasets/pull/7715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7715" }
true
https://api.github.com/repos/huggingface/datasets/issues/7714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7714/comments
https://api.github.com/repos/huggingface/datasets/issues/7714/events
https://github.com/huggingface/datasets/pull/7714
3,281,090,499
PR_kwDODunzps6hkfHj
7,714
fix num_proc=1 ci test
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,979,792,000
1,753,979,943,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7714/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7714.diff", "html_url": "https://github.com/huggingface/datasets/pull/7714", "merged_at": "2025-07-31T16:38:03", "patch_url": "https://github.com/huggingface/datasets/pull/7714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7714" }
true
https://api.github.com/repos/huggingface/datasets/issues/7713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7713/comments
https://api.github.com/repos/huggingface/datasets/issues/7713/events
https://github.com/huggingface/datasets/pull/7713
3,280,813,699
PR_kwDODunzps6hjik2
7,713
Update cli.mdx to refer to the new "hf" CLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/1936278?v=4", "events_url": "https://api.github.com/users/evalstate/events{/privacy}", "followers_url": "https://api.github.com/users/evalstate/followers", "following_url": "https://api.github.com/users/evalstate/following{/other_user}", "gists_url": "https://api.github.com/users/evalstate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/evalstate", "id": 1936278, "login": "evalstate", "node_id": "MDQ6VXNlcjE5MzYyNzg=", "organizations_url": "https://api.github.com/users/evalstate/orgs", "received_events_url": "https://api.github.com/users/evalstate/received_events", "repos_url": "https://api.github.com/users/evalstate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/evalstate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/evalstate/subscriptions", "type": "User", "url": "https://api.github.com/users/evalstate", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,974,371,000
1,753,979,876,000
null
CONTRIBUTOR
null
null
null
null
Update to refer to `hf auth login`
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7713/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7713.diff", "html_url": "https://github.com/huggingface/datasets/pull/7713", "merged_at": "2025-07-31T16:37:55", "patch_url": "https://github.com/huggingface/datasets/pull/7713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7713" }
true
https://api.github.com/repos/huggingface/datasets/issues/7712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7712/comments
https://api.github.com/repos/huggingface/datasets/issues/7712/events
https://github.com/huggingface/datasets/pull/7712
3,280,706,762
PR_kwDODunzps6hjLF5
7,712
Retry intermediate commits too
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7712). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,972,413,000
1,753,972,663,000
null
MEMBER
null
null
null
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7712/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7712/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7712.diff", "html_url": "https://github.com/huggingface/datasets/pull/7712", "merged_at": "2025-07-31T14:36:43", "patch_url": "https://github.com/huggingface/datasets/pull/7712.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7712" }
true
https://api.github.com/repos/huggingface/datasets/issues/7711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7711/comments
https://api.github.com/repos/huggingface/datasets/issues/7711/events
https://github.com/huggingface/datasets/pull/7711
3,280,471,353
PR_kwDODunzps6hiXm0
7,711
Update dataset_dict push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7711). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,968,303,000
1,753,971,535,000
null
MEMBER
null
null
null
null
following https://github.com/huggingface/datasets/pull/7708
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7711/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7711.diff", "html_url": "https://github.com/huggingface/datasets/pull/7711", "merged_at": "2025-07-31T14:18:53", "patch_url": "https://github.com/huggingface/datasets/pull/7711.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7711" }
true
https://api.github.com/repos/huggingface/datasets/issues/7710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7710/comments
https://api.github.com/repos/huggingface/datasets/issues/7710/events
https://github.com/huggingface/datasets/pull/7710
3,279,878,230
PR_kwDODunzps6hgXxW
7,710
Concurrent IterableDataset push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,753,956,691,000
1,753,956,840,000
null
MEMBER
null
null
null
null
Same as https://github.com/huggingface/datasets/pull/7708 but for `IterableDataset`
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7710/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7710/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7710.diff", "html_url": "https://github.com/huggingface/datasets/pull/7710", "merged_at": "2025-07-31T10:12:52", "patch_url": "https://github.com/huggingface/datasets/pull/7710.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7710" }
true
https://api.github.com/repos/huggingface/datasets/issues/7709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7709/comments
https://api.github.com/repos/huggingface/datasets/issues/7709/events
https://github.com/huggingface/datasets/issues/7709
3,276,677,990
I_kwDODunzps7DTiNm
7,709
Release 4.0.0 breaks usage patterns of with_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4", "events_url": "https://api.github.com/users/wittenator/events{/privacy}", "followers_url": "https://api.github.com/users/wittenator/followers", "following_url": "https://api.github.com/users/wittenator/following{/other_user}", "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wittenator", "id": 9154515, "login": "wittenator", "node_id": "MDQ6VXNlcjkxNTQ1MTU=", "organizations_url": "https://api.github.com/users/wittenator/orgs", "received_events_url": "https://api.github.com/users/wittenator/received_events", "repos_url": "https://api.github.com/users/wittenator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions", "type": "User", "url": "https://api.github.com/users/wittenator", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```", "Ah perfect, thanks for clearing this up. I would close this ticket then." ]
1,753,875,293,000
1,754,555,238,000
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
### Describe the bug

Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the dataset with the column name. With the new `Column()` class, this possibility seems to be gone. As far as I can see, this makes working on a whole column in memory more complex, e.g. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new `Column` class yet.

### Steps to reproduce the bug

Steps to reproduce:

```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```

### Expected behavior

Working on whole columns should be possible.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7709/timeline
null
null
null
null
false
End of preview.