# DLSCA Test Dataset

A Hugging Face dataset for Deep Learning Side-Channel Analysis (DLSCA) with streaming support for large trace files using the zarr format.
## Features

- **Streaming support**: Large trace data is converted to zarr format with chunking for efficient streaming access (see the sketch below)
- **Caching**: Uses the Hugging Face cache instead of the fsspec cache for better integration
- **Zip compression**: Zarr chunks are stored in zip files to minimize the file count
- **Memory efficient**: Only the required chunks are loaded, never the entire dataset
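As a rough illustration of the streaming and zip-compression points (not the repository's own code; the file name is an assumption), a chunked zarr v2 array stored inside a zip can be sliced so that only the chunks covering the requested rows are read:

```python
import zarr  # zarr<3

# Open the zipped zarr store read-only; no trace data is loaded yet.
store = zarr.ZipStore("traces.zarr.zip", mode="r")
traces = zarr.open(store, mode="r")

# Slicing reads only the chunks covering rows 0-99 (a single chunk here).
first_chunk = traces[0:100]
print(first_chunk.shape, first_chunk.dtype)

store.close()
```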
## Dataset Structure

- **Labels**: 1,000 examples with 4 labels each (`int32`)
- **Traces**: 1,000 examples with 20,971 features each (`int8`)
- **Index**: A sequential index for each example
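Assuming the raw files under `data/` match this layout, a quick sanity check with numpy looks like:

```python
import numpy as np

labels = np.load("data/labels.npy")  # expected shape (1000, 4), dtype int32
traces = np.load("data/traces.npy")  # expected shape (1000, 20971), dtype int8
print(labels.shape, labels.dtype)
print(traces.shape, traces.dtype)
```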
## Usage

### Local Development

```python
from test import TestDataset

# Load the dataset locally
dataset = TestDataset()
dataset.download_and_prepare()
dataset_dict = dataset.as_dataset(split="train")

# Access examples
example = dataset_dict[0]
print(f"Labels: {example['labels']}")
print(f"Traces length: {len(example['traces'])}")
```
### Streaming Usage (for large datasets)

```python
from test import TestDownloadManager, TestDataset

# Initialize the streaming download manager
dl_manager = TestDownloadManager()
traces_path = "data/traces.npy"
zarr_zip_path = dl_manager.download_zarr_chunks(traces_path, chunk_size=100)

# Access zarr data efficiently
dataset = TestDataset()
zarr_array = dataset._load_zarr_from_zip(zarr_zip_path)

# Access specific chunks
chunk_data = zarr_array[0:100]  # First chunk
```
### Chunk Selection

```python
# `labels` is the small numpy array loaded from data/labels.npy;
# `zarr_array` is the chunked trace array from the streaming example above.
# Select a specific range for training
selected_range = slice(200, 300)
selected_traces = zarr_array[selected_range]
selected_labels = labels[selected_range]
```
## Implementation Details

### Custom DownloadManager

The `TestDownloadManager` extends `datasets.DownloadManager` (see the sketch below) to:

- Convert numpy arrays to zarr format with chunking
- Store zarr data in zip files for compression
- Use the Hugging Face cache directory
- Support streaming access patterns
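A minimal sketch of that conversion step, assuming a `.npy` input and an illustrative output path (the actual `TestDownloadManager` in `test.py` may differ):

```python
import numpy as np
import zarr  # zarr<3
from datasets import DownloadManager


class SketchDownloadManager(DownloadManager):
    """Illustrative only: converts a .npy trace file into a chunked zarr array inside a zip."""

    def download_zarr_chunks(self, npy_path, zarr_zip_path="traces.zarr.zip", chunk_size=100):
        data = np.load(npy_path)
        store = zarr.ZipStore(zarr_zip_path, mode="w")
        # One chunk per `chunk_size` rows; each chunk spans all features of a trace.
        zarr.array(data, chunks=(chunk_size, data.shape[1]), store=store)
        store.close()
        return zarr_zip_path
```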
### Custom Dataset Builder

The `TestDataset` extends `datasets.GeneratorBasedBuilder` (condensed sketch below) to:

- Handle both local numpy files and remote zarr chunks
- Provide efficient chunk-based data access
- Maintain compatibility with the Hugging Face datasets API
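A condensed sketch of such a builder (again illustrative, not the exact contents of `test.py`):

```python
import numpy as np
import zarr  # zarr<3
import datasets


class SketchDataset(datasets.GeneratorBasedBuilder):
    """Illustrative only: yields examples from labels.npy and a zipped zarr trace array."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "index": datasets.Value("int64"),
                    "labels": datasets.Sequence(datasets.Value("int32")),
                    "traces": datasets.Sequence(datasets.Value("int8")),
                }
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"labels_path": "data/labels.npy", "zarr_zip_path": "traces.zarr.zip"},
            )
        ]

    def _generate_examples(self, labels_path, zarr_zip_path):
        labels = np.load(labels_path)
        store = zarr.ZipStore(zarr_zip_path, mode="r")
        traces = zarr.open(store, mode="r")
        for i in range(len(labels)):
            yield i, {"index": i, "labels": labels[i], "traces": traces[i]}
        store.close()
```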
### Zarr Configuration

- **Format**: Zarr v2 (for better fsspec compatibility)
- **Chunks**: (100, 20971), i.e. 100 examples per chunk
- **Compression**: ZIP format for the zarr store
- **Storage**: Hugging Face cache directory
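Opening the store makes this layout easy to confirm (the file name is illustrative):

```python
import zarr  # zarr<3

store = zarr.ZipStore("traces.zarr.zip", mode="r")
traces = zarr.open(store, mode="r")
print(traces.chunks)   # (100, 20971)
print(traces.dtype)    # int8
print(traces.nchunks)  # 10 chunks for 1000 examples
store.close()
```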
## Performance

The zarr-based approach provides:

- **Memory efficiency**: Only the required chunks are loaded
- **Streaming capability**: Works with datasets larger than RAM
- **Compression**: Zip storage reduces file size
- **Cache optimization**: Leverages the Hugging Face caching mechanism
## Requirements

```
datasets
zarr<3
fsspec
numpy
zipfile36
```
## File Structure

```
test/
├── data/
│   ├── labels.npy        # Label data (small, kept as numpy)
│   └── traces.npy        # Trace data (large, converted to zarr)
├── test.py               # Main dataset implementation
├── example_usage.py      # Usage examples
├── requirements.txt      # Dependencies
└── README.md             # This file
```
## Notes

- The original `traces.npy` is ~20 MB, which demonstrates the zarr chunking approach
- For even larger datasets (GB/TB), this approach scales well
- The zarr v2 format is used for better compatibility with fsspec
- Chunk size can be adjusted based on memory constraints and access patterns (see the example below)
## Future Enhancements
- Support for multiple splits (train/test/validation)
- Dynamic chunk size based on available memory
- Compression algorithms for zarr chunks
- Metadata caching for faster dataset initialization