---
license: mit
modalities:
  - Text
formats:
  - parquet
size: 10M - 100M
libraries:
  - Datasets
  - Dask
  - Croissant
  - Polars
---

🚀 GitHub Code 2025: The Clean Code Manifesto

A meticulously curated dataset of 1.5M+ repositories representing both quality and innovation in 2025's code ecosystem

🌟 The Philosophy

Quality Over Quantity, Purpose Over Volume

In an era of data abundance, we present a dataset built on radical curation. Every file, every repository, every byte has been carefully selected to represent the signal in the noise of open-source development.

🎯 What This Dataset Is

📊 Dual-Perspective Design

| Subset  | 🎖️ Above 2 Stars           | 🌱 Below 2 Stars (2025)       |
|---------|----------------------------|-------------------------------|
| Scope   | 1M top repositories        | 1M random 2025 repos          |
| Purpose | Proven quality & patterns  | Emerging trends & innovation  |
| Value   | What works                 | What's next                   |

🧹 The Clean Code Promise

```text
# What you WON'T find here:
🚫 Binary files          # No images, executables, models
🚫 Build artifacts       # No node_modules, __pycache__
🚫 Configuration noise   # No .git, IDE files, lock files
🚫 License duplication   # No repetitive legal text
🚫 Minified code         # No compressed/obfuscated content
🚫 Empty files           # No whitespace-only content
```

📁 Dataset Structure

```text
github-code-2025/
├── 📈 above-2-stars/
│   ├── train_000.parquet
│   ├── train_001.parquet
│   └── ...
└── 🌱 below-2-star/
    ├── train_000.parquet
    ├── train_001.parquet
    └── ...
```
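
If you only need a single shard, you can pull it directly from the repository and inspect it with one of the supported libraries. Below is a minimal sketch using `huggingface_hub` and Polars; the shard filename follows the layout above and may differ in practice.

```python
import polars as pl
from huggingface_hub import hf_hub_download

# Download one shard of the above-2-stars subset from the dataset repo
# (the filename follows the folder layout shown above).
shard_path = hf_hub_download(
    repo_id="nick007x/github-code-2025",
    filename="above-2-stars/train_000.parquet",
    repo_type="dataset",
)

# Inspect the shard with Polars
df = pl.read_parquet(shard_path)
print(df.schema)
print(df.head())
```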

📊 Schema

```python
{
    "repo_id": "owner/repo_name",    # 📁 Repository identifier
    "file_path": "src/main.py",      # 🗂️ Relative file path
    "content": "def clean_code():",  # 💎 Actual source code
    "size": 1024                     # 📏 File size in bytes
}
```
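
The `file_path` and `size` fields make it easy to slice the data before training. A minimal sketch, assuming the `above-2-stars` config and the schema above (the `.py` extension and 100 KB threshold are arbitrary examples, not dataset defaults):

```python
from datasets import load_dataset

ds = load_dataset("nick007x/github-code-2025", "above-2-stars", split="train")

# Keep only Python files under 100 KB, using the schema fields shown above
python_small = ds.filter(
    lambda record: record["file_path"].endswith(".py") and record["size"] < 100_000
)

print(python_small[0]["repo_id"], python_small[0]["file_path"])
```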

🛠️ How to Use

🔥 Quick Start

```python
from datasets import load_dataset, interleave_datasets

# Load the quality benchmark
quality_ds = load_dataset("nick007x/github-code-2025", "above-2-stars")

# Load emerging trends
emerging_ds = load_dataset("nick007x/github-code-2025", "below-2-star")

# Mix the train splits for balanced training
balanced_ds = interleave_datasets([quality_ds["train"], emerging_ds["train"]])
```
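
For quick experiments, streaming avoids downloading every shard up front. A minimal sketch, assuming the same config names as above:

```python
from datasets import load_dataset

# Stream the subset instead of materializing it on disk
stream = load_dataset(
    "nick007x/github-code-2025",
    "above-2-stars",
    split="train",
    streaming=True,
)

# Peek at a few records
for record in stream.take(3):
    print(record["repo_id"], record["file_path"], record["size"])
```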

🎯 Ideal Use Cases

  • 🧠 AI Training: Clean, diverse code for language models
  • 📊 Code Analysis: Compare popular vs. emerging patterns (see the sketch after this list)
  • 🔍 Trend Research: 2025 development practices
  • 🎓 Education: High-quality examples for learning
  • 🛠️ Tool Development: Benchmarking code quality tools
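
As a starting point for the code-analysis use case, the sketch below compares file-extension distributions between the two subsets over a streamed sample. The sample size and the extension heuristic are illustrative choices, not part of the dataset itself.

```python
from collections import Counter
from datasets import load_dataset

def extension_counts(config: str, sample_size: int = 10_000) -> Counter:
    """Count file extensions in a streamed sample of one subset."""
    ds = load_dataset(
        "nick007x/github-code-2025", config, split="train", streaming=True
    )
    counts = Counter()
    for record in ds.take(sample_size):
        path = record["file_path"]
        counts[path.rsplit(".", 1)[-1] if "." in path else "(no extension)"] += 1
    return counts

popular = extension_counts("above-2-stars")
emerging = extension_counts("below-2-star")
print("Popular:", popular.most_common(10))
print("Emerging:", emerging.most_common(10))
```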

🏗️ Creation Methodology

🎨 Selection Strategy

| Phase | Action | Purpose |
|-------|--------|---------|
| 1 | 🎯 Dual population sampling | Balance quality & innovation |
| 2 | 🧹 Multi-layer filtering | Remove noise & binaries |
| 3 | 📏 Size normalization | Focus on meaningful content |
| 4 | 🔍 Content validation | Ensure text quality |
| 5 | 🏷️ Metadata preservation | Maintain context |

🚫 What We Filtered Out

File Types Removed:

  • 50+ binary extensions (images, models, executables)
  • 30+ build/system directories
  • 15+ configuration file types
  • All files outside the 1 KB to 5 MB size range

Quality Checks:

  • ✅ UTF-8 text validation
  • ✅ Non-empty content check
  • ✅ Binary detection
  • ✅ Repository structure preservation
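
The exact filtering pipeline is not published in this card; the sketch below is only an approximation of the checks listed above (size window, excluded paths and extensions, binary detection, UTF-8 validation, non-empty content). The thresholds and lists are placeholders, not the actual ones used.

```python
# Illustrative approximation of the per-file filters described above.
# The extension list, directory list, and thresholds are placeholders.
BINARY_EXTENSIONS = {".png", ".jpg", ".exe", ".so", ".bin", ".onnx"}
EXCLUDED_DIRS = {"node_modules", "__pycache__", ".git"}
MIN_SIZE, MAX_SIZE = 1_024, 5 * 1_024 * 1_024  # 1 KB to 5 MB

def keep_file(path: str, raw: bytes) -> bool:
    """Return True if a file passes the size, path, and text-quality screens."""
    if not (MIN_SIZE <= len(raw) <= MAX_SIZE):
        return False
    if any(part in EXCLUDED_DIRS for part in path.split("/")):
        return False
    if any(path.endswith(ext) for ext in BINARY_EXTENSIONS):
        return False
    if b"\x00" in raw:  # crude binary detection
        return False
    try:
        text = raw.decode("utf-8")  # UTF-8 validation
    except UnicodeDecodeError:
        return False
    return bool(text.strip())  # non-empty content check
```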

🎪 Why This Dataset Matters

💫 The Quality Revolution

We reject the "more data is better" dogma. Instead, we offer:

  • 🎯 Intentional Curation: Every file serves a purpose
  • ⚖️ Balanced Perspective: Popular + Emerging = Complete picture
  • 🧹 Unprecedented Cleanliness: The cleanest code dataset available
  • 📅 Temporal Intelligence: 2025-focused for relevance

🀝 Contributing & Feedback

This dataset is a living project. We welcome:

  • 🐛 Bug reports and issues
  • 💡 Feature requests for future versions
  • 📊 Validation of data quality
  • 🎯 Suggestions for improvement

📜 License

This dataset is provided under the MIT License - see the LICENSE file for details.

Important: Repository contents retain their original licenses. Please respect individual project licenses when using this data.

🙏 Acknowledgments

Built with gratitude for the entire open-source community. Every file in this dataset represents hours of dedication from developers worldwide.


⭐ If this dataset helps your research or project, please consider starring the repository!

"In the pursuit of AI that understands code, we must first understand what code is worth learning."