---
language:
  - en
license: mit
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
pretty_name: MultiOOP Benchmark
tags:
  - code
  - dataset
  - object-oriented-programming
  - code-generation
  - benchmark
  - multi-language
  - python
  - php
  - cpp
  - csharp
  - java
  - javascript
---

# MultiOOP: A Multi-Language Object-Oriented Programming Benchmark for Large Language Models

## Dataset Description

### Dataset Summary

MultiOOP is a multi-language object-oriented programming benchmark designed to enable fair and robust evaluation of intelligent code generation by large language models (LLMs). It addresses major imbalances in existing benchmarks by covering six popular programming languages: Python, PHP, C++, C#, Java, and JavaScript. The benchmark features 267 tasks per language, totaling 1,602 unique tasks, and extends an existing single-language OOP benchmark to a multilingual setting. MultiOOP includes an automated framework for augmenting test cases and introduces the pass@o metric to specifically quantify LLMs' understanding of core object-oriented programming concepts. It covers three difficulty levels: Simple-level OOP, Moderate-level OOP, and Difficult-level OOP.

### Supported Tasks and Leaderboards

The dataset supports tasks related to object-oriented code generation and evaluation for Large Language Models (LLMs). It is designed to assess LLMs' ability to understand and generate code that encapsulates core OOP concepts across multiple programming languages. Evaluation is typically performed using metrics like pass@k and the specialized pass@o for object-oriented understanding.
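The exact definition of pass@o is given in the paper. For background, the widely used unbiased pass@k estimator from the code-generation evaluation literature (not specific to this dataset) can be sketched as follows; it estimates the probability that at least one of `k` samples, drawn from `n` generations of which `c` passed, is correct:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    computed as a product to avoid large binomial coefficients."""
    if n - c < k:
        # Fewer than k incorrect samples: every draw of k contains a pass.
        return 1.0
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# e.g. 5 correct out of 10 generations, sampling 1
print(pass_at_k(10, 5, 1))  # 0.5
```

A per-concept variant of this counting, applied to the `test_matching` checks rather than functional tests, is the kind of quantity pass@o is designed to capture; consult the paper for its precise formulation.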

### Languages

The MultiOOP benchmark problems are available in six popular programming languages:

  • Python
  • PHP
  • C++
  • C#
  • Java
  • JavaScript

The natural language descriptions for the tasks, including comments and docstrings, are in English.

## Dataset Structure

```python
from datasets import load_dataset

dataset = load_dataset("oop")
print(dataset)
```

```
DatasetDict({
    test: Dataset({
        features: ['task_id', 'question', 'canonical_solution', 'test_list', 'test_function', 'entry_point', 'test_matching', 'test_match_function'],
        num_rows: 1602  # 267 tasks * 6 languages
    })
})
```
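Since all six languages share one `test` split, per-language subsets can be recovered from the `task_id` field. The sketch below assumes language-prefixed identifiers such as `'Python/OOP/0'`, as listed in the Data Fields section, and uses a few hypothetical rows in place of the real split:

```python
from collections import defaultdict

# Hypothetical rows standing in for load_dataset(...)["test"].
# Assumes task_id is language-prefixed (e.g. 'Python/OOP/0').
rows = [
    {"task_id": "Python/OOP/0"},
    {"task_id": "Java/OOP/0"},
    {"task_id": "Python/OOP/1"},
]

by_language = defaultdict(list)
for row in rows:
    language = row["task_id"].split("/")[0]  # prefix before the first slash
    by_language[language].append(row)

print({lang: len(tasks) for lang, tasks in by_language.items()})
```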

### Data Instances

#### Example for MultiOOP benchmark (Python)

```python
{
    'task_id': 'OOP/0',
    'question': 'First, write a **WDS** class using the Python language. Then, within the WDS class, create a public function called **without_duplicates** to implement finding the length of the longest substring in a given string **s** that does not contain any duplicate characters.',
    'test_function': 'def test_run(content1):\
    return WDS().without_duplicates(content1)',
    'test_list': [
        'assert candidate("abcabcbb")==3',
        'assert candidate("bbbbb")==1',
        'assert candidate("pwwkew")==3'],
    'entry_point': 'test_run',
    'test_matching': 'assert candidate([["class WDS", "def without_duplicates"]]) == True',
    'test_match_function': 'def matching_function(content):\
    def run_match(text):\
        for task in text:\
            if task not in str_content:\
                return False\
        return True\
    len_cont = len(content)\
    if len_cont==1 and run_match(content[0]) == True:\
        return True\
    elif (len_cont==2 and run_match(content[0]) == True) or (len_cont==2 and run_match(content[1]) == True):\
        return True\
    else:\
        return False'
}
```

### Data Fields

  • task_id: Identifier for the data sample (e.g., 'Python/OOP/0', 'Java/OOP/0').
  • question: Natural language description of the programming task.
  • canonical_solution: The ground truth solution to the programming task.
  • test_function: The function used to run the tests against the generated code.
  • test_list: A list of assertions or test cases to verify the functional correctness of the solution.
  • entry_point: The entry point function for test execution.
  • test_matching: Tests designed to verify adherence to core OOP concepts (e.g., correct class and method definitions).
  • test_match_function: The function used to perform conceptual matching tests.
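The fields above can be combined into a small functional-correctness check: execute the generated code, define the test wrapper from `test_function`, bind the `entry_point` as `candidate`, and run each assertion in `test_list`. The harness below is a minimal sketch of that flow, not the benchmark's official evaluation code, and the `WDS` solution is a toy stand-in for model output:

```python
sample = {
    "entry_point": "test_run",
    "test_function": (
        "def test_run(content1):\n"
        "    return WDS().without_duplicates(content1)"
    ),
    "test_list": ['assert candidate("abcabcbb")==3'],
}

# Toy "generated" solution standing in for model output:
# a sliding-window longest-substring-without-repeats implementation.
generated_code = (
    "class WDS:\n"
    "    def without_duplicates(self, s):\n"
    "        seen, start, best = {}, 0, 0\n"
    "        for i, ch in enumerate(s):\n"
    "            if ch in seen and seen[ch] >= start:\n"
    "                start = seen[ch] + 1\n"
    "            seen[ch] = i\n"
    "            best = max(best, i - start + 1)\n"
    "        return best"
)

namespace = {}
exec(generated_code, namespace)           # define the WDS class
exec(sample["test_function"], namespace)  # define the test wrapper
candidate = namespace[sample["entry_point"]]
for assertion in sample["test_list"]:
    exec(assertion, {"candidate": candidate})  # raises AssertionError on failure
print("all tests passed")
```

In practice, untrusted generated code should be executed in a sandboxed subprocess with timeouts rather than via a bare `exec` as shown here.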

### Data Splits

The MultiOOP dataset consists of a test split containing 1602 samples in total, comprising 267 distinct tasks for each of the six supported programming languages.

## Dataset Creation

For detailed information on the dataset's creation methodology, task design, the translator used to extend the single-language benchmark, and the definition of the pass@o metric, please refer to the original paper: [A Multi-Language Object-Oriented Programming Benchmark for Large Language Models](https://huggingface.co/papers/2509.26111).

## Citation Information

```bibtex
@article{wang2024multioop,
  title={A Multi-Language Object-Oriented Programming Benchmark for Large Language Models},
  author={Wang, Shuai and Ding, Liang and Shen, Li and Luo, Yong and Du, Bo and Tao, Dacheng},
  journal={arXiv preprint arXiv:2509.26111},
  year={2024},
  url={https://huggingface.co/papers/2509.26111}
}
```

## Contributions

Thanks to @lvwerra for adding this dataset.