Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type 'arguments' with no child field to Parquet. Consider adding a dummy child field.
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 642, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 457, in _build_writer
self.pa_writer = self._WRITER_CLASS(self.stream, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'arguments' with no child field to Parquet. Consider adding a dummy child field.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1847, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 661, in finalize
self._build_writer(self.schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 457, in _build_writer
self.pa_writer = self._WRITER_CLASS(self.stream, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'arguments' with no child field to Parquet. Consider adding a dummy child field.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
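The root cause appears to be a struct column whose value has no fields: in the preview rows below, evaluate_tool carries an empty arguments object ({}), which datasets infers as a field-less struct and the Parquet writer cannot represent. Below is a minimal sketch of one possible fix applied before pushing the data, assuming the rows exist as plain Python dicts; the sanitize helper, the JSON-string encoding, and the push_to_hub call are illustrative choices, not taken from this repository.

```python
import json

from datasets import Dataset

DICT_COLUMNS = ["mcp_config", "setup_tool", "evaluate_tool", "agent_config"]

def sanitize(row: dict) -> dict:
    # Option A (used here): store nested configs as JSON strings, so an empty
    # arguments object becomes the string "{}" instead of a field-less struct.
    # Option B (what the error message suggests): keep the dicts but add a dummy
    # key to every empty object, e.g. {"arguments": {"_": ""}}.
    out = dict(row)
    for col in DICT_COLUMNS:
        out[col] = json.dumps(out[col], sort_keys=True)
    return out

# One abbreviated row in the shape shown in the preview below.
rows = [{
    "id": "seed_2",
    "prompt": "OBSERVATION: ...",
    "mcp_config": {"hud": {"url": "https://mcp.hud.so/v3/mcp", "headers": {}}},
    "setup_tool": {"name": "setup", "arguments": {"seed": 2, "n_states": 3}},
    "evaluate_tool": {"name": "evaluate", "arguments": {}},  # the empty struct
    "agent_config": {"temperature": 0, "top_p": 1},
    "allowed_tools": ["act", "submit_table"],
    "max_steps": 64,
}]

ds = Dataset.from_list([sanitize(r) for r in rows])
# ds.push_to_hub("<dataset repo id>")  # the viewer can then write Parquet
```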
| Column | Type |
|---|---|
| id | string |
| prompt | string |
| mcp_config | dict |
| setup_tool | dict |
| evaluate_tool | dict |
| agent_config | dict |
| allowed_tools | list |
| max_steps | int64 |
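For orientation, one preview row maps onto roughly the following Python structure. This is a descriptive sketch of the columns above; the TypedDict name and comments are mine, not from the dataset.

```python
from typing import Any, TypedDict

class TaskRow(TypedDict):
    id: str                        # e.g. "seed_2"
    prompt: str                    # observation plus episode notes shown to the agent
    mcp_config: dict[str, Any]     # MCP connection info (url, auth headers, image)
    setup_tool: dict[str, Any]     # {"name": "setup", "arguments": {...episode params...}}
    evaluate_tool: dict[str, Any]  # {"name": "evaluate", "arguments": {}}
    agent_config: dict[str, Any]   # system_prompt, temperature, top_p
    allowed_tools: list[str]       # ["act", "submit_table"]
    max_steps: int                 # step cap for the episode, e.g. 64
```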
Row 1:
id: seed_2
prompt:
OBSERVATION:
{"alphabet": ["A", "B", "C"], "outputs": ["x", "y", "z"], "budget": 40, "goal": "basic", "n_states": 3, "target_len": 8, "trap": false}
Episode notes:
- alphabet: allowed input symbols ('A','B','C').
- outputs: each act(symbol) produces exactly one output in {'x','y','z'}.
- budget: total act() calls available this episode.
- n_states: total number of states (indices 0..n-1). In explanations we refer to them as s0, s1, ..., s{n-1}. Start is s0 (index 0).
- trap: whether trap transitions may exist; if trap_hit becomes true, success is impossible.
This is a single-trajectory, stateful, non-resetting episode: each act(symbol) advances the live machine and produces an output. Both output and next state depend on (current_state, symbol). There is no undo or rewind; the machine is always live. submit_table(...) does not reset state.
Task: Use act() to gather enough evidence (observing the outputs), then call submit_table(table_json) with a complete table for all states s0..s{n-1} (indices 0..n-1 in JSON) and symbols A,B,C.
If your table is incorrect, submit_table may return a counterexample (a short input sequence from the start state with the true outputs); you may use it to adjust your hypothesis. This consumes 1 query and the episode continues.
Use only act() and submit_table(...). Always terminate by calling submit_table(...).
Tool call schemas (short):
- act(symbol: 'A'|'B'|'C') -> returns JSON {output:'x'|'y'|'z', budget_left:int, t:int, trap_hit:bool, queries_used:int}.
- submit_table(table_json: string-of-JSON). The table_json string MUST parse to an object with keys 'n', 'start', 'trans',
and each symbol entry is an array [next_state:int, output:'x'|'y'|'z'] (do NOT use objects like {output:..., next_state:...}).
Minimal example: {"n":2, "start":0, "trans":{"0":{"A":[1,"x"],"B":[1,"z"],"C":[1,"z"]},"1":{"A":[0,"z"],"B":[1,"z"],"C":[1,"z"]}}}.
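A minimal validation sketch for the table_json shape described above; the helper name is hypothetical and nothing here comes from the benchmark's own code.

```python
import json

ALPHABET = ("A", "B", "C")
OUTPUTS = {"x", "y", "z"}

def is_valid_table_json(table_json: str) -> bool:
    # Checks the documented shape: {"n": int, "start": 0,
    # "trans": {"<state>": {"A": [next_state, output], "B": [...], "C": [...]}}}
    try:
        t = json.loads(table_json)
    except json.JSONDecodeError:
        return False
    if not isinstance(t, dict) or not isinstance(t.get("n"), int) or t.get("start") != 0:
        return False
    trans = t.get("trans")
    if not isinstance(trans, dict) or set(trans) != {str(s) for s in range(t["n"])}:
        return False
    for row in trans.values():
        if not isinstance(row, dict) or set(row) != set(ALPHABET):
            return False
        for entry in row.values():
            if not (isinstance(entry, list) and len(entry) == 2):
                return False
            next_state, output = entry
            if not (isinstance(next_state, int) and 0 <= next_state < t["n"]):
                return False
            if output not in OUTPUTS:
                return False
    return True

# The minimal example from the prompt above passes the check.
example = '{"n":2, "start":0, "trans":{"0":{"A":[1,"x"],"B":[1,"z"],"C":[1,"z"]},"1":{"A":[0,"z"],"B":[1,"z"],"C":[1,"z"]}}}'
assert is_valid_table_json(example)
```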
mcp_config:
{
"hud": {
"headers": {
"Authorization": "Bearer ${HUD_API_KEY}",
"Mcp-Image": "docker.io/vedanshsharma123/dedeucebench_hud@sha256:4cd3bc40218f472d7ad945bd7a4d4aa38e3e0a77a775e92b5ac225d10042c3ee",
"Run-Id": "${RUN_ID}"
},
"url": "https://mcp.hud.so/v3/mcp"
}
}
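The ${HUD_API_KEY} and ${RUN_ID} placeholders are presumably substituted from environment variables before the MCP connection is opened; the sketch below illustrates that substitution (the expansion step is an assumption, not documented on this page).

```python
import json
import os

# Same structure as the mcp_config above (Mcp-Image header omitted for brevity).
mcp_config = {
    "hud": {
        "url": "https://mcp.hud.so/v3/mcp",
        "headers": {
            "Authorization": "Bearer ${HUD_API_KEY}",
            "Run-Id": "${RUN_ID}",
        },
    }
}

def expand_placeholders(cfg: dict) -> dict:
    # os.path.expandvars replaces ${VAR} with the value of the environment
    # variable VAR and leaves unknown placeholders untouched.
    return json.loads(os.path.expandvars(json.dumps(cfg)))

os.environ.setdefault("HUD_API_KEY", "sk-example")  # dummy value for the demo
print(expand_placeholders(mcp_config)["hud"]["headers"]["Authorization"])
```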
setup_tool:
{
"arguments": {
"budget": 40,
"feedback": true,
"max_steps": 64,
"mode": "basic",
"n_states": 3,
"seed": 2,
"target_len": 8,
"trap": false,
"variety": false
},
"name": "setup"
}
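The two preview rows share every setup argument except budget and n_states, so entries like this are naturally produced by a small parameter sweep; the helper below is a hypothetical illustration, not the dataset's actual build script.

```python
def make_setup_tool(seed: int, n_states: int, budget: int) -> dict:
    # Constant fields mirror the setup_tool entries shown in this preview.
    return {
        "name": "setup",
        "arguments": {
            "budget": budget,
            "feedback": True,
            "max_steps": 64,
            "mode": "basic",
            "n_states": n_states,
            "seed": seed,
            "target_len": 8,
            "trap": False,
            "variety": False,
        },
    }

# The setup_tool fields of the two rows shown for seed 2:
setups = [make_setup_tool(seed=2, n_states=3, budget=40),
          make_setup_tool(seed=2, n_states=2, budget=55)]
```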
evaluate_tool:
{
"arguments": {},
"name": "evaluate"
}
agent_config:
{
"system_prompt": "You are an autonomous tool-using agent interacting with a hidden Mealy machine (finite-state transducer).\nObjective: exactly identify the machine and submit the full transition table via submit_table(table_json).\nReturn ONLY function tool calls; never output natural language. All responses must be valid JSON if any content is emitted.\n\nBenchmark focus: identification-first. Success is achieved only by exact transition-table submission via submit_table(table_json).\n\nEpisode semantics:\n- Stateful episode: the hidden machine's state persists across all tool calls; there are no resets between calls.\n- act(symbol) produces exactly one output (a symbol in {'x','y','z'}) and advances the hidden state; both output and next state are deterministic functions of (current_state, symbol).\n- submit_table(table_json) does not change or reset state; wrong submissions consume 1 query and the episode continues; correct submission ends the episode.\n- Start state is 0; the hidden state updates only when you call act(symbol).\n- Each act() consumes 1 query from the budget (invalid symbols still consume 1 and return an error).\n- submit_table(table_json): if your table is incorrect, it consumes 1 query and does NOT end the episode (when feedback is enabled, a short counterexample is returned). If correct, it ends the episode and does not consume budget. When budget reaches 0, the episode ends with ok=false.\n\nTools (use only act() and submit_table()):\n- act(symbol: 'A'|'B'|'C') -> JSON {output, budget_left, t, trap_hit, queries_used}. Each call consumes 1 query, produces an output, and advances the hidden state.\n- submit_table(...) -> JSON {ok, budget_left, queries_used, trap_hit, counterexample?}. If ok=false, consumes 1 query and does NOT end the episode (counterexample present only when feedback is enabled). If ok=true, ends the episode.\n\nTool return fields (definitions):\n- output: one of {'x','y','z'} produced by the latest act() call.\n- budget_left: remaining number of act() queries.\n- t: 1-based step index since the episode started (increments on each act()).\n- trap_hit: boolean; once true it remains true for the rest of the episode.\n- queries_used: total count of act() calls so far.\n\nCounterexample semantics (when feedback is enabled):\n- If submit_table is incorrect, the environment may return a short distinguishing test starting from the start state (state 0): a sequence of inputs with the corresponding true outputs.\n- This counterexample is diagnostic only. It does NOT change the live episode state, and it is NOT tied to your current state trajectory.\n- You may use it to refine your hypothesis; then continue probing with act() and resubmit.\n\nSubmit-table JSON schema (table_json string must parse to this shape, strictly follow this):\nImportant: Each entry is [next_state:int, output:'x'|'y'|'z'] β do NOT swap to [output, next_state].\n\n{\n \"n\": <int total_states>,\n \"start\": 0,\n \"trans\": {\n \"0\": { \"A\": [<ns:int>, <output:\"x\"|\"y\"|\"z\">], \"B\": [<ns>, <output>], \"C\": [<ns>, <output>] },\n \"1\": { \"A\": [<ns>, <output>], \"B\": [<ns>, <output>], \"C\": [<ns>, <output>] },\n ... 
up to \"n-1\"\n }\n}\n\nSkeleton example of table_json (Strictly follow this) (for n=2 β adjust values):\n{\"n\":2,\"start\":0,\"trans\":{\"0\":{\"A\":[1,\"y\"],\"B\":[0,\"x\"],\"C\":[0,\"x\"]},\"1\":{\"A\":[0,\"x\"],\"B\":[1,\"y\"],\"C\":[1,\"z\"]}}}\n\nFormatting & compliance:\n- Respond only with function tool calls as per the provided tool schemas.\n- The submit_table argument must be a single JSON string (not an object) matching the schema mentioned. Note that it include n, start and trans parts.\n- Do NOT echo the observation or tool descriptions.\n- Ensure \"trans\" covers every state index 0..n-1 and each of A,B,C exactly once.\n- Always terminate by calling submit_table(...).",
"temperature": 0,
"top_p": 1
}
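The system prompt above specifies that a wrong submission may return a counterexample: an input sequence from state 0 together with the true outputs. Below is a minimal sketch of replaying such a counterexample against a hypothesized table to locate the first diverging transition; the helper and the counterexample values are made up for illustration, using the table dict shape documented above.

```python
from __future__ import annotations

def first_mismatch(table: dict, inputs: list[str], true_outputs: list[str]) -> int | None:
    # Simulate the hypothesized Mealy machine from the start state; return the index
    # of the first step whose predicted output disagrees with the true output,
    # or None if the hypothesis reproduces the whole counterexample.
    state = table["start"]
    for i, (symbol, expected) in enumerate(zip(inputs, true_outputs)):
        next_state, output = table["trans"][str(state)][symbol]
        if output != expected:
            return i
        state = next_state
    return None

hypothesis = {
    "n": 2,
    "start": 0,
    "trans": {
        "0": {"A": [1, "x"], "B": [1, "z"], "C": [1, "z"]},
        "1": {"A": [0, "z"], "B": [1, "z"], "C": [1, "z"]},
    },
}
# Made-up counterexample for illustration: the mismatch index points at the
# transition(s) to re-probe with act() before resubmitting.
print(first_mismatch(hypothesis, ["A", "A", "B"], ["x", "z", "y"]))  # -> 2
```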
allowed_tools:
[
"act",
"submit_table"
]
max_steps: 64

Row 2:
id: seed_2
prompt:
OBSERVATION:
{"alphabet": ["A", "B", "C"], "outputs": ["x", "y", "z"], "budget": 35, "goal": "basic", "n_states": 2, "target_len": 8, "trap": false}
Episode notes:
- alphabet: allowed input symbols ('A','B','C').
- outputs: each act(symbol) produces exactly one output in {'x','y','z'}.
- budget: total act() calls available this episode.
- n_states: total number of states (indices 0..n-1). In explanations we refer to them as s0, s1, ..., s{n-1}. Start is s0 (index 0).
- trap: whether trap transitions may exist; if trap_hit becomes true, success is impossible.
This is a single-trajectory, stateful, non-resetting episode: each act(symbol) advances the live machine and produces an output. Both output and next state depend on (current_state, symbol). There is no undo or rewind; the machine is always live. submit_table(...) does not reset state.
Task: Use act() to gather enough evidence (observing the outputs), then call submit_table(table_json) with a complete table for all states s0..s{n-1} (indices 0..n-1 in JSON) and symbols A,B,C.
If your table is incorrect, submit_table may return a counterexample (a short input sequence from the start state with the true outputs); you may use it to adjust your hypothesis. This consumes 1 query and the episode continues.
Use only act() and submit_table(...). Always terminate by calling submit_table(...).
Tool call schemas (short):
- act(symbol: 'A'|'B'|'C') -> returns JSON {output:'x'|'y'|'z', budget_left:int, t:int, trap_hit:bool, queries_used:int}.
- submit_table(table_json: string-of-JSON). The table_json string MUST parse to an object with keys 'n', 'start', 'trans',
and each symbol entry is an array [next_state:int, output:'x'|'y'|'z'] (do NOT use objects like {output:..., next_state:...}).
Minimal example: {"n":2, "start":0, "trans":{"0":{"A":[1,"x"],"B":[1,"z"],"C":[1,"z"]},"1":{"A":[0,"z"],"B":[1,"z"],"C":[1,"z"]}}}.
mcp_config:
{
"hud": {
"headers": {
"Authorization": "Bearer ${HUD_API_KEY}",
"Mcp-Image": "docker.io/vedanshsharma123/dedeucebench_hud@sha256:4cd3bc40218f472d7ad945bd7a4d4aa38e3e0a77a775e92b5ac225d10042c3ee",
"Run-Id": "${RUN_ID}"
},
"url": "https://mcp.hud.so/v3/mcp"
}
}
setup_tool:
{
"arguments": {
"budget": 55,
"feedback": true,
"max_steps": 64,
"mode": "basic",
"n_states": 2,
"seed": 2,
"target_len": 8,
"trap": false,
"variety": false
},
"name": "setup"
}
evaluate_tool:
{
"arguments": {},
"name": "evaluate"
}
agent_config:
{
"system_prompt": "You are an autonomous tool-using agent interacting with a hidden Mealy machine (finite-state transducer).\nObjective: exactly identify the machine and submit the full transition table via submit_table(table_json).\nReturn ONLY function tool calls; never output natural language. All responses must be valid JSON if any content is emitted.\n\nBenchmark focus: identification-first. Success is achieved only by exact transition-table submission via submit_table(table_json).\n\nEpisode semantics:\n- Stateful episode: the hidden machine's state persists across all tool calls; there are no resets between calls.\n- act(symbol) produces exactly one output (a symbol in {'x','y','z'}) and advances the hidden state; both output and next state are deterministic functions of (current_state, symbol).\n- submit_table(table_json) does not change or reset state; wrong submissions consume 1 query and the episode continues; correct submission ends the episode.\n- Start state is 0; the hidden state updates only when you call act(symbol).\n- Each act() consumes 1 query from the budget (invalid symbols still consume 1 and return an error).\n- submit_table(table_json): if your table is incorrect, it consumes 1 query and does NOT end the episode (when feedback is enabled, a short counterexample is returned). If correct, it ends the episode and does not consume budget. When budget reaches 0, the episode ends with ok=false.\n\nTools (use only act() and submit_table()):\n- act(symbol: 'A'|'B'|'C') -> JSON {output, budget_left, t, trap_hit, queries_used}. Each call consumes 1 query, produces an output, and advances the hidden state.\n- submit_table(...) -> JSON {ok, budget_left, queries_used, trap_hit, counterexample?}. If ok=false, consumes 1 query and does NOT end the episode (counterexample present only when feedback is enabled). If ok=true, ends the episode.\n\nTool return fields (definitions):\n- output: one of {'x','y','z'} produced by the latest act() call.\n- budget_left: remaining number of act() queries.\n- t: 1-based step index since the episode started (increments on each act()).\n- trap_hit: boolean; once true it remains true for the rest of the episode.\n- queries_used: total count of act() calls so far.\n\nCounterexample semantics (when feedback is enabled):\n- If submit_table is incorrect, the environment may return a short distinguishing test starting from the start state (state 0): a sequence of inputs with the corresponding true outputs.\n- This counterexample is diagnostic only. It does NOT change the live episode state, and it is NOT tied to your current state trajectory.\n- You may use it to refine your hypothesis; then continue probing with act() and resubmit.\n\nSubmit-table JSON schema (table_json string must parse to this shape, strictly follow this):\nImportant: Each entry is [next_state:int, output:'x'|'y'|'z'] β do NOT swap to [output, next_state].\n\n{\n \"n\": <int total_states>,\n \"start\": 0,\n \"trans\": {\n \"0\": { \"A\": [<ns:int>, <output:\"x\"|\"y\"|\"z\">], \"B\": [<ns>, <output>], \"C\": [<ns>, <output>] },\n \"1\": { \"A\": [<ns>, <output>], \"B\": [<ns>, <output>], \"C\": [<ns>, <output>] },\n ... 
up to \"n-1\"\n }\n}\n\nSkeleton example of table_json (Strictly follow this) (for n=2 β adjust values):\n{\"n\":2,\"start\":0,\"trans\":{\"0\":{\"A\":[1,\"y\"],\"B\":[0,\"x\"],\"C\":[0,\"x\"]},\"1\":{\"A\":[0,\"x\"],\"B\":[1,\"y\"],\"C\":[1,\"z\"]}}}\n\nFormatting & compliance:\n- Respond only with function tool calls as per the provided tool schemas.\n- The submit_table argument must be a single JSON string (not an object) matching the schema mentioned. Note that it include n, start and trans parts.\n- Do NOT echo the observation or tool descriptions.\n- Ensure \"trans\" covers every state index 0..n-1 and each of A,B,C exactly once.\n- Always terminate by calling submit_table(...).",
"temperature": 0,
"top_p": 1
}
allowed_tools:
[
"act",
"submit_table"
]
max_steps: 64
README.md exists but content is empty.
Downloads last month: 6