{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# OpenTrack Quickstart\n",
"\n",
"This simplified notebook lets you jump straight into training humanoid motion tracking policies with OpenTrack!\n",
"\n",
"**Everything is already set up:**\n",
"- ā
OpenTrack repository cloned\n",
"- ā
PyTorch and dependencies installed\n",
"- ā
Motion capture datasets downloaded\n",
"- ā
Workspace directories created\n",
"\n",
"Just run the cells and enjoy! š"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, let's set up our workspace paths and helper functions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import subprocess\n",
"import time\n",
"from pathlib import Path\n",
"from IPython.display import Video, display, HTML\n",
"\n",
"# Workspace paths (already set up by container initialization)\n",
"WORKSPACE = Path(\"/data/workspaces/opentrack\")\n",
"DATASETS_DIR = WORKSPACE / \"datasets\"\n",
"MODELS_DIR = WORKSPACE / \"models\"\n",
"VIDEOS_DIR = WORKSPACE / \"videos\"\n",
"OPENTRACK_REPO = Path.home() / \"OpenTrack\"\n",
"\n",
"# Change to OpenTrack directory\n",
"os.chdir(OPENTRACK_REPO)\n",
"\n",
"print(\"š Workspace directories:\")\n",
"print(f\" Datasets: {DATASETS_DIR}\")\n",
"print(f\" Models: {MODELS_DIR}\")\n",
"print(f\" Videos: {VIDEOS_DIR}\")\n",
"print(f\"\\nā Working directory: {os.getcwd()}\")\n",
"\n",
"# Check if datasets exist\n",
"mocap_files = list((DATASETS_DIR / \"lafan1\" / \"UnitreeG1\").glob(\"*.npz\"))\n",
"print(f\"\\nā Found {len(mocap_files)} motion capture files\")"
]
},
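{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before going further, it can help to peek inside one of the mocap `.npz` files. The cell below simply lists whatever arrays it finds; the exact key names depend on how the dataset was preprocessed, so none are assumed here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the first mocap file (a sketch; array names vary by preprocessing)\n",
"import numpy as np\n",
"\n",
"if mocap_files:\n",
"    sample = mocap_files[0]\n",
"    print(f\"Inspecting: {sample.name}\")\n",
"    with np.load(sample) as data:\n",
"        for key in data.files:\n",
"            print(f\"  {key}: shape={data[key].shape}, dtype={data[key].dtype}\")\n",
"else:\n",
"    print(\"⚠️ No mocap files found -- check the dataset download.\")"
]
},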
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Helper function to run OpenTrack commands\n",
"def run_opentrack_command(cmd_args, description=\"Running command\"):\n",
" \"\"\"Run an OpenTrack command and display output\"\"\"\n",
" print(f\"\\n{'='*60}\")\n",
" print(f\"š {description}\")\n",
" print(f\" Command: python {' '.join(cmd_args)}\")\n",
" print(f\"{'='*60}\\n\")\n",
" \n",
" result = subprocess.run(\n",
" ['python'] + cmd_args,\n",
" capture_output=False,\n",
" text=True\n",
" )\n",
" \n",
" if result.returncode == 0:\n",
" print(f\"\\nā
{description} completed successfully!\")\n",
" else:\n",
" print(f\"\\nā ļø {description} exited with code {result.returncode}\")\n",
" \n",
" return result.returncode\n",
"\n",
"# Helper to find latest experiment\n",
"def find_latest_experiment(pattern=''):\n",
" \"\"\"Find the most recent experiment folder\"\"\"\n",
" experiments = [d for d in MODELS_DIR.iterdir() if d.is_dir() and pattern in d.name]\n",
" if not experiments:\n",
" return None\n",
" return sorted(experiments, key=lambda x: x.stat().st_mtime, reverse=True)[0].name\n",
"\n",
"print(\"ā Helper functions loaded\")"
]
},
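{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a final sanity check, confirm that the entry-point scripts used throughout this notebook actually exist at the repo root. This is a plain filesystem check and makes no assumptions about the scripts' contents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify the OpenTrack entry points called later in this notebook\n",
"expected_scripts = ['train_policy.py', 'brax2torch.py', 'play_policy.py', 'generate_terrain.py']\n",
"for name in expected_scripts:\n",
"    status = '✅ found' if (OPENTRACK_REPO / name).exists() else '⚠️ missing'\n",
"    print(f\"  {name}: {status}\")"
]
},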
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quick Training (Debug Mode)\n",
"\n",
"Let's train a quick policy in debug mode to verify everything works. This takes just a few minutes:\n",
"\n",
"**Parameters:**\n",
"- `--exp_name debug` - Name for this experiment\n",
"- `--terrain_type flat_terrain` - Train on flat ground"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"run_opentrack_command(\n",
" ['train_policy.py', '--exp_name', 'quickstart_debug', '--terrain_type', 'flat_terrain'],\n",
" description=\"Training OpenTrack policy (debug mode)\"\n",
")\n",
"\n",
"# Find the experiment\n",
"exp_folder = find_latest_experiment('quickstart_debug')\n",
"if exp_folder:\n",
" print(f\"\\nš¦ Experiment saved: {exp_folder}\")\n",
" print(f\" Location: {MODELS_DIR / exp_folder}\")"
]
},
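{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training artifacts land in the experiment folder under `models/`. The exact layout (checkpoints, configs, logs) depends on OpenTrack's trainer, so the cell below just walks the folder and prints whatever files it finds:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Walk the experiment folder; no specific file names are assumed\n",
"exp_folder = find_latest_experiment('quickstart_debug')\n",
"if exp_folder:\n",
"    exp_path = MODELS_DIR / exp_folder\n",
"    for path in sorted(exp_path.rglob('*')):\n",
"        if path.is_file():\n",
"            size_kb = path.stat().st_size / 1024\n",
"            print(f\"  {path.relative_to(exp_path)} ({size_kb:.1f} KB)\")\n",
"else:\n",
"    print(\"⚠️ No experiment folder found yet.\")"
]
},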
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert Checkpoint (Brax ā PyTorch)\n",
"\n",
"OpenTrack trains using Brax (JAX-based), but we need to convert the checkpoint to PyTorch for deployment:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp_folder = find_latest_experiment('quickstart_debug')\n",
"\n",
"if exp_folder:\n",
" run_opentrack_command(\n",
" ['brax2torch.py', '--exp_name', exp_folder],\n",
" description=\"Converting Brax checkpoint to PyTorch\"\n",
" )\n",
"else:\n",
" print(\"ā ļø No experiment found. Please run training first.\")"
]
},
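{
"cell_type": "markdown",
"metadata": {},
"source": [
"To verify the conversion worked, look for a PyTorch checkpoint inside the experiment folder. The `.pt`/`.pth` extensions below are an assumption; adjust the globs if `brax2torch.py` writes a different file name:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Look for the converted checkpoint (the extensions are an assumption)\n",
"if exp_folder:\n",
"    exp_path = MODELS_DIR / exp_folder\n",
"    torch_ckpts = list(exp_path.rglob('*.pt')) + list(exp_path.rglob('*.pth'))\n",
"    if torch_ckpts:\n",
"        for ckpt in torch_ckpts:\n",
"            print(f\"✅ Converted checkpoint: {ckpt}\")\n",
"    else:\n",
"        print(\"⚠️ No .pt/.pth file found -- check where brax2torch.py writes output.\")\n",
"else:\n",
"    print(\"⚠️ No experiment found. Please run training first.\")"
]
},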
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Videos\n",
"\n",
"Now let's visualize the policy by generating videos using MuJoCo's headless renderer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp_folder = find_latest_experiment('quickstart_debug')\n",
"\n",
"if exp_folder:\n",
" print(f\"š¬ Generating videos for experiment: {exp_folder}\")\n",
" print(f\" Videos will be saved to: {VIDEOS_DIR}\\n\")\n",
" \n",
" run_opentrack_command(\n",
" ['play_policy.py', '--exp_name', exp_folder, '--use_renderer'],\n",
" description=\"Generating videos with MuJoCo renderer\"\n",
" )\n",
" \n",
" # Give it a moment to finish writing files\n",
" time.sleep(2)\n",
" \n",
" # Find generated videos\n",
" videos = list(VIDEOS_DIR.glob(\"*.mp4\")) + list(VIDEOS_DIR.glob(\"*.gif\"))\n",
" \n",
" if videos:\n",
" print(f\"\\nā
Generated {len(videos)} video(s):\")\n",
" for v in sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True):\n",
" print(f\" - {v.name}\")\n",
" else:\n",
" print(\"\\nā ļø No videos found. They might be in the experiment folder.\")\n",
"else:\n",
" print(\"ā ļø No experiment found. Please run training first.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Display Videos\n",
"\n",
"Let's watch the trained policy in action:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Find all videos\n",
"videos = list(VIDEOS_DIR.glob(\"*.mp4\")) + list(VIDEOS_DIR.glob(\"*.gif\"))\n",
"videos = sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True)\n",
"\n",
"if not videos:\n",
" # Search in experiment folders too\n",
" videos = list(MODELS_DIR.glob(\"**/*.mp4\")) + list(MODELS_DIR.glob(\"**/*.gif\"))\n",
" videos = sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True)\n",
"\n",
"if videos:\n",
" print(f\"š„ Found {len(videos)} video(s). Displaying...\\n\")\n",
" \n",
" for i, video_path in enumerate(videos[:3]): # Show up to 3 most recent\n",
" print(f\"\\n{'='*60}\")\n",
" print(f\"Video {i+1}: {video_path.name}\")\n",
" print(f\"{'='*60}\")\n",
" \n",
" try:\n",
" if video_path.suffix == '.mp4':\n",
" display(Video(str(video_path), width=800, embed=True))\n",
" elif video_path.suffix == '.gif':\n",
" display(HTML(f'
'))\n",
" except Exception as e:\n",
" print(f\"ā ļø Error displaying video: {e}\")\n",
" print(f\" You can access it at: {video_path}\")\n",
"else:\n",
" print(\"ā ļø No videos found.\")\n",
" print(\"\\nMake sure you:\")\n",
" print(\" 1. Trained a policy\")\n",
" print(\" 2. Converted the checkpoint\")\n",
" print(\" 3. Generated videos\")"
]
},
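{
"cell_type": "markdown",
"metadata": {},
"source": [
"If `Video(..., embed=True)` fails to render (some setups refuse file paths outside the notebook's directory), embedding the raw bytes as a base64 data URI is a reliable fallback. A minimal sketch; note it inlines the whole file into the notebook, so keep it to short clips:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fallback display: inline a video as a base64 data URI\n",
"import base64\n",
"\n",
"def embed_video(path, width=800):\n",
"    \"\"\"Return an HTML element with the file embedded as base64.\"\"\"\n",
"    path = Path(path)\n",
"    data = base64.b64encode(path.read_bytes()).decode('ascii')\n",
"    if path.suffix == '.gif':\n",
"        return HTML(f'<img src=\"data:image/gif;base64,{data}\" width=\"{width}\">')\n",
"    return HTML(\n",
"        f'<video width=\"{width}\" controls>'\n",
"        f'<source src=\"data:video/mp4;base64,{data}\" type=\"video/mp4\">'\n",
"        f'</video>'\n",
"    )\n",
"\n",
"if videos:\n",
"    display(embed_video(videos[0]))\n",
"else:\n",
"    print(\"⚠️ No videos to embed.\")"
]
},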
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"### Train on Rough Terrain\n",
"\n",
"Generate terrain and train a more robust policy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate rough terrain\n",
"run_opentrack_command(\n",
" ['generate_terrain.py'],\n",
" description=\"Generating rough terrain\"\n",
")\n",
"\n",
"print(\"\\nā Terrain generated!\")\n",
"print(\" You can now train with: --terrain_type rough_terrain\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Train on rough terrain\n",
"run_opentrack_command(\n",
" ['train_policy.py', '--exp_name', 'rough_terrain', '--terrain_type', 'rough_terrain'],\n",
" description=\"Training on rough terrain\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Training (Longer, Better Results)\n",
"\n",
"For production-quality results, remove the debug flag and train for longer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This will take significantly longer but produce better results\n",
"# run_opentrack_command(\n",
"# ['train_policy.py', '--exp_name', 'full_training', '--terrain_type', 'flat_terrain'],\n",
"# description=\"Full training (this takes a while!)\"\n",
"# )\n",
"\n",
"print(\"Uncomment the code above to run full training (takes 20-60 minutes on GPU)\")"
]
},
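{
"cell_type": "markdown",
"metadata": {},
"source": [
"For long runs you may prefer to launch training in the background so the notebook stays responsive. A sketch using `subprocess.Popen` (same command as above, just non-blocking; the log path is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Launch full training without blocking the notebook (uncomment to use)\n",
"# log_path = WORKSPACE / 'full_training.log'\n",
"# log = open(log_path, 'w')\n",
"# proc = subprocess.Popen(\n",
"#     ['python', 'train_policy.py', '--exp_name', 'full_training',\n",
"#      '--terrain_type', 'flat_terrain'],\n",
"#     stdout=log, stderr=subprocess.STDOUT\n",
"# )\n",
"# print(f\"Started PID {proc.pid}; follow progress with: tail -f {log_path}\")\n",
"\n",
"print(\"Uncomment the code above to train in the background and tail the log file.\")"
]
},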
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Play Reference Motion\n",
"\n",
"Visualize the original mocap data alongside the policy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp_folder = find_latest_experiment()\n",
"\n",
"if exp_folder:\n",
" run_opentrack_command(\n",
" ['play_policy.py', '--exp_name', exp_folder, '--use_renderer', '--play_ref_motion'],\n",
" description=\"Generating videos with reference motion comparison\"\n",
" )\n",
"else:\n",
" print(\"ā ļø No experiment found.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"**What we did:**\n",
"1. ā
Trained a humanoid motion tracking policy using OpenTrack\n",
"2. ā
Converted the checkpoint from Brax to PyTorch\n",
"3. ā
Generated videos of the policy in action\n",
"4. ā
Visualized the results\n",
"\n",
"**Project Structure:**\n",
"```\n",
"/data/workspaces/opentrack/\n",
"āāā datasets/ # Motion capture data\n",
"ā āāā lafan1/UnitreeG1/*.npz\n",
"āāā models/ # Trained checkpoints\n",
"ā āāā _/\n",
"āāā videos/ # Generated videos\n",
" āāā *.mp4, *.gif\n",
"```\n",
"\n",
"**All data persists** across container restarts, so you can continue training or generate new videos anytime!\n",
"\n",
"For more advanced usage, check out the full `opentrack.ipynb` notebook."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.13.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}