Julian Bilcke committed on
Commit · 9ebdc51
1 Parent(s): 74cf591

better default config

Files changed:
- .claude/settings.local.json +14 -0
- CLAUDE.md +297 -0
- Dockerfile +10 -2
- samples/{locomotion.ipynb → locomotion/locomotion.ipynb} +0 -0
- samples/{manipulation.ipynb → manipulation/manipulation.ipynb} +0 -0
- samples/opentrack/init_opentrack.sh +117 -0
- samples/{opentrack.ipynb → opentrack/opentrack.ipynb} +0 -0
- samples/opentrack/opentrack_quickstart.ipynb +381 -0
- samples/opentrack/setup.sh +49 -0
- samples/{tutorial.ipynb → tutorial/tutorial.ipynb} +0 -0
- start_server.sh +63 -1
.claude/settings.local.json
ADDED
@@ -0,0 +1,14 @@
{
  "permissions": {
    "allow": [
      "Bash(test:*)",
      "Bash(cat:*)",
      "Bash(tree:*)",
      "Bash(HOME=/tmp/test_home bash -c:*)",
      "Bash(chmod:*)",
      "WebSearch"
    ],
    "deny": [],
    "ask": []
  }
}
CLAUDE.md
ADDED
@@ -0,0 +1,297 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

This is a Hugging Face Space that provides a GPU-accelerated JupyterLab environment for training and simulating robots using the MuJoCo physics engine. The space covers a wide range of robotics applications including locomotion, manipulation, motion tracking, and general physics simulation. It is designed to run in a Docker container with NVIDIA GPU support for hardware-accelerated physics rendering.

## What This Environment Supports

This is a general-purpose MuJoCo training environment with sample notebooks covering:

1. **General MuJoCo Physics** (`tutorial.ipynb`) - Comprehensive introduction to MuJoCo fundamentals including basic rendering, simulation loops, contacts, friction, tendons, actuators, sensors, and advanced rendering techniques

2. **Locomotion** (`locomotion.ipynb`) - Training quadrupedal and bipedal robots for walking, running, and acrobatic behaviors. Includes environments for Unitree Go1/G1, Boston Dynamics Spot, Google Barkour, Berkeley Humanoid, Unitree H1, and more

3. **Manipulation** (`manipulation.ipynb`) - Robot arm and dexterous hand control. Includes Franka Emika Panda pick-and-place tasks and Leap Hand dexterous manipulation with asymmetric actor-critic training

4. **Motion Tracking** (`opentrack.ipynb`) - Humanoid motion tracking and retargeting using the OpenTrack system with motion capture data

## Architecture

### Container Environment
- **Base Image**: nvidia/cuda:12.8.1-devel-ubuntu22.04
- **Python**: 3.13 (Miniconda)
- **GPU Rendering**: Uses EGL (OpenGL for headless rendering) with NVIDIA drivers
- **Web Server**: JupyterLab on port 7860

### Key Components

1. **GPU Initialization** (`init_gpu.py`): Validates GPU setup before starting JupyterLab
   - Checks NVIDIA driver accessibility via `nvidia-smi`
   - Verifies EGL library availability (libEGL.so.1, libGL.so.1, libEGL_nvidia.so.0)
   - Tests EGL device initialization with multiple fallback methods (platform device, default display, surfaceless)
   - Validates MuJoCo rendering at multiple resolutions (64x64, 240x320, 480x640)
   - Critical environment variables: `MUJOCO_GL=egl`, `PYOPENGL_PLATFORM=egl`, `EGL_PLATFORM=surfaceless`

2. **MuJoCo Playground Setup** (`init_mujoco.py`): Downloads MuJoCo model assets
   - Imports `mujoco_playground`, which automatically clones the mujoco_menagerie repository
   - This repository contains robot models (quadrupeds, bipeds, arms, hands, etc.)

3. **Server Startup** (`start_server.sh`): Container entrypoint
   - Sets up NVIDIA EGL library symlinks at runtime (searches /usr/local/nvidia/lib64, /usr/local/cuda/lib64, /usr/lib/nvidia)
   - Runs GPU validation (`python init_gpu.py`)
   - Downloads MuJoCo assets (`python init_mujoco.py`)
   - Disables JupyterLab announcements
   - Launches JupyterLab with iframe embedding support for Hugging Face Spaces

### Sample Notebooks

Sample notebooks are organized in individual folders within `samples/` and are automatically copied to `/data/workspaces/` at container startup:

- **`samples/tutorial/`** - Complete MuJoCo introduction (2258 lines) covering physics fundamentals, rendering, contacts, actuators, sensors, tendons, and camera control
- **`samples/locomotion/`** - Quadrupedal and bipedal locomotion training (1762 lines) with PPO, domain randomization, curriculum learning, and policy fine-tuning
- **`samples/manipulation/`** - Robot manipulation (649 lines) including pick-and-place (Panda arm) and dexterous manipulation (Leap Hand) with asymmetric actor-critic
- **`samples/opentrack/`** - Humanoid motion tracking/retargeting (603 lines) including dataset download, training, checkpoint conversion, and video generation

Each sample is copied to its own workspace directory (`/data/workspaces/<sample_name>/`) at runtime. Notebooks are only copied if they don't already exist, preserving any user modifications.

## Development Commands

### Running Locally with Docker

```bash
# Build the container
docker build -t mujoco-training .

# Run with GPU support
docker run --gpus all -p 7860:7860 mujoco-training
```

### Testing GPU Setup

```bash
# Validate GPU rendering capabilities (run inside container)
python init_gpu.py

# Check NVIDIA driver
nvidia-smi

# Test EGL libraries
ldconfig -p | grep EGL
```

### JupyterLab Access

- Default port: 7860
- Default token: "huggingface" (set via `JUPYTER_TOKEN` environment variable)
- Default landing page: `/lab/tree/workspaces/locomotion/locomotion.ipynb`
- Notebook working directory: `/data` (when deployed as a Hugging Face Space)

### Persistent Storage and Workspaces

When deployed on Hugging Face Spaces, the `/data` directory is backed by persistent storage. At container startup, `start_server.sh` automatically:

1. Creates `/data/workspaces/` if it doesn't exist
2. For each sample in `samples/`, creates `/data/workspaces/<sample_name>/` if it doesn't exist
3. Copies the `.ipynb` file only if it doesn't already exist in the workspace (preserving user modifications)
4. Copies any additional files from the sample directory (datasets, scripts, etc.)

This ensures:
- User modifications to notebooks are preserved across container restarts
- Each sample has its own isolated workspace for generated data, models, and outputs
- Sample notebooks can include supporting files that are copied to the workspace
- Users can create additional workspaces in `/data/workspaces/` for their own projects
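The copy-if-missing behavior described above can be sketched in a few lines of shell. This is an illustrative stand-in for the real `start_server.sh`, using demo directories in place of `samples/` and `/data/workspaces/`:

```shell
# Stand-in paths (the real script uses samples/ and /data/workspaces/)
SAMPLES_DIR="demo_samples"
WORKSPACES_DIR="demo_workspaces"

# Demo fixture: one sample with a notebook plus a supporting file,
# and a workspace that already contains a user-edited notebook.
mkdir -p "$SAMPLES_DIR/tutorial" "$WORKSPACES_DIR/tutorial"
echo '{"cells": []}' > "$SAMPLES_DIR/tutorial/tutorial.ipynb"
echo 'echo extra'    > "$SAMPLES_DIR/tutorial/extra.sh"
echo 'user edit'     > "$WORKSPACES_DIR/tutorial/tutorial.ipynb"

# Copy-if-missing: existing workspace files are never overwritten
for sample_path in "$SAMPLES_DIR"/*/; do
    name=$(basename "$sample_path")
    mkdir -p "$WORKSPACES_DIR/$name"
    for f in "$sample_path"*; do
        [ -e "$f" ] || continue
        target="$WORKSPACES_DIR/$name/$(basename "$f")"
        [ -e "$target" ] || cp -r "$f" "$target"
    done
done

cat "$WORKSPACES_DIR/tutorial/tutorial.ipynb"   # still the user's edit
```

Running this twice is safe: the second pass finds every target already present and copies nothing, which is exactly why user edits survive container restarts.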
## Critical EGL Configuration

The container requires specific EGL configuration for headless GPU rendering:

1. **NVIDIA EGL Vendor Config**: Created at `/usr/share/glvnd/egl_vendor.d/10_nvidia.json` pointing to `libEGL_nvidia.so.0`
2. **Library Path**: `LD_LIBRARY_PATH` includes `/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64`
3. **Runtime Symlinks**: `start_server.sh` creates symlinks to `libEGL_nvidia.so.0` from mounted NVIDIA directories
4. **Environment Variables**: `__EGL_VENDOR_LIBRARY_DIRS=/usr/share/glvnd/egl_vendor.d`
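The vendor config in item 1 is a small glvnd ICD JSON file. A sketch of creating it, written to a demo directory here rather than the real `/usr/share/glvnd/egl_vendor.d/` (the JSON shape follows the standard glvnd ICD format, not a dump of this repo's Dockerfile):

```shell
# Demo directory standing in for /usr/share/glvnd/egl_vendor.d
VENDOR_DIR="demo_egl_vendor.d"
mkdir -p "$VENDOR_DIR"

# Minimal glvnd ICD entry pointing EGL at the NVIDIA vendor library
cat > "$VENDOR_DIR/10_nvidia.json" <<'EOF'
{
    "file_format_version" : "1.0.0",
    "ICD" : {
        "library_path" : "libEGL_nvidia.so.0"
    }
}
EOF

# glvnd is then pointed at the directory via the environment, e.g.:
#   export __EGL_VENDOR_LIBRARY_DIRS="$VENDOR_DIR"
cat "$VENDOR_DIR/10_nvidia.json"
```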
### Troubleshooting EGL Issues

If MuJoCo rendering fails:
1. Verify NVIDIA drivers: `nvidia-smi` should show GPU info
2. Check EGL vendor config: `cat /usr/share/glvnd/egl_vendor.d/10_nvidia.json`
3. Verify library loading: `ldconfig -p | grep EGL`
4. Run comprehensive diagnostic: `python init_gpu.py`
5. Check that `MUJOCO_GL=egl` is set: `echo $MUJOCO_GL`

## Training Workflows

### General MuJoCo Simulation (tutorial.ipynb)

Basic simulation loop:
```python
import mujoco
model = mujoco.MjModel.from_xml_string(xml)
data = mujoco.MjData(model)

# Simulation loop
mujoco.mj_resetData(model, data)
while data.time < duration:
    mujoco.mj_step(model, data)
    # Read sensors, apply controls, etc.
```

Rendering:
```python
with mujoco.Renderer(model, height, width) as renderer:
    mujoco.mj_forward(model, data)
    renderer.update_scene(data, camera="camera_name")
    pixels = renderer.render()
```

### Locomotion Training (locomotion.ipynb)

Typical workflow using Brax + MuJoCo Playground:

1. **Load environment**: `env = registry.load(env_name)`
2. **Get config**: `env_cfg = registry.get_default_config(env_name)`
3. **Configure PPO**: `ppo_params = locomotion_params.brax_ppo_config(env_name)`
4. **Apply domain randomization**: `randomizer = registry.get_domain_randomizer(env_name)`
5. **Train**: Use `brax.training.agents.ppo.train` with the environment and randomization function
6. **Save checkpoints**: Policies saved to `checkpoints/{env_name}/{step}/`
7. **Fine-tune**: Restore from checkpoint and continue training with modified config
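Steps 6-7 can be illustrated with plain Python, no Brax required. The env name and reward keys below are examples only, and the real config object comes from `registry.get_default_config` rather than a plain dict:

```python
from pathlib import Path

def checkpoint_dir(root: str, env_name: str, step: int) -> Path:
    """Checkpoint layout from step 6: checkpoints/{env_name}/{step}/"""
    return Path(root) / env_name / str(step)

def latest_checkpoint(root: str, env_name: str):
    """Find the highest-step checkpoint to restore for fine-tuning (step 7)."""
    env_dir = Path(root) / env_name
    steps = sorted(int(p.name) for p in env_dir.iterdir() if p.name.isdigit())
    return env_dir / str(steps[-1]) if steps else None

# Simulate a finished training run that saved one checkpoint
ckpt = checkpoint_dir("checkpoints", "Go1JoystickFlatTerrain", 10_000_000)
ckpt.mkdir(parents=True, exist_ok=True)

# Curriculum step: restore the latest checkpoint, then continue training
# with a modified reward config (keys here are illustrative)
base_cfg = {"energy": 0.0, "dof_acc": -0.001}
finetune_cfg = {**base_cfg, "energy": -0.003}  # tighten the energy penalty

print(latest_checkpoint("checkpoints", "Go1JoystickFlatTerrain"))
```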
Available environments:
- **Quadrupedal**: Go1JoystickFlatTerrain, Go1JoystickRoughTerrain, Go1Getup, Go1Handstand, Go1Footstand, SpotFlatTerrainJoystick, SpotGetup, SpotJoystickGaitTracking, BarkourJoystick
- **Bipedal**: BerkeleyHumanoidJoystickFlatTerrain, BerkeleyHumanoidJoystickRoughTerrain, G1JoystickFlatTerrain, G1JoystickRoughTerrain, H1InplaceGaitTracking, H1JoystickGaitTracking, Op3Joystick, T1JoystickFlatTerrain, T1JoystickRoughTerrain

Full list: `registry.locomotion.ALL_ENVS`

Key training techniques:
- **Domain Randomization**: Randomizes friction, armature, center of mass, and link masses for sim-to-real transfer
- **Energy Penalties**: `energy_termination_threshold`, `reward_config.energy`, `reward_config.dof_acc` to control power consumption and smoothness
- **Curriculum Learning**: Fine-tune from checkpoints with progressively modified reward configs
- **Asymmetric Actor-Critic**: Actor receives proprioception, critic receives privileged simulation state

### Manipulation Training (manipulation.ipynb)

Similar to locomotion but focuses on:
- **Pick-and-place tasks**: PandaPickCubeOrientation (trains in ~3 minutes on RTX 4090)
- **Dexterous manipulation**: LeapCubeReorient (trains in ~33 minutes on RTX 4090)
- **Asymmetric observations**: Use `policy_obs_key` and `value_obs_key` in PPO params to train the actor on sensor-like data while the critic gets privileged state

Available environments: `registry.manipulation.ALL_ENVS`
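The asymmetric-observation wiring can be sketched without any RL stack. Only the `policy_obs_key`/`value_obs_key` names come from the text above; every other name and value here is illustrative:

```python
# Illustrative PPO parameter dict for asymmetric actor-critic training.
# The actor (policy) sees only sensor-like observations; the critic (value
# function) additionally sees privileged simulator state.
ppo_params = {
    "num_timesteps": 20_000_000,   # illustrative values
    "num_envs": 2048,
    "network_factory": {
        "policy_obs_key": "state",            # proprioception-style observations
        "value_obs_key": "privileged_state",  # full simulator state for the critic
    },
}

obs = {
    "state": [0.0] * 32,              # what the deployed policy will see
    "privileged_state": [0.0] * 102,  # extra info available only in simulation
}

# The actor and critic each select their own observation stream:
actor_input = obs[ppo_params["network_factory"]["policy_obs_key"]]
critic_input = obs[ppo_params["network_factory"]["value_obs_key"]]
print(len(actor_input), len(critic_input))
```

Since only the actor's observation stream must exist on the real robot, the critic is free to use anything the simulator knows, which is what makes this pattern useful for sim-to-real transfer.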
### Motion Tracking (opentrack.ipynb)

OpenTrack workflow for humanoid motion tracking:
1. **Clone repository**: `git clone https://github.com/GalaxyGeneralRobotics/OpenTrack.git`
2. **Download mocap data**: From `huggingface.co/datasets/robfiras/loco-mujoco-datasets` (Lafan1/UnitreeG1)
3. **Train policy**: `python train_policy.py --exp_name debug --terrain_type flat_terrain`
4. **Convert checkpoint**: `python brax2torch.py --exp_name <exp_name>` (Brax → PyTorch)
5. **Generate videos**: `python play_policy.py --exp_name <exp_name> --use_renderer`

## Python Dependencies

Core stack (see `requirements.txt`):
- **JupyterLab**: 4.4.3 (with tornado 6.2 for compatibility)
- **JAX**: CUDA 12 support via `jax[cuda12]`
- **MuJoCo**: 3.3+ with MuJoCo MJX (JAX-based physics)
- **Brax**: JAX-based RL framework for massively parallel training
- **MuJoCo Playground**: Collection of robot environments and training utilities
- **Supporting libraries**: mediapy (video rendering), ipywidgets, nvidia-cusparse-cu12

## File Structure

```
/
├── Dockerfile           # Container with CUDA 12.8 + EGL setup
├── start_server.sh      # Container entrypoint
├── init_gpu.py          # GPU validation script (comprehensive EGL tests)
├── init_mujoco.py       # MuJoCo Playground asset downloader
├── requirements.txt     # Python dependencies
├── packages.txt         # System packages (currently empty)
├── on_startup.sh        # Custom startup commands (placeholder)
├── login.html           # Custom JupyterLab login page
└── samples/             # Example notebooks (organized by topic)
    ├── tutorial/
    │   └── tutorial.ipynb         # MuJoCo fundamentals (2258 lines)
    ├── locomotion/
    │   └── locomotion.ipynb       # Robot locomotion (1762 lines)
    ├── manipulation/
    │   └── manipulation.ipynb     # Robot manipulation (649 lines)
    └── opentrack/
        └── opentrack.ipynb        # Motion tracking (603 lines)
```

When deployed as a Hugging Face Space with persistent storage:
```
/data/                             # Persistent storage volume (mounted at runtime)
└── workspaces/                    # Sample workspaces (created by start_server.sh)
    ├── tutorial/
    │   ├── tutorial.ipynb         # Copied from samples/, preserves user edits
    │   └── ...                    # User-generated data, models, outputs
    ├── locomotion/
    │   ├── locomotion.ipynb
    │   ├── checkpoints/           # Training checkpoints
    │   └── ...
    ├── manipulation/
    │   ├── manipulation.ipynb
    │   └── ...
    └── opentrack/
        ├── opentrack.ipynb
        ├── datasets/              # Downloaded mocap data
        ├── models/                # Trained models
        └── videos/                # Generated videos
```

## Performance Notes

- **Physics simulation**: Can achieve 50,000+ Hz on a single GPU with JAX/MJX (much faster than rendering)
- **Rendering**: Typically 30-60 Hz, much slower than physics
- **Training times** (on RTX 4090 / L40S):
  - Simple manipulation: 3 minutes
  - Quadrupedal joystick: 7 minutes
  - Bipedal locomotion: 17 minutes
  - Dexterous manipulation: 33 minutes
- **Brax parallelization**: Uses thousands of parallel environments for fast training
- **Checkpointing**: Critical for curriculum learning and fine-tuning

## Common Patterns

### Visualization Options

```python
scene_option = mujoco.MjvOption()
scene_option.flags[mujoco.mjtVisFlag.mjVIS_JOINT] = True         # Show joints
scene_option.flags[mujoco.mjtVisFlag.mjVIS_CONTACTPOINT] = True  # Show contacts
scene_option.flags[mujoco.mjtVisFlag.mjVIS_CONTACTFORCE] = True  # Show forces
scene_option.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True   # Transparency
scene_option.flags[mujoco.mjtVisFlag.mjVIS_PERTFORCE] = True     # Show perturbations
```

### Named Access Pattern

```python
# Instead of using indices
model.geom_rgba[geom_id, :]

# Use named access
model.geom('green_sphere').rgba
data.geom('box').xpos
data.joint('swing').qpos
data.sensor('accelerometer').data
```

### Rendering Modes

- **RGB rendering**: `renderer.render()` - returns pixels
- **Depth rendering**: `renderer.enable_depth_rendering()` then `renderer.render()`
- **Segmentation**: `renderer.enable_segmentation_rendering()` - returns object IDs and types

## Important Notes

- This is designed for Hugging Face Spaces with GPU instances (NVIDIA L40S or similar)
- All training uses JAX/Brax for massive parallelization across thousands of environments
- Policies are typically saved using Orbax checkpointing for fine-tuning
- Domain randomization is critical for sim-to-real transfer
- The environment supports multiple RL algorithms (PPO, SAC) through Brax
- Asymmetric actor-critic (different observations for policy and value function) is commonly used
Dockerfile
CHANGED
@@ -167,11 +167,17 @@ RUN --mount=target=requirements.txt,source=requirements.txt \
 # Copy the current directory contents into the container at $HOME/app setting the owner to the user
 COPY --chown=user . $HOME/app
 
+# Set up OpenTrack (clone repo, install dependencies)
+RUN chmod +x $HOME/app/samples/opentrack/setup.sh && \
+    bash $HOME/app/samples/opentrack/setup.sh
+
 RUN chmod +x start_server.sh
 
 COPY --chown=user login.html /home/user/miniconda/lib/python3.13/site-packages/jupyter_server/templates/login.html
 
-COPY --chown=user
+# Note: samples/ are already copied via "COPY --chown=user . $HOME/app" above
+# They will be copied to /data/workspaces/ at runtime by start_server.sh
+# (We can't copy to /data during build because the volume is only mounted at runtime)
 
 ENV PYTHONUNBUFFERED=1 \
     GRADIO_ALLOW_FLAGGING=never \
@@ -179,6 +185,8 @@ ENV PYTHONUNBUFFERED=1 \
     GRADIO_SERVER_NAME=0.0.0.0 \
     GRADIO_THEME=huggingface \
     SYSTEM=spaces \
-    SHELL=/bin/bash
+    SHELL=/bin/bash \
+    JUPYTERLAB_WORKSPACES_DIR=/data/.jupyter/workspaces \
+    JUPYTERLAB_SETTINGS_DIR=/data/.jupyter/settings
 
 CMD ["./start_server.sh"]
samples/{locomotion.ipynb → locomotion/locomotion.ipynb}
RENAMED
File without changes
samples/{manipulation.ipynb → manipulation/manipulation.ipynb}
RENAMED
File without changes
samples/opentrack/init_opentrack.sh
ADDED
@@ -0,0 +1,117 @@
#!/bin/bash
# OpenTrack runtime initialization script
# This script downloads datasets and sets up symlinks at container startup

set -e  # Exit on error

echo "🤖 Initializing OpenTrack workspace..."
echo "======================================"

# Define paths
WORKSPACE="/data/workspaces/opentrack"
DATASETS_DIR="$WORKSPACE/datasets"
MODELS_DIR="$WORKSPACE/models"
VIDEOS_DIR="$WORKSPACE/videos"
OPENTRACK_REPO="$HOME/OpenTrack"

# Create workspace directories if they don't exist
mkdir -p "$DATASETS_DIR"
mkdir -p "$MODELS_DIR"
mkdir -p "$VIDEOS_DIR"
echo "✅ Workspace directories ready"

# Download mocap datasets if not already present
MOCAP_DIR="$DATASETS_DIR/lafan1/UnitreeG1"
mkdir -p "$MOCAP_DIR"

if [ -z "$(ls -A "$MOCAP_DIR" 2>/dev/null)" ]; then
    echo ""
    echo "📥 Downloading motion capture datasets..."
    echo "   This may take a few minutes on first run..."

    # Use Python with huggingface_hub to download. The heredoc is quoted, so
    # shell variables are not expanded inside it; pass DATASETS_DIR through
    # the environment instead of interpolating it into the Python source.
    DATASETS_DIR="$DATASETS_DIR" python3 << 'EOF'
import os
import sys
from pathlib import Path

try:
    from huggingface_hub import snapshot_download

    datasets_dir = Path(os.environ["DATASETS_DIR"])

    print("   Downloading from robfiras/loco-mujoco-datasets...")
    snapshot_path = snapshot_download(
        repo_id="robfiras/loco-mujoco-datasets",
        repo_type="dataset",
        allow_patterns="Lafan1/mocap/UnitreeG1/*.npz",
        local_dir=str(datasets_dir),
        local_dir_use_symlinks=False
    )

    # Count downloaded files
    mocap_dir = datasets_dir / "lafan1" / "UnitreeG1"
    npz_files = list(mocap_dir.glob("*.npz"))
    print(f"   ✅ Downloaded {len(npz_files)} motion capture files")

except ImportError:
    print("   ⚠️ huggingface_hub not available, skipping dataset download", file=sys.stderr)
    print("   You can download datasets manually from:", file=sys.stderr)
    print("   https://huggingface.co/datasets/robfiras/loco-mujoco-datasets", file=sys.stderr)
except Exception as e:
    print(f"   ⚠️ Error downloading datasets: {e}", file=sys.stderr)
    print("   You may need to run: huggingface-cli login", file=sys.stderr)
EOF

    echo "✅ Dataset download completed"
else
    # Count existing files
    FILE_COUNT=$(find "$MOCAP_DIR" -name "*.npz" | wc -l | tr -d ' ')
    echo "✅ Found existing mocap datasets ($FILE_COUNT files)"
fi

# Set up symlinks for OpenTrack
if [ -d "$OPENTRACK_REPO" ]; then
    echo ""
    echo "🔗 Setting up OpenTrack symlinks..."

    # Create symlink from OpenTrack/data/mocap to datasets
    OPENTRACK_DATA_DIR="$OPENTRACK_REPO/data/mocap"
    mkdir -p "$(dirname "$OPENTRACK_DATA_DIR")"

    if [ -L "$OPENTRACK_DATA_DIR" ]; then
        rm "$OPENTRACK_DATA_DIR"
    elif [ -d "$OPENTRACK_DATA_DIR" ]; then
        rm -rf "$OPENTRACK_DATA_DIR"
    fi

    ln -s "$DATASETS_DIR" "$OPENTRACK_DATA_DIR"
    echo "   ✅ $OPENTRACK_DATA_DIR -> $DATASETS_DIR"

    # Create symlink from OpenTrack/logs to models
    OPENTRACK_LOGS_DIR="$OPENTRACK_REPO/logs"

    if [ -L "$OPENTRACK_LOGS_DIR" ]; then
        rm "$OPENTRACK_LOGS_DIR"
    elif [ -d "$OPENTRACK_LOGS_DIR" ]; then
        # Move existing logs to MODELS_DIR
        if [ "$(ls -A "$OPENTRACK_LOGS_DIR" 2>/dev/null)" ]; then
            echo "   Moving existing logs to $MODELS_DIR..."
            mv "$OPENTRACK_LOGS_DIR"/* "$MODELS_DIR/" 2>/dev/null || true
        fi
        rm -rf "$OPENTRACK_LOGS_DIR"
    fi

    ln -s "$MODELS_DIR" "$OPENTRACK_LOGS_DIR"
    echo "   ✅ $OPENTRACK_LOGS_DIR -> $MODELS_DIR"
else
    echo "⚠️ OpenTrack repository not found at $OPENTRACK_REPO"
fi

echo ""
echo "======================================"
echo "✅ OpenTrack workspace initialized!"
echo ""
echo "   Datasets: $DATASETS_DIR"
echo "   Models:   $MODELS_DIR"
echo "   Videos:   $VIDEOS_DIR"
echo ""
samples/{opentrack.ipynb → opentrack/opentrack.ipynb}
RENAMED
File without changes
samples/opentrack/opentrack_quickstart.ipynb
ADDED
@@ -0,0 +1,381 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# OpenTrack Quickstart\n",
    "\n",
    "This simplified notebook lets you jump straight into training humanoid motion tracking policies with OpenTrack!\n",
    "\n",
    "**Everything is already set up:**\n",
    "- ✅ OpenTrack repository cloned\n",
    "- ✅ PyTorch and dependencies installed\n",
    "- ✅ Motion capture datasets downloaded\n",
    "- ✅ Workspace directories created\n",
    "\n",
    "Just run the cells and enjoy! 🚀"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "First, let's set up our workspace paths and helper functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import subprocess\n",
    "import time\n",
    "from pathlib import Path\n",
    "from IPython.display import Video, display, HTML\n",
    "\n",
    "# Workspace paths (already set up by container initialization)\n",
    "WORKSPACE = Path(\"/data/workspaces/opentrack\")\n",
    "DATASETS_DIR = WORKSPACE / \"datasets\"\n",
    "MODELS_DIR = WORKSPACE / \"models\"\n",
    "VIDEOS_DIR = WORKSPACE / \"videos\"\n",
    "OPENTRACK_REPO = Path.home() / \"OpenTrack\"\n",
    "\n",
    "# Change to OpenTrack directory\n",
    "os.chdir(OPENTRACK_REPO)\n",
    "\n",
    "print(\"📁 Workspace directories:\")\n",
    "print(f\"   Datasets: {DATASETS_DIR}\")\n",
    "print(f\"   Models:   {MODELS_DIR}\")\n",
    "print(f\"   Videos:   {VIDEOS_DIR}\")\n",
    "print(f\"\\n✅ Working directory: {os.getcwd()}\")\n",
    "\n",
    "# Check if datasets exist\n",
    "mocap_files = list((DATASETS_DIR / \"lafan1\" / \"UnitreeG1\").glob(\"*.npz\"))\n",
    "print(f\"\\n✅ Found {len(mocap_files)} motion capture files\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Helper function to run OpenTrack commands\n",
    "def run_opentrack_command(cmd_args, description=\"Running command\"):\n",
    "    \"\"\"Run an OpenTrack command and display output\"\"\"\n",
    "    print(f\"\\n{'='*60}\")\n",
    "    print(f\"🚀 {description}\")\n",
    "    print(f\"   Command: python {' '.join(cmd_args)}\")\n",
    "    print(f\"{'='*60}\\n\")\n",
    "\n",
    "    result = subprocess.run(\n",
    "        ['python'] + cmd_args,\n",
    "        capture_output=False,\n",
    "        text=True\n",
    "    )\n",
    "\n",
    "    if result.returncode == 0:\n",
    "        print(f\"\\n✅ {description} completed successfully!\")\n",
    "    else:\n",
    "        print(f\"\\n⚠️ {description} exited with code {result.returncode}\")\n",
    "\n",
    "    return result.returncode\n",
    "\n",
    "# Helper to find latest experiment\n",
    "def find_latest_experiment(pattern=''):\n",
    "    \"\"\"Find the most recent experiment folder\"\"\"\n",
    "    experiments = [d for d in MODELS_DIR.iterdir() if d.is_dir() and pattern in d.name]\n",
    "    if not experiments:\n",
    "        return None\n",
    "    return sorted(experiments, key=lambda x: x.stat().st_mtime, reverse=True)[0].name\n",
    "\n",
    "print(\"✅ Helper functions loaded\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quick Training (Debug Mode)\n",
    "\n",
    "Let's train a quick policy in debug mode to verify everything works. This takes just a few minutes:\n",
|
| 107 |
+
"\n",
|
| 108 |
+
"**Parameters:**\n",
|
| 109 |
+
"- `--exp_name debug` - Name for this experiment\n",
|
| 110 |
+
"- `--terrain_type flat_terrain` - Train on flat ground"
|
| 111 |
+
]
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"cell_type": "code",
|
| 115 |
+
"execution_count": null,
|
| 116 |
+
"metadata": {},
|
| 117 |
+
"outputs": [],
|
| 118 |
+
"source": [
|
| 119 |
+
"%%time\n",
|
| 120 |
+
"\n",
|
| 121 |
+
"run_opentrack_command(\n",
|
| 122 |
+
" ['train_policy.py', '--exp_name', 'quickstart_debug', '--terrain_type', 'flat_terrain'],\n",
|
| 123 |
+
" description=\"Training OpenTrack policy (debug mode)\"\n",
|
| 124 |
+
")\n",
|
| 125 |
+
"\n",
|
| 126 |
+
"# Find the experiment\n",
|
| 127 |
+
"exp_folder = find_latest_experiment('quickstart_debug')\n",
|
| 128 |
+
"if exp_folder:\n",
|
| 129 |
+
" print(f\"\\nπ¦ Experiment saved: {exp_folder}\")\n",
|
| 130 |
+
" print(f\" Location: {MODELS_DIR / exp_folder}\")"
|
| 131 |
+
]
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"cell_type": "markdown",
|
| 135 |
+
"metadata": {},
|
| 136 |
+
"source": [
|
| 137 |
+
"## Convert Checkpoint (Brax β PyTorch)\n",
|
| 138 |
+
"\n",
|
| 139 |
+
"OpenTrack trains using Brax (JAX-based), but we need to convert the checkpoint to PyTorch for deployment:"
|
| 140 |
+
]
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"cell_type": "code",
|
| 144 |
+
"execution_count": null,
|
| 145 |
+
"metadata": {},
|
| 146 |
+
"outputs": [],
|
| 147 |
+
"source": [
|
| 148 |
+
"exp_folder = find_latest_experiment('quickstart_debug')\n",
|
| 149 |
+
"\n",
|
| 150 |
+
"if exp_folder:\n",
|
| 151 |
+
" run_opentrack_command(\n",
|
| 152 |
+
" ['brax2torch.py', '--exp_name', exp_folder],\n",
|
| 153 |
+
" description=\"Converting Brax checkpoint to PyTorch\"\n",
|
| 154 |
+
" )\n",
|
| 155 |
+
"else:\n",
|
| 156 |
+
" print(\"β οΈ No experiment found. Please run training first.\")"
|
| 157 |
+
]
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"cell_type": "markdown",
|
| 161 |
+
"metadata": {},
|
| 162 |
+
"source": [
|
| 163 |
+
"## Generate Videos\n",
|
| 164 |
+
"\n",
|
| 165 |
+
"Now let's visualize the policy by generating videos using MuJoCo's headless renderer:"
|
| 166 |
+
]
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"cell_type": "code",
|
| 170 |
+
"execution_count": null,
|
| 171 |
+
"metadata": {},
|
| 172 |
+
"outputs": [],
|
| 173 |
+
"source": [
|
| 174 |
+
"exp_folder = find_latest_experiment('quickstart_debug')\n",
|
| 175 |
+
"\n",
|
| 176 |
+
"if exp_folder:\n",
|
| 177 |
+
" print(f\"π¬ Generating videos for experiment: {exp_folder}\")\n",
|
| 178 |
+
" print(f\" Videos will be saved to: {VIDEOS_DIR}\\n\")\n",
|
| 179 |
+
" \n",
|
| 180 |
+
" run_opentrack_command(\n",
|
| 181 |
+
" ['play_policy.py', '--exp_name', exp_folder, '--use_renderer'],\n",
|
| 182 |
+
" description=\"Generating videos with MuJoCo renderer\"\n",
|
| 183 |
+
" )\n",
|
| 184 |
+
" \n",
|
| 185 |
+
" # Give it a moment to finish writing files\n",
|
| 186 |
+
" time.sleep(2)\n",
|
| 187 |
+
" \n",
|
| 188 |
+
" # Find generated videos\n",
|
| 189 |
+
" videos = list(VIDEOS_DIR.glob(\"*.mp4\")) + list(VIDEOS_DIR.glob(\"*.gif\"))\n",
|
| 190 |
+
" \n",
|
| 191 |
+
" if videos:\n",
|
| 192 |
+
" print(f\"\\nβ
Generated {len(videos)} video(s):\")\n",
|
| 193 |
+
" for v in sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True):\n",
|
| 194 |
+
" print(f\" - {v.name}\")\n",
|
| 195 |
+
" else:\n",
|
| 196 |
+
" print(\"\\nβ οΈ No videos found. They might be in the experiment folder.\")\n",
|
| 197 |
+
"else:\n",
|
| 198 |
+
" print(\"β οΈ No experiment found. Please run training first.\")"
|
| 199 |
+
]
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"cell_type": "markdown",
|
| 203 |
+
"metadata": {},
|
| 204 |
+
"source": [
|
| 205 |
+
"## Display Videos\n",
|
| 206 |
+
"\n",
|
| 207 |
+
"Let's watch the trained policy in action:"
|
| 208 |
+
]
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"cell_type": "code",
|
| 212 |
+
"execution_count": null,
|
| 213 |
+
"metadata": {},
|
| 214 |
+
"outputs": [],
|
| 215 |
+
"source": [
|
| 216 |
+
"# Find all videos\n",
|
| 217 |
+
"videos = list(VIDEOS_DIR.glob(\"*.mp4\")) + list(VIDEOS_DIR.glob(\"*.gif\"))\n",
|
| 218 |
+
"videos = sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True)\n",
|
| 219 |
+
"\n",
|
| 220 |
+
"if not videos:\n",
|
| 221 |
+
" # Search in experiment folders too\n",
|
| 222 |
+
" videos = list(MODELS_DIR.glob(\"**/*.mp4\")) + list(MODELS_DIR.glob(\"**/*.gif\"))\n",
|
| 223 |
+
" videos = sorted(videos, key=lambda x: x.stat().st_mtime, reverse=True)\n",
|
| 224 |
+
"\n",
|
| 225 |
+
"if videos:\n",
|
| 226 |
+
" print(f\"π₯ Found {len(videos)} video(s). Displaying...\\n\")\n",
|
| 227 |
+
" \n",
|
| 228 |
+
" for i, video_path in enumerate(videos[:3]): # Show up to 3 most recent\n",
|
| 229 |
+
" print(f\"\\n{'='*60}\")\n",
|
| 230 |
+
" print(f\"Video {i+1}: {video_path.name}\")\n",
|
| 231 |
+
" print(f\"{'='*60}\")\n",
|
| 232 |
+
" \n",
|
| 233 |
+
" try:\n",
|
| 234 |
+
" if video_path.suffix == '.mp4':\n",
|
| 235 |
+
" display(Video(str(video_path), width=800, embed=True))\n",
|
| 236 |
+
" elif video_path.suffix == '.gif':\n",
|
| 237 |
+
" display(HTML(f'<img src=\"{video_path}\" width=\"800\">'))\n",
|
| 238 |
+
" except Exception as e:\n",
|
| 239 |
+
" print(f\"β οΈ Error displaying video: {e}\")\n",
|
| 240 |
+
" print(f\" You can access it at: {video_path}\")\n",
|
| 241 |
+
"else:\n",
|
| 242 |
+
" print(\"β οΈ No videos found.\")\n",
|
| 243 |
+
" print(\"\\nMake sure you:\")\n",
|
| 244 |
+
" print(\" 1. Trained a policy\")\n",
|
| 245 |
+
" print(\" 2. Converted the checkpoint\")\n",
|
| 246 |
+
" print(\" 3. Generated videos\")"
|
| 247 |
+
]
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"cell_type": "markdown",
|
| 251 |
+
"metadata": {},
|
| 252 |
+
"source": [
|
| 253 |
+
"## Next Steps\n",
|
| 254 |
+
"\n",
|
| 255 |
+
"### Train on Rough Terrain\n",
|
| 256 |
+
"\n",
|
| 257 |
+
"Generate terrain and train a more robust policy:"
|
| 258 |
+
]
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"cell_type": "code",
|
| 262 |
+
"execution_count": null,
|
| 263 |
+
"metadata": {},
|
| 264 |
+
"outputs": [],
|
| 265 |
+
"source": [
|
| 266 |
+
"# Generate rough terrain\n",
|
| 267 |
+
"run_opentrack_command(\n",
|
| 268 |
+
" ['generate_terrain.py'],\n",
|
| 269 |
+
" description=\"Generating rough terrain\"\n",
|
| 270 |
+
")\n",
|
| 271 |
+
"\n",
|
| 272 |
+
"print(\"\\nβ Terrain generated!\")\n",
|
| 273 |
+
"print(\" You can now train with: --terrain_type rough_terrain\")"
|
| 274 |
+
]
|
| 275 |
+
},
|
| 276 |
+
{
|
| 277 |
+
"cell_type": "code",
|
| 278 |
+
"execution_count": null,
|
| 279 |
+
"metadata": {},
|
| 280 |
+
"outputs": [],
|
| 281 |
+
"source": [
|
| 282 |
+
"# Train on rough terrain\n",
|
| 283 |
+
"run_opentrack_command(\n",
|
| 284 |
+
" ['train_policy.py', '--exp_name', 'rough_terrain', '--terrain_type', 'rough_terrain'],\n",
|
| 285 |
+
" description=\"Training on rough terrain\"\n",
|
| 286 |
+
")"
|
| 287 |
+
]
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"cell_type": "markdown",
|
| 291 |
+
"metadata": {},
|
| 292 |
+
"source": [
|
| 293 |
+
"### Full Training (Longer, Better Results)\n",
|
| 294 |
+
"\n",
|
| 295 |
+
"For production-quality results, remove the debug flag and train for longer:"
|
| 296 |
+
]
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"cell_type": "code",
|
| 300 |
+
"execution_count": null,
|
| 301 |
+
"metadata": {},
|
| 302 |
+
"outputs": [],
|
| 303 |
+
"source": [
|
| 304 |
+
"# This will take significantly longer but produce better results\n",
|
| 305 |
+
"# run_opentrack_command(\n",
|
| 306 |
+
"# ['train_policy.py', '--exp_name', 'full_training', '--terrain_type', 'flat_terrain'],\n",
|
| 307 |
+
"# description=\"Full training (this takes a while!)\"\n",
|
| 308 |
+
"# )\n",
|
| 309 |
+
"\n",
|
| 310 |
+
"print(\"Uncomment the code above to run full training (takes 20-60 minutes on GPU)\")"
|
| 311 |
+
]
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"cell_type": "markdown",
|
| 315 |
+
"metadata": {},
|
| 316 |
+
"source": [
|
| 317 |
+
"### Play Reference Motion\n",
|
| 318 |
+
"\n",
|
| 319 |
+
"Visualize the original mocap data alongside the policy:"
|
| 320 |
+
]
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"cell_type": "code",
|
| 324 |
+
"execution_count": null,
|
| 325 |
+
"metadata": {},
|
| 326 |
+
"outputs": [],
|
| 327 |
+
"source": [
|
| 328 |
+
"exp_folder = find_latest_experiment()\n",
|
| 329 |
+
"\n",
|
| 330 |
+
"if exp_folder:\n",
|
| 331 |
+
" run_opentrack_command(\n",
|
| 332 |
+
" ['play_policy.py', '--exp_name', exp_folder, '--use_renderer', '--play_ref_motion'],\n",
|
| 333 |
+
" description=\"Generating videos with reference motion comparison\"\n",
|
| 334 |
+
" )\n",
|
| 335 |
+
"else:\n",
|
| 336 |
+
" print(\"β οΈ No experiment found.\")"
|
| 337 |
+
]
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"cell_type": "markdown",
|
| 341 |
+
"metadata": {},
|
| 342 |
+
"source": [
|
| 343 |
+
"## Summary\n",
|
| 344 |
+
"\n",
|
| 345 |
+
"**What we did:**\n",
|
| 346 |
+
"1. β
Trained a humanoid motion tracking policy using OpenTrack\n",
|
| 347 |
+
"2. β
Converted the checkpoint from Brax to PyTorch\n",
|
| 348 |
+
"3. β
Generated videos of the policy in action\n",
|
| 349 |
+
"4. β
Visualized the results\n",
|
| 350 |
+
"\n",
|
| 351 |
+
"**Project Structure:**\n",
|
| 352 |
+
"```\n",
|
| 353 |
+
"/data/workspaces/opentrack/\n",
|
| 354 |
+
"βββ datasets/ # Motion capture data\n",
|
| 355 |
+
"β βββ lafan1/UnitreeG1/*.npz\n",
|
| 356 |
+
"βββ models/ # Trained checkpoints\n",
|
| 357 |
+
"β βββ <timestamp>_<exp_name>/\n",
|
| 358 |
+
"βββ videos/ # Generated videos\n",
|
| 359 |
+
" βββ *.mp4, *.gif\n",
|
| 360 |
+
"```\n",
|
| 361 |
+
"\n",
|
| 362 |
+
"**All data persists** across container restarts, so you can continue training or generate new videos anytime!\n",
|
| 363 |
+
"\n",
|
| 364 |
+
"For more advanced usage, check out the full `opentrack.ipynb` notebook."
|
| 365 |
+
]
|
| 366 |
+
}
|
| 367 |
+
],
|
| 368 |
+
"metadata": {
|
| 369 |
+
"kernelspec": {
|
| 370 |
+
"display_name": "Python 3",
|
| 371 |
+
"language": "python",
|
| 372 |
+
"name": "python3"
|
| 373 |
+
},
|
| 374 |
+
"language_info": {
|
| 375 |
+
"name": "python",
|
| 376 |
+
"version": "3.13.0"
|
| 377 |
+
}
|
| 378 |
+
},
|
| 379 |
+
"nbformat": 4,
|
| 380 |
+
"nbformat_minor": 4
|
| 381 |
+
}
|
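The notebook's `find_latest_experiment` helper picks the experiment folder with the newest modification time. A standalone sketch of that lookup, testable outside the notebook (the function body mirrors the cell above; the throwaway directory names and `models_dir` parameter are illustrative additions, not part of the commit):

```python
import os
import tempfile
from pathlib import Path

def find_latest_experiment(models_dir: Path, pattern: str = ''):
    """Return the name of the most recently modified folder matching pattern."""
    experiments = [d for d in models_dir.iterdir() if d.is_dir() and pattern in d.name]
    if not experiments:
        return None
    # max over st_mtime is equivalent to sorting descending and taking [0]
    return max(experiments, key=lambda d: d.stat().st_mtime).name

# Quick self-check with throwaway directories and explicit mtimes
with tempfile.TemporaryDirectory() as tmp:
    models = Path(tmp)
    for name, mtime in [("20240101_debug", 1000), ("20240102_debug", 2000)]:
        (models / name).mkdir()
        os.utime(models / name, (mtime, mtime))  # pin mtime so the test is deterministic
    print(find_latest_experiment(models, "debug"))  # → 20240102_debug
```

Taking the folder directly from `MODELS_DIR` (as the notebook does) rather than a hard-coded name is what lets later cells keep working after any number of training runs.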
samples/opentrack/setup.sh
ADDED
@@ -0,0 +1,49 @@
+#!/bin/bash
+# OpenTrack build-time setup script
+# This script clones OpenTrack and installs its dependencies at Docker build time
+
+set -e  # Exit on error
+
+echo "🤖 Setting up OpenTrack at build time..."
+echo "=========================================="
+
+# Clone OpenTrack repository
+OPENTRACK_DIR="$HOME/OpenTrack"
+if [ ! -d "$OPENTRACK_DIR" ]; then
+    echo "📦 Cloning OpenTrack repository..."
+    git clone https://github.com/GalaxyGeneralRobotics/OpenTrack.git "$OPENTRACK_DIR"
+    echo "✅ Repository cloned to $OPENTRACK_DIR"
+else
+    echo "✅ OpenTrack repository already exists"
+fi
+
+# Install PyTorch (CPU version for compatibility)
+echo ""
+echo "🔥 Installing PyTorch..."
+pip install --no-cache-dir \
+    torch==2.5.1 \
+    torchvision==0.20.1 \
+    torchaudio==2.5.1 \
+    --index-url https://download.pytorch.org/whl/cpu
+echo "✅ PyTorch installed"
+
+# Install OpenTrack requirements
+echo ""
+echo "📋 Installing OpenTrack requirements..."
+if [ -f "$OPENTRACK_DIR/requirements.txt" ]; then
+    pip install --no-cache-dir -r "$OPENTRACK_DIR/requirements.txt"
+    echo "✅ OpenTrack requirements installed"
+else
+    echo "⚠️ Warning: requirements.txt not found in OpenTrack repo"
+fi
+
+# Install additional packages for video handling
+echo ""
+echo "🎬 Installing video handling packages..."
+pip install --no-cache-dir imageio imageio-ffmpeg
+echo "✅ Video packages installed"
+
+echo ""
+echo "=========================================="
+echo "✅ OpenTrack build-time setup complete!"
+echo ""
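setup.sh pins exact versions (`torch==2.5.1` etc.) so the image builds reproducibly. A small hedged sketch of how a build could sanity-check such an `==` pin against what actually got installed, using only the standard library (`check_pin` is a hypothetical helper for illustration, not part of this commit):

```python
from importlib import metadata

def check_pin(pin: str) -> bool:
    """Return True iff the installed distribution exactly matches a 'name==version' pin."""
    name, sep, wanted = pin.partition("==")
    if sep != "==":
        raise ValueError(f"not an exact pin: {pin!r}")
    try:
        return metadata.version(name) == wanted
    except metadata.PackageNotFoundError:
        # A missing package fails the check instead of crashing the probe
        return False

print(check_pin("definitely-not-installed-xyz==1.0"))  # → False
```

Run after `pip install`, a loop of `check_pin` calls over the pinned requirements would catch a silently upgraded or missing wheel before the image ships.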
samples/{tutorial.ipynb → tutorial/tutorial.ipynb}
RENAMED
File without changes
start_server.sh
CHANGED
@@ -19,6 +19,68 @@ python init_gpu.py
 # this will download stuff used by Mujoco (the collection of models)
 python init_mujoco.py
 
+# Copy sample notebooks to persistent storage
+echo "📁 Setting up sample workspaces in /data/workspaces..."
+
+# Create the workspaces directory if it doesn't exist
+mkdir -p /data/workspaces
+
+# Create JupyterLab workspace state directory (for UI preferences, layout, etc.)
+mkdir -p /data/.jupyter/workspaces
+mkdir -p /data/.jupyter/settings
+echo "✅ JupyterLab workspace state directory: /data/.jupyter/workspaces"
+echo "✅ JupyterLab settings directory: /data/.jupyter/settings"
+
+# Loop through each sample directory
+for sample_dir in $HOME/app/samples/*/; do
+    # Get the sample name (e.g., "locomotion" from "samples/locomotion/")
+    sample_name=$(basename "$sample_dir")
+
+    # Create the workspace directory if it doesn't exist
+    workspace_dir="/data/workspaces/$sample_name"
+    if [ ! -d "$workspace_dir" ]; then
+        echo "  Creating workspace: $workspace_dir"
+        mkdir -p "$workspace_dir"
+    else
+        echo "  Workspace already exists: $workspace_dir"
+    fi
+
+    # Copy the .ipynb file only if it doesn't already exist (to preserve user changes)
+    ipynb_file="$sample_dir$sample_name.ipynb"
+    dest_ipynb="$workspace_dir/$sample_name.ipynb"
+
+    if [ -f "$ipynb_file" ]; then
+        if [ ! -f "$dest_ipynb" ]; then
+            echo "  Copying: $sample_name.ipynb → $workspace_dir/"
+            cp "$ipynb_file" "$dest_ipynb"
+        else
+            echo "  Preserving existing: $dest_ipynb (user may have made changes)"
+        fi
+    fi
+
+    # Copy any other files from the sample directory (excluding .ipynb files)
+    # This allows samples to include datasets, scripts, etc.
+    for file in "$sample_dir"*; do
+        filename=$(basename "$file")
+        # Skip if it's the .ipynb file (already handled) or if it's a directory
+        if [ "$filename" != "$sample_name.ipynb" ] && [ -f "$file" ]; then
+            dest_file="$workspace_dir/$filename"
+            if [ ! -f "$dest_file" ]; then
+                echo "  Copying additional file: $filename → $workspace_dir/"
+                cp "$file" "$dest_file"
+            fi
+        fi
+    done
+done
+
+echo "✅ Sample workspaces ready!"
+echo ""
+
+# Initialize OpenTrack (download datasets, create symlinks)
+if [ -f "$HOME/app/samples/opentrack/init_opentrack.sh" ]; then
+    bash "$HOME/app/samples/opentrack/init_opentrack.sh"
+fi
+
 jupyter labextension disable "@jupyterlab/apputils-extension:announcements"
 
 jupyter-lab \
@@ -33,4 +95,4 @@ jupyter-lab \
   --LabApp.news_url=None \
   --LabApp.check_for_updates_class="jupyterlab.NeverCheckForUpdate" \
   --notebook-dir=$NOTEBOOK_DIR \
-  --ServerApp.default_url="/
+  --ServerApp.default_url="/data/workspaces/opentrack/opentrack_quickstart.ipynb"