AdilZtn committed on
Commit 63bc278 · verified · 1 Parent(s): 994aee0

Push model using huggingface_hub.

README.md ADDED
---
library_name: lerobot
tags:
- robotics
- lerobot
- safetensors
pipeline_tag: robotics
---

# RobotProcessor

## Overview

RobotProcessor is a composable, debuggable post-processing pipeline for robot transitions in the LeRobot framework. It orchestrates an ordered collection of small, functional transforms (steps) that are executed left-to-right on each incoming `EnvTransition`.

## Architecture

The RobotProcessor provides a modular architecture for processing robot environment transitions through a sequence of composable steps. Each step is a callable that accepts a full `EnvTransition` tuple and returns a potentially modified tuple of the same structure.

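The left-to-right execution model can be illustrated with a framework-independent sketch (the step functions and pipeline runner below are stand-ins for illustration, not LeRobot APIs):

```python
from functools import reduce

# A transition is a 7-tuple: (observation, action, reward, done,
# truncated, info, complementary_data). Each step maps tuple -> tuple.
def scale_observation(transition):
    obs, action, reward, done, truncated, info, comp = transition
    return (obs * 2.0, action, reward, done, truncated, info, comp)

def clip_reward(transition):
    obs, action, reward, done, truncated, info, comp = transition
    return (obs, action, min(reward, 1.0), done, truncated, info, comp)

def run_pipeline(steps, transition):
    # Apply each step in order, feeding the output of one into the next.
    return reduce(lambda t, step: step(t), steps, transition)

transition = (1.5, None, 3.0, False, False, {}, {})
result = run_pipeline([scale_observation, clip_reward], transition)
print(result[0], result[2])  # 3.0 1.0
```

Because every step preserves the tuple shape, steps can be reordered or removed without changing the calling code.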
### EnvTransition Structure

An `EnvTransition` is a 7-tuple containing:

1. **observation**: Current state observation
2. **action**: Action taken (can be None)
3. **reward**: Reward received (float or None)
4. **done**: Episode termination flag (bool or None)
5. **truncated**: Episode truncation flag (bool or None)
6. **info**: Additional information dictionary
7. **complementary_data**: Extra data dictionary

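When indexing transitions by position, a small helper enum can make the seven slots readable (this `TransitionIndex` is a hypothetical convenience, not part of LeRobot):

```python
from enum import IntEnum

# Hypothetical index helper mirroring the 7-tuple layout listed above.
class TransitionIndex(IntEnum):
    OBSERVATION = 0
    ACTION = 1
    REWARD = 2
    DONE = 3
    TRUNCATED = 4
    INFO = 5
    COMPLEMENTARY_DATA = 6

transition = ({"state": [0.0]}, None, 0.0, False, False, {}, {})
assert len(transition) == len(TransitionIndex)
print(transition[TransitionIndex.REWARD])  # 0.0
```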
## Key Features

- **Composable Pipeline**: Chain multiple processing steps in a specific order
- **State Persistence**: Save and load processor state using SafeTensors format
- **Hugging Face Hub Integration**: Easy sharing and loading via `save_pretrained()` and `from_pretrained()`
- **Debugging Support**: Step-through functionality to inspect intermediate transformations
- **Hook System**: Before/after step hooks for additional processing or monitoring
- **Device Support**: Move tensor states to different devices (CPU/GPU)
- **Performance Profiling**: Built-in profiling to identify bottlenecks

## Installation

Follow the [installation instructions](https://huggingface.co/docs/lerobot/installation) to install the package.

## Usage

### Basic Example

```python
from lerobot.processor.pipeline import RobotProcessor
from your_steps import ObservationNormalizer, VelocityCalculator

# Create a processor with multiple steps
processor = RobotProcessor(
    steps=[
        ObservationNormalizer(mean=0, std=1),
        VelocityCalculator(window_size=5),
    ],
    name="my_robot_processor",
    seed=42,
)

# Process a transition
obs, info = env.reset()
transition = (obs, None, 0.0, False, False, info, {})
processed_transition = processor(transition)

# Extract processed observation
processed_obs = processed_transition[0]
```

### Saving and Loading

```python
# Save locally
processor.save_pretrained("./my_processor")

# Push to Hugging Face Hub
processor.push_to_hub("username/my-robot-processor")

# Load from Hub
loaded_processor = RobotProcessor.from_pretrained("username/my-robot-processor")
```

### Debugging with Step-Through

```python
# Inspect intermediate results
for idx, intermediate_transition in enumerate(processor.step_through(transition)):
    print(f"After step {idx}: {intermediate_transition[0]}")  # Print observation
```

### Using Hooks

```python
# Add monitoring hook
def log_observation(step_idx, transition):
    print(f"Step {step_idx}: obs shape = {transition[0].shape}")
    return None  # Don't modify transition

processor.register_before_step_hook(log_observation)
```

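The underlying hook mechanism can be sketched independently of LeRobot (the runner and hook lists below illustrate the before/after pattern, they are not the library's implementation):

```python
def run_with_hooks(steps, transition, before_hooks=(), after_hooks=()):
    # Call every before-hook, run the step, then call every after-hook.
    # Hooks observe (step_idx, transition) and return None.
    for idx, step in enumerate(steps):
        for hook in before_hooks:
            hook(idx, transition)
        transition = step(transition)
        for hook in after_hooks:
            hook(idx, transition)
    return transition

log = []
double_obs = lambda t: (t[0] * 2,) + t[1:]
result = run_with_hooks(
    [double_obs, double_obs],
    (1, None, 0.0, False, False, {}, {}),
    before_hooks=[lambda i, t: log.append((i, t[0]))],
)
print(log)        # [(0, 1), (1, 2)]
print(result[0])  # 4
```

Because hooks only observe the transition, they are safe for logging and metrics without perturbing the pipeline's output.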
## Creating Custom Steps

To create a custom processor step, implement the `ProcessorStep` protocol:

```python
from lerobot.processor.pipeline import ProcessorStepRegistry, EnvTransition

@ProcessorStepRegistry.register("my_custom_step")
class MyCustomStep:
    def __init__(self, param1=1.0):
        self.param1 = param1
        self.buffer = []

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        obs, action, reward, done, truncated, info, comp_data = transition
        # Process observation
        processed_obs = obs * self.param1
        return (processed_obs, action, reward, done, truncated, info, comp_data)

    def get_config(self) -> dict:
        return {"param1": self.param1}

    def state_dict(self) -> dict:
        # Return only torch.Tensor state
        return {}

    def load_state_dict(self, state: dict) -> None:
        # Load tensor state
        pass

    def reset(self) -> None:
        # Clear buffers at episode boundaries
        self.buffer.clear()
```

## Advanced Features

### Device Management

```python
import torch

# Move all tensor states to GPU
processor = processor.to("cuda")

# Move to specific device
processor = processor.to(torch.device("cuda:1"))
```

### Performance Profiling

```python
# Profile step execution times
profile_results = processor.profile_steps(transition, num_runs=100)
for step_name, time_ms in profile_results.items():
    print(f"{step_name}: {time_ms:.3f} ms")
```

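What a per-step profile reports can be approximated by hand with `time.perf_counter` (a framework-independent sketch; the `profile` helper and `identity` step are illustrative stand-ins, not LeRobot APIs):

```python
import time

def profile(steps, transition, num_runs=100):
    # Average per-step wall-clock time in milliseconds.
    timings = {}
    for step in steps:
        start = time.perf_counter()
        for _ in range(num_runs):
            step(transition)
        timings[step.__name__] = (time.perf_counter() - start) * 1000 / num_runs
    return timings

def identity(transition):
    return transition

results = profile([identity], (0, None, 0.0, False, False, {}, {}))
for step_name, time_ms in results.items():
    print(f"{step_name}: {time_ms:.3f} ms")
```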
### Processor Slicing

```python
# Get a single step
first_step = processor[0]

# Create a sub-processor with steps 1-3
sub_processor = processor[1:4]
```

## Model Card Specifications

- **Pipeline Tag**: robotics
- **Library**: lerobot
- **Format**: safetensors
- **License**: Apache 2.0

## Limitations

- Steps must maintain the 7-tuple structure of EnvTransition
- All tensor state must be separated from configuration for proper serialization
- Steps are executed sequentially (no parallel processing within a single transition)

## Citation

If you use RobotProcessor in your research, please cite:

```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascale, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```
act_preprocessor.json ADDED
{
  "name": "act_preprocessor",
  "seed": null,
  "steps": [
    {
      "registry_name": "normalizer_processor",
      "config": {
        "eps": 1e-08,
        "features": {
          "observation.state": {
            "type": "STATE",
            "shape": [6]
          },
          "observation.images.top": {
            "type": "VISUAL",
            "shape": [3, 480, 640]
          },
          "observation.images.wrist": {
            "type": "VISUAL",
            "shape": [3, 480, 640]
          }
        },
        "norm_map": {
          "VISUAL": "MEAN_STD",
          "STATE": "MEAN_STD",
          "ACTION": "MEAN_STD"
        }
      },
      "state_file": "act_preprocessor_step_0_normalizer_processor.safetensors"
    },
    {
      "registry_name": "normalizer_processor",
      "config": {
        "eps": 1e-08,
        "features": {
          "action": {
            "type": "ACTION",
            "shape": [6]
          }
        },
        "norm_map": {
          "VISUAL": "MEAN_STD",
          "STATE": "MEAN_STD",
          "ACTION": "MEAN_STD"
        }
      },
      "state_file": "act_preprocessor_step_1_normalizer_processor.safetensors"
    }
  ]
}
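The layout of this config can be inspected with plain `json` parsing (a sketch; the string below is a trimmed excerpt of the file above, not the full config):

```python
import json

# Trimmed excerpt of act_preprocessor.json, enough to show the layout.
config_text = """
{
  "name": "act_preprocessor",
  "steps": [
    {
      "registry_name": "normalizer_processor",
      "state_file": "act_preprocessor_step_0_normalizer_processor.safetensors"
    }
  ]
}
"""
config = json.loads(config_text)
for step in config["steps"]:
    # Each entry names a registered step class and its tensor-state file.
    print(step["registry_name"], "->", step["state_file"])
```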
act_preprocessor_step_0_normalizer_processor.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:dded4235e025ad04eb5e66a2b378b5ae7dcdf94f03e27970e78e9f1de628d5b3
size 784
act_preprocessor_step_1_normalizer_processor.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:dded4235e025ad04eb5e66a2b378b5ae7dcdf94f03e27970e78e9f1de628d5b3
size 784