👁️ LFM2-VL Collection • LFM2-VL is our first series of vision-language models, designed for on-device deployment • 10 items
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy • Paper • arXiv:2510.13778 • Published Oct 15, 2025
ShuaiYang03/instructvla_finetune_v2_xlora_freeze_head_instruction_state • Robotics • Updated Sep 22, 2025
ShuaiYang03/instructvla_finetune_v2_xlora_freeze_head_instruction • Robotics • Updated Sep 22, 2025
ShuaiYang03/instructvla_pretraining_v2_libero_10_wrist-image_aug • Robotics • Updated Sep 18, 2025
ShuaiYang03/instructvla_pretraining_v2_libero_goal_wrist-image_aug • Robotics • Updated Sep 18, 2025
ShuaiYang03/instructvla_pretraining_v2_libero_object_wrist-image_aug • Robotics • Updated Sep 20, 2025
ShuaiYang03/instructvla_pretraining_v2_libero_spatial_wrist-image_aug • Robotics • Updated Sep 18, 2025
InstructVLA Collection • Paper, Data and Checkpoints for "InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation" • 14 items • Updated Sep 17, 2025
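The checkpoints above are hosted as standard Hugging Face Hub model repositories. A minimal sketch for fetching one of them locally, assuming `huggingface_hub` is installed and the repo is public; swap `repo_id` for any of the entries listed in this collection:

```python
# Minimal sketch: download one of the InstructVLA checkpoints listed above.
# Assumes the repo is a standard public Hub model repo; adjust repo_id as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ShuaiYang03/instructvla_pretraining_v2_libero_spatial_wrist-image_aug",
)
print(f"Checkpoint downloaded to: {local_dir}")
```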