Multi-GPU Training¶
⚡ Multi-GPU training uses the same lerobot-train arguments as single-GPU training, but the command is launched with torchrun.
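Before picking NUM_GPUS, confirm how many GPUs the machine actually exposes; a quick check (assuming NVIDIA drivers and a CUDA-enabled PyTorch build):

nvidia-smi --list-gpus
# or, closer to what torchrun will actually see:
python -c "import torch; print(torch.cuda.device_count())"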
0. Common setup¶
# activate your LeRobot environment first (assumed here to be a conda env named lerobot-env)
conda activate lerobot-env
# run fully offline against local datasets and cached models
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
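With both offline flags set, every dataset and pretrained checkpoint referenced below must already exist on disk. One way to pre-fetch the checkpoints used in this guide while still online is huggingface-cli, which ships with huggingface_hub:

huggingface-cli download lerobot/smolvla_base
huggingface-cli download lerobot/pi0_base
huggingface-cli download lerobot/pi05_base
huggingface-cli download lerobot/xvla-base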
1. ACT¶
torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=your_absolute_dataset_path \
--policy.type=act \
--policy.push_to_hub=false \
--output_dir=outputs/your_trained_model_name \
--batch_size=batch_size \
--steps=total_steps \
--log_freq=log_interval \
--save_freq=save_interval
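For concreteness, here is the same ACT launch with illustrative values filled in (2 GPUs, per-GPU batch size 8, 100k steps; every number is a placeholder to adapt to your hardware):

torchrun --nproc_per_node=2 \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=/data/lerobot/your_dataset \
--policy.type=act \
--policy.push_to_hub=false \
--output_dir=outputs/act_multigpu \
--batch_size=8 \
--steps=100000 \
--log_freq=200 \
--save_freq=10000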
2. SmolVLA¶
torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=your_absolute_dataset_path \
--dataset.video_backend=pyav \
--policy.path=lerobot/smolvla_base \
--policy.type=smolvla \
--batch_size=batch_size \
--steps=total_steps \
--output_dir=outputs/your_trained_model_name \
--wandb.enable=false \
--log_freq=log_interval \
--save_freq=save_interval
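Long SmolVLA fine-tunes can be resumed from the last saved checkpoint; a sketch, assuming lerobot's default checkpoint layout under output_dir:

torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--config_path=outputs/your_trained_model_name/checkpoints/last/pretrained_model/train_config.json \
--resume=true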
3. Pi0¶
torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=your_absolute_dataset_path \
--dataset.video_backend=pyav \
--policy.type=pi0 \
--policy.pretrained_path=lerobot/pi0_base \
--policy.dtype=bfloat16 \
--policy.gradient_checkpointing=false \
--batch_size=batch_size \
--steps=total_steps \
--output_dir=outputs/your_trained_model_name \
--wandb.enable=false \
--log_freq=log_interval \
--save_freq=save_interval
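--policy.dtype=bfloat16 only helps on GPUs with native bf16 support (NVIDIA Ampere or newer); verify before launching:

python -c "import torch; print(torch.cuda.is_bf16_supported())"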
4. Pi0.5¶
torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=your_absolute_dataset_path \
--dataset.video_backend=pyav \
--policy.type=pi05 \
--policy.pretrained_path=lerobot/pi05_base \
--policy.dtype=bfloat16 \
--policy.gradient_checkpointing=true \
--policy.compile_model=true \
--policy.freeze_vision_encoder=false \
--policy.train_expert_only=false \
--policy.device=cuda \
--batch_size=batch_size \
--steps=total_steps \
--output_dir=outputs/your_trained_model_name \
--wandb.enable=false \
--log_freq=log_interval \
--save_freq=save_interval
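If one machine's GPUs are not enough, the same command scales to several nodes via torchrun's standard rendezvous flags; a sketch for 2 nodes with 8 GPUs each (host and port are placeholders; run the command once on every node):

torchrun --nnodes=2 --nproc_per_node=8 \
--rdzv_backend=c10d \
--rdzv_endpoint=master_host:29400 \
$(which lerobot-train) \
--policy.type=pi05 \
--policy.pretrained_path=lerobot/pi05_base
# ...plus the remaining lerobot-train arguments from the single-node command above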
5. XVLA¶
torchrun --nproc_per_node=NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=local/your_dataset_name \
--dataset.root=your_absolute_dataset_path \
--dataset.video_backend=pyav \
--job_name=xvla_openarmx \
--policy.path=lerobot/xvla-base \
--policy.dtype=bfloat16 \
--policy.action_mode=auto \
--policy.device=cuda \
--policy.freeze_vision_encoder=false \
--policy.freeze_language_encoder=false \
--policy.train_policy_transformer=true \
--policy.train_soft_prompts=true \
--policy.push_to_hub=false \
--policy.empty_cameras=0 \
--policy.num_image_views=3 \
--rename_map='{"observation.images.cam_head": "observation.images.image", "observation.images.cam_right": "observation.images.image2", "observation.images.cam_left": "observation.images.image3"}' \
--batch_size=batch_size \
--steps=total_steps \
--output_dir=outputs/your_trained_model_name \
--wandb.enable=false \
--log_freq=log_interval \
--save_freq=save_interval
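The rename_map above must match the camera keys actually stored in your dataset; a sketch for listing them, assuming the current lerobot package layout:

python -c "
from lerobot.datasets.lerobot_dataset import LeRobotDataset
ds = LeRobotDataset('local/your_dataset_name', root='your_absolute_dataset_path')
print(ds.meta.camera_keys)
"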
🔧 Multi-GPU recommendations¶
- Set --nproc_per_node to the number of available GPUs
- The global batch size is usually single_gpu_batch x num_gpus; scale the learning rate accordingly if needed
- Run a short number of steps first to validate the DDP setup, then launch the full training
- For NCCL errors, check CUDA/NCCL versions and network settings (see the debugging sketch below)
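For the NCCL errors in the last point, verbose logging usually pinpoints the failing collective or network interface; both of these are standard PyTorch/NCCL environment variables:

export NCCL_DEBUG=INFO                 # NCCL topology, transport, and error logs
export TORCH_DISTRIBUTED_DEBUG=DETAIL  # extra consistency checks from torch.distributed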