Inference

🤖 Run tasks on a real OpenArmX robot using a trained model.

1. Pre-inference checklist

  • Model dependencies installed, matching the training environment
  • --policy.path points to the pretrained_model directory under your training outputs
  • Bimanual robot and camera pipelines verified working
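
Parts of this checklist can be automated. A minimal pre-flight sketch, assuming a LeRobot-style checkpoint directory that contains a config.json, with MODEL_DIR as a placeholder you set yourself:

```shell
# Pre-flight sketch: verify required tools are on PATH and the checkpoint
# directory looks complete. MODEL_DIR and the expected config.json file are
# assumptions; adjust them to your setup.
preflight() {
  local dir="${MODEL_DIR:-/path/to/pretrained_model}"
  local cmd
  for cmd in ros2 lerobot-record; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "ok: $cmd"
    else
      echo "missing: $cmd"
    fi
  done
  if [ -e "$dir/config.json" ]; then
    echo "ok: config.json"
  else
    echo "missing: config.json in $dir"
  fi
}
preflight
```

Running this before opening the two terminals catches missing environments or a wrong --policy.path early, when it is cheap to fix.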

2. Terminal 1: launch bimanual robot

cd ~/openarmx_ws
source install/setup.bash
ros2 launch openarm_bringup openarm.bimanual.launch.py \
  control_mode:=mit \
  robot_controller:=forward_position_controller \
  use_fake_hardware:=false
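
Before moving to Terminal 2, it helps to confirm that the controllers and camera topics actually came up. A small check sketch: `ros2 control list_controllers` is the standard ros2_control CLI, but the "camera" topic filter below is an assumption you should adapt to your camera namespace:

```shell
# Sanity-check sketch: confirm ros2 is sourced, list the active controllers,
# and look for camera topics. The "camera" grep pattern is an assumption.
check_ros() {
  if ! command -v ros2 >/dev/null 2>&1; then
    echo "ros2 not found; source install/setup.bash first"
    return
  fi
  ros2 control list_controllers
  ros2 topic list | grep -i camera || echo "no camera topics found"
}
check_ros
```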

3. Terminal 2: start inference

lerobot-env

HF_HUB_OFFLINE=1 lerobot-record \
      --robot.type=openarmx_follower_ros2 \
      --robot.skip_send_action=false \
      --dataset.repo_id=local/eval_your_eval_name \
      --dataset.single_task="your_task_name" \
      --dataset.num_episodes=total_eval_episodes \
      --dataset.push_to_hub=false \
      --display_data=true \
      --policy.path="/path/to/pretrained_model"

📌 Common inference parameters

  • --policy.path: pretrained model path
  • --policy.device: compute device (cuda, cpu, or mps)
  • --policy.dtype: numeric precision used during inference
  • --policy.use_amp: automatic mixed precision
  • --policy.compile_model: enable torch.compile
  • --policy.chunk_size: action chunk length
  • --policy.n_obs_steps: observation history steps
  • --dataset.num_episodes: total episode count
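
When combining several of these overrides, it can be convenient to assemble the command as an array and review it before executing. A sketch using the flags listed above; the override values are illustrative placeholders, not recommendations:

```shell
# Assemble the record command as an array so the full invocation can be
# inspected before running. Flag values below are illustrative placeholders.
CMD=(lerobot-record
  --robot.type=openarmx_follower_ros2
  --robot.skip_send_action=false
  --dataset.repo_id=local/eval_your_eval_name
  --dataset.single_task="your_task_name"
  --dataset.push_to_hub=false
  --policy.path="/path/to/pretrained_model"
  --policy.device=cuda
  --policy.use_amp=true
)
echo "${CMD[@]}"                 # review the full command line
# HF_HUB_OFFLINE=1 "${CMD[@]}"   # uncomment to actually run
```

Echoing the array first makes it easy to spot a stale --policy.path or a forgotten skip_send_action before the robot starts moving.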

⚠️ Common issues

  • Robot does not move: confirm --robot.skip_send_action=false is set
  • Model load/runtime error: verify that installed dependencies match those used to train the checkpoint
  • Unstable motion: reduce control speed and validate on short tasks first
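
For the "robot does not move" case, a quick way to narrow things down is to check whether the controller's command topic exists at all. A diagnostic sketch; the default topic name below is a hypothetical placeholder and must be replaced with your controller's actual command topic:

```shell
# Diagnostic sketch: check whether the controller's command topic is visible.
# The default topic name is a hypothetical placeholder; pass your real one.
diagnose() {
  local topic="${1:-/forward_position_controller/commands}"
  if ! command -v ros2 >/dev/null 2>&1; then
    echo "ros2 not found; source install/setup.bash first"
    return
  fi
  if ros2 topic list | grep -qx "$topic"; then
    echo "found $topic; check traffic with: ros2 topic hz $topic"
  else
    echo "missing $topic; is the controller spawned and skip_send_action=false?"
  fi
}
diagnose
```

If the topic exists but carries no traffic, the problem is upstream in the policy loop; if it is missing, the controller was never spawned.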