What Is Teleoperation — and Why Does It Matter?
Teleoperation is the act of controlling a robot's movements remotely in real time. In the context of robot learning, it serves a specific and critical purpose: it lets humans transfer their own physical intelligence into a form the robot can learn from. When you teleoperate an arm to pick up an object, the arm records exactly how you moved it — every joint angle, velocity, and gripper state, at 50 Hz. That recording becomes a demonstration. Enough demonstrations, and a neural network can learn to reproduce your behavior without any human in the loop.
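To make "a demonstration" concrete, here is a minimal sketch of what recording at a fixed 50 Hz rate might look like. The frame layout and names (`DemoFrame`, `read_state`) are illustrative assumptions, not the actual OpenArm recording format:

```python
import time
from dataclasses import dataclass

# Hypothetical frame layout -- field names are illustrative,
# not the actual OpenArm recording format.
@dataclass
class DemoFrame:
    t: float                    # seconds since recording started
    joint_angles: list[float]   # one value per joint, radians
    joint_velocities: list[float]
    gripper_open: float         # 0.0 = closed, 1.0 = open

def record_demo(read_state, duration_s=2.0, hz=50):
    """Sample the arm state at a fixed rate into a list of frames."""
    frames, dt = [], 1.0 / hz
    t0 = time.monotonic()
    while (now := time.monotonic() - t0) < duration_s:
        angles, vels, grip = read_state()  # query the arm driver
        frames.append(DemoFrame(now, angles, vels, grip))
        time.sleep(dt)
    return frames
```

A real recorder would use the driver's own clock and a hardware-timed loop, but the shape of the data — timestamped joint state plus gripper state, sampled at a constant rate — is the same.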
The quality of your teleoperation directly determines the quality of your policy. Smooth, consistent, deliberate motions produce good training data. Jerky, hesitant, or inconsistent motions confuse the model. This is why you are spending a full unit on teleoperation before recording a single demonstration in Unit 4 — you need to become proficient before the data starts counting.
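One way to make "smooth vs. jerky" measurable is mean squared jerk — the third finite difference of a position trace. This is a common smoothness metric, not part of any official pipeline; a minimal sketch:

```python
def mean_squared_jerk(positions, dt):
    """Mean squared third finite difference of a 1-D position trace.

    Lower values mean smoother motion; hesitant or trembling
    teleoperation shows up as a large jerk score.
    """
    jerk = [
        (positions[i + 3] - 3 * positions[i + 2]
         + 3 * positions[i + 1] - positions[i]) / dt ** 3
        for i in range(len(positions) - 3)
    ]
    return sum(j * j for j in jerk) / len(jerk)
```

Running this over each joint of your recorded trajectories gives you a quick, objective way to compare sessions as you practice.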
Teleoperation Systems Overview
Covers VR teleoperation, haptic feedback, leader-follower architectures, and latency considerations. Read before Unit 3 if you want to understand the full design space. Open in Robotics Library →
Teleoperation Session Setup
The hands-on session setup is documented in detail at hardware/openarm/data-collection. Follow that guide from the "Teleoperation Setup" section. The steps below summarize the flow:
1. **Choose your teleoperation method.** For this path, use the leader-follower method (recommended): a second OpenArm arm acts as the leader, and you physically move it while the follower (your data-collection arm) mirrors the motion. If you only have one arm, use keyboard teleoperation with the SVRC SDK's `KeyboardTeleop` class; it is slower, but it works.
2. **Launch the teleoperation server.** With both arms connected and ROS 2 running, launch the teleoperation node: `ros2 launch openarm_teleop leader_follower.launch.py`. You should see joint-state mirroring in the terminal output immediately. The web UI at `localhost:8080/teleop` shows a live visualization.
3. **Set speed to 30% for your first session.** The speed parameter in the launch file defaults to 100%. For your first session, set `speed_scale:=0.3`. Slower speed gives you more time to react, reduces the chance of joint-limit trips, and produces smoother demonstrations. Increase to 60–80% once you are comfortable.
4. **Practice the target task motion.** Before recording anything, spend 20–30 minutes practicing the pick-and-place motion you will use in Unit 4. Aim for consistent start and end positions. The robot should return to the same home pose before each attempt. Consistency here is what makes your dataset learnable.
5. **Run a 5-minute continuous session.** Teleoperate continuously for 5 minutes without stopping, disconnecting, or triggering an error. This confirms your arm, cables, and CAN bus are stable enough for a full recording session. If the arm stops or throws an error during this test, diagnose the problem before moving to Unit 4.
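To check the practice-step consistency objectively, you can log the start and end pose of each attempt and count how many land close to the home pose. This is a sketch under assumed names (`consistent_attempts`, a hypothetical 5 cm tolerance), not an official tool:

```python
import math

def consistent_attempts(attempts, home, tol=0.05):
    """Count attempts whose start AND end poses both lie within
    `tol` (Euclidean distance, e.g. meters) of the home pose.

    `attempts` is a list of (start_pose, end_pose) tuples; each pose
    is an (x, y, z) position. The tolerance is an assumption -- pick
    one that matches your task's precision needs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(1 for start, end in attempts
               if dist(start, home) <= tol and dist(end, home) <= tol)
```

If 8 of 10 practice attempts pass, you meet the consistency bar described in the completion criteria below.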
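For the 5-minute stability test, a dropped CAN frame or a loose cable usually shows up as a gap in the joint-state timestamps. A small post-check over a logged timestamp list (the function name and the 3-period slack are assumptions, not a documented diagnostic) might look like:

```python
def find_dropouts(timestamps, expected_hz=50, slack=3.0):
    """Return (index, gap_seconds) pairs where the gap between
    consecutive samples exceeds `slack` nominal periods.

    At 50 Hz the nominal period is 20 ms, so with slack=3.0 any
    gap over 60 ms is flagged -- a hint of CAN bus or cable trouble.
    """
    limit = slack / expected_hz
    return [(i, t1 - t0)
            for i, (t0, t1) in enumerate(zip(timestamps, timestamps[1:]))
            if t1 - t0 > limit]
```

An empty result over a full 5-minute log is good evidence your setup is stable enough for the recording sessions in Unit 4.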
Glove Teleoperation
If you have a data glove (such as the Paxini Gen3 or Brainco glove), you can use it as a more natural teleoperation interface that captures finger-level data. This is not required for the pick-and-place demo in this path, but it unlocks dexterous manipulation tasks. Read the glove teleoperation guide →
Unit 3 Complete When...
You can teleoperate the arm continuously for 5 minutes without interruption, connection errors, or joint limit trips. The arm follows your leader arm (or keyboard inputs) smoothly. You have practiced the pick-and-place motion enough that you can execute it consistently — same start position, same end position, same grip timing — at least 8 times out of 10. That consistency is what you are taking into Unit 4.