
R1 Autonomous Navigation System Tutorial

1. Product Introduction

The system includes mapping, localization, navigation, and control modules. The Galaxea R1 robot can build a point cloud map of the environment and use it for global localization, autonomous movement to target points, and obstacle avoidance.

2. Hardware Introduction

2.1 Performance Parameters

Localization | Description
Localization Method | LiDAR-based SLAM
Localization Frequency | 100 Hz
Localization Accuracy | <0.05 m

Motion Method | Description
Control Method | Autonomous Navigation (Path Tracking); Remote Control
Maximum Speed | 0.6 m/s
Obstacle Avoidance Method | Navigate around obstacles
Obstacle Avoidance Frequency | 10-20 Hz

Network | Description
Wired Network | Supported
WiFi | Supported

2.2 Sensor Configuration

The Galaxea R1 is equipped with various sensors, including 9 high-definition cameras and 2 LiDAR units, enabling it to perceive its surroundings in all directions and perform precise operations.

R1_FOV

Sensor | Values
Camera | Head: 1 x Binocular Depth Camera; Wrist: 2 x Monocular Depth Camera; Chassis: 5 x Monocular Camera
LiDAR | 1 x 360° (two LiDARs are optional)

2.2.1 Camera

Specification | Head | Wrist | Chassis
Type | Binocular Depth Camera | Monocular Depth Camera | Monocular Camera
Quantity | 1 | 2 | 5
Output Resolution | 1920 x 1080 @30fps | 1280 x 720 @30fps (RGB 1920 x 1080) | 1920 x 1080 @30fps
Field of View | 110°H x 70°V x 120°D | 87°H x 58°V x 95°D | 118°H x 62°V
Depth Range | 0.3 m ~ 20 m | 0.2 m ~ 3 m | N/A
Operating Temp. Range | -10 °C ~ +45 °C | 0 °C ~ +85 °C | -40 °C ~ +85 °C
Dimensions | 175L x 30W x 32H mm | 90L x 25W x 25H mm | 30L x 30W x 23H mm
Weight | 164 g | 75 g | <50 g

2.2.2 LiDAR

The chassis can be equipped with up to two 360-degree LiDARs*, which offer high precision and strong resistance to interference.

The R1 computing unit is available in single-SoC and dual-SoC configurations:

Units | Single SoC | Dual SoC
Basic Computing Capability | 8-core 2.2 GHz CPU | 2 x 8-core 2.2 GHz CPU
Deep Learning Computing Capability | 200 TOPS | 550 TOPS
Memory | 1 x LPDDR5 @ 32G | 2 x LPDDR5 @ 32G
Hard Disk | 1 x SSD @ 1T | 2 x SSD @ 512G
Camera | 8 x GMSL | 8 x GMSL (SoC-1) + 8 x GMSL (SoC-2)
Ethernet | 4 x Gigabit Ethernet (M12 cable) | 3 x SoC-1 Gigabit Ethernet (M12 cable) + 3 x SoC-2 Gigabit Ethernet (M12 cable)
WiFi Module | M.2 WiFi with AP mode | M.2 WiFi with AP mode

* One LiDAR is standard; the number of LiDARs can be selected according to user needs.

3. Software Introduction

Please ensure that your environment meets the following software dependency requirements:

  1. Hardware Dependency: R1 computing unit
  2. Operating System Dependency: Ubuntu 20.04 LTS
  3. Middleware Dependency: ROS Noetic

Ensure that the R1 robot is running intelligence system version V1.1.0 or higher. For access, please email product@galaxea.ai or call 4008780980.

If you need to deploy the navigation system for the first time, please follow the steps below to install the required dependencies:

sudo apt install libsdl1.2-dev
sudo apt install libsdl-image1.2-dev
sudo apt install libsdl2-image-dev
sudo apt install ros-noetic-grid-map-core
sudo apt install ros-noetic-grid-map-ros
sudo apt install libgoogle-glog-dev
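
To confirm that the dependencies were installed correctly, you can optionally query dpkg:

dpkg -l | grep -E "libsdl|ros-noetic-grid-map|libgoogle-glog"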

4. Localization and Navigation Operation Process

Map building is the foundational step for the robot's autonomous navigation. The robot records map data (bag files) via remote control, processes and builds the map on a local computer, and finally uploads the map to the specified directory on the robot to complete the deployment. By following the tutorial below, you can set the target pose, modify the target file, and run the process to achieve point-to-point navigation.

4.1 Building the Map

  1. Login to R1

    Log in to the R1 ECU via SSH.

    ssh nvidia@robot_ip
    # Enter the password  (default: nvidia)
    
  2. Start R1

    Run the following command to start the relevant nodes.

    cd work/galaxea/install/share/startup_config/script/
    ./ota_script.sh boot
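
    After startup, you can optionally verify that the sensor driver topics used in the next step are being published:

    source ~/work/galaxea/install/setup.bash
    rostopic list | grep hdas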
    

4.1.1 Record Data Packets

Run the following command to start recording the bag file.

cd ~
# -O names the output bag map_data.bag so that it matches the transfer steps in section 4.1.2.
rosbag record -O map_data /hdas/imu_chassis /hdas/lidar_chassis_left /hdas/feedback_chassis

Control the robot to move within the space to be mapped using the Joystick Controller, ensuring that all areas that need navigation are covered. For instructions on operating the robot chassis with the Joystick Controller, please refer to the Joystick Controller guide.

When the map data recording is completed, press Ctrl + C to end the recording.

Notes:

  • Please move the robot to the area to be mapped.
  • At the start of the recording, the robot must remain stationary for at least 5 seconds to ensure data quality.
  • During the data recording, ensure that there are no dynamic objects (such as moving people or objects) in the environment to avoid interference with map construction.
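
Once the recording has stopped, you can optionally inspect the bag on the robot with rosbag info; the example below assumes the bag was saved as ~/map_data.bag, matching the transfer steps in the next section:

rosbag info ~/map_data.bag
# Confirm that /hdas/imu_chassis, /hdas/lidar_chassis_left and /hdas/feedback_chassis all contain messages.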

4.1.2 Data Transmission

  1. Confirm the Bag File Path
    On the R1 robot, confirm the path of the recorded bag file. By default, the bag file will be saved in the user's home directory (~).

    ls -lh ~/map_data.bag
    
    If the file name is different from the default, please record the actual file name.

  2. Transfer Data to Local Computer Using SCP
    Use the SCP tool to transfer the bag file from the robot to your local computer. Ensure that your local computer is connected to the same network as the robot and that the network connection is stable. In the terminal on your local computer, run the following command:

    scp nvidia@robot_ip:~/map_data.bag /local/path/to/save/
    # nvidia@robot_ip: the username and IP address of R1.
    # ~/map_data.bag: the path to the bag file on R1.
    # /local/path/to/save/: the directory on the local computer where the bag file will be saved.
    

  3. Verify File Transfer
    On your computer, check whether the file has been successfully transferred and confirm that the file size is consistent with the one on the robot.

    ls -lh /local/path/to/save/map_data.bag

    If the file transfer is successful, you will see the same file size as on the robot.

  4. Send the Data Package
    Please send the data package to support@galaxea.ai or to customer service via WeChat Work.

  5. Upload the Map to the Robot
    After the map has been built, upload the map and calibration files from your local computer to the specified directories on the robot:

    ssh nvidia@{robot_ip} "mkdir -p ~/galaxea/calib ~/galaxea/maps"
    scp -r ~/mapping_data/map/* nvidia@{robot_ip}:~/galaxea/maps/
    scp -r ~/mapping_data/robot_calibration.json nvidia@{robot_ip}:~/galaxea/calib/
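
    To confirm the upload, you can optionally list the target directories on the robot:

    ssh nvidia@{robot_ip} "ls -lh ~/galaxea/maps ~/galaxea/calib"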

4.2 Initiating the Positioning Function

Before activating the positioning function, make sure the robot is located within an area that has already been mapped.

  1. Start the R1 node
    Execute the following commands on R1 to start the related nodes.

    ssh nvidia@{robot_ip}
    cd work/galaxea/install/share/startup_config/script/
    ./ota_script.sh boot
    

  2. Obtain Positioning Information
    Set the remote control to chassis control mode and drive the robot in a small loop (within about 2 m) inside the mapped area. Ensure that the robot successfully localizes itself.

    In the R1 terminal, run the following command to check the positioning status:

        source ~/work/galaxea/install/setup.bash
        rosrun tf tf_echo map body
    

    If data similar to the following is returned, localization is successful:

    - Translation: [3.280, -0.743, 0.008]
    - Rotation: in Quaternion [0.000, -0.004, -0.147, 0.989] # x y z w
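
    The same map -> body transform can also be read programmatically. The snippet below is a minimal sketch using the standard ROS tf API, with the frame names map and body taken from the tf_echo command above (the node name localization_check is arbitrary):

    #!/usr/bin/env python
    # Look up the map -> body transform once to confirm localization is running.
    import rospy
    import tf

    rospy.init_node('localization_check')
    listener = tf.TransformListener()
    # Wait up to 5 seconds for the transform to become available.
    listener.waitForTransform('map', 'body', rospy.Time(0), rospy.Duration(5.0))
    trans, rot = listener.lookupTransform('map', 'body', rospy.Time(0))
    print('Translation:', trans)          # [x, y, z]
    print('Rotation (quaternion):', rot)  # [x, y, z, w]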
    

4.3 Set the Target Pose Position

  1. Drive the robot to the target point
    After positioning has started successfully, remotely control the robot to the target point you have chosen. Ensure that the center of R1 is at least 45 cm away from any obstacle.

  2. Record the pose information
    Every time the robot reaches the target point, record the robot pose information at that position.

    - Translation: [3.280, -0.743, 0.008]
    - Rotation: in Quaternion [0.000, -0.004, -0.147, 0.989] # x y z w

  3. Update the navigation target point script

    Repeat the above steps for each target point. After recording the pose information of all target points, add them to the navigation target point script.

    The script example is as follows:

    point_nav.py
    
    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import PoseStamped
    from system_manager_msg.msg import TaskRequest, TaskResponse
    class NavigationEngager:
        def __init__(self):
            # Initialize the ROS node
            rospy.init_node('navigation_engager', anonymous=True)
            # Initialize the position array
            self.pose_array = self.create_pose_array()
            self.current_index = 0  # Current target position index
    
            # Set up Publisher and Subscriber
            self.publisher = rospy.Publisher('/system_manager/task/navigation_enage', TaskRequest, queue_size=10)
            self.subscriber = rospy.Subscriber('/system_manager/task/response', TaskResponse, self.response_callback)
    
            # State variable
            self.engage_pending = False  # Indicates whether the current request is waiting for a response.
            rospy.loginfo("Navigation engager node initialized.")
    
            # Start sending the first "engage" message.
            rospy.sleep(1)  # Make sure that Publisher has been initialized.
            self.send_engage()
    
        def create_pose_array(self):
            """Create an array of PoseStamped objects that includes the target position."""
            pose_array = []
            pose = PoseStamped()
            pose.header.frame_id = "map"
            pose.pose.position.x = 3.280 
            pose.pose.position.y = -0.743
            pose.pose.position.z = 0.008
            pose.pose.orientation.x = 0.000 
            pose.pose.orientation.y = -0.004
            pose.pose.orientation.z =  -0.147
            pose.pose.orientation.w = 0.989
            pose_array.append(pose)
    
            # New target point added
            # pose = PoseStamped()
            # pose.header.frame_id = "map"
            # pose.pose.position.x = 5.761 
            # pose.pose.position.y = -0.146
            # pose.pose.position.z = 0.087
            # pose.pose.orientation.x = -0.008 
            # pose.pose.orientation.y = -0.001
            # pose.pose.orientation.z = 0.551
            # pose.pose.orientation.w = 0.835
            # pose_array.append(pose)
    
            return pose_array
    
        def send_engage(self):
            """Send the "engage" message for the current target pose."""
            print("index =", self.current_index)
    
            if self.current_index >= len(self.pose_array):
                self.current_index = 0
            if not self.engage_pending and self.current_index < len(self.pose_array):
                msg = TaskRequest()
                # Set the fields of the message as needed
                msg.task_type = 1
                msg.navigation_task.target_pose = self.pose_array[self.current_index]
                self.publisher.publish(msg)
                self.current_index += 1
                self.engage_pending = True  # Marked as awaiting response
    
        def response_callback(self, msg):
            """Handle the response message"""
            if self.engage_pending:
                self.engage_pending = False  # Reset the status and allow sending new engage.
                rospy.sleep(0.1)  # Control the tempo of sending.
                self.send_engage()  # Send out the new "engage" message.
    
        def run(self):
            """Run node"""
            rospy.spin()
    
    if __name__ == '__main__':
        try:
            engager = NavigationEngager()
            engager.run()
        except rospy.ROSInterruptException:
            pass
    

    You can add any number of target points. To do so, uncomment and adapt the following block in the Python script, adding one such block per additional target point.

            # Add a new target point
            # pose = PoseStamped()
            # pose.header.frame_id = "map"
            # pose.pose.position.x = 5.761 
            # pose.pose.position.y = -0.146
            # pose.pose.position.z = 0.087
            # pose.pose.orientation.x = -0.008 
            # pose.pose.orientation.y = -0.001
            # pose.pose.orientation.z = 0.551
            # pose.pose.orientation.w = 0.835
            # pose_array.append(pose)
    

    After the target points are set, execute the script; the robot will then navigate to the target points autonomously in a loop.

    source ~/work/galaxea/install/setup.bash
    python3 point_nav.py
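
    While the script loops through the target points, you can optionally monitor the task feedback in a second terminal (the response topic is listed in the System Manager table in section 5.7):

    source ~/work/galaxea/install/setup.bash
    rostopic echo /system_manager/task/response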
    

5. Software Interface

5.1 System Diagram

R1_navigation_system_diagram

5.2 Driving Interface

R1 offers multiple driver interfaces for communicating with and controlling its hardware devices.

5.2.1 Chassis Drive Interface

/motion_control/chassis_speed: This topic belongs to the chassis status feedback ROS package, which defines multiple topics to report the status of the chassis motors. Please refer to the "Chassis Drive Interface" section in the Software Guide for more detailed information.

5.2.2 LiDAR Interface

/hdas/lidar_chassis_left: Used for environmental perception and distance measurement, providing real-time environmental information to the robot. Please refer to the "LiDAR Interface" section in the Software Guide for more detailed information.

5.2.3 IMU Interface

/hdas/imu_chassis: Used to measure the robot's acceleration and angular velocity, providing data support for navigation and attitude control. Please refer to the "IMU Interface" section in the Software Guide for more detailed information.

5.3 Motion Control Interface

R1 offers multiple motion control interfaces for achieving precise control of the robot's movements. Below are the main motion control interfaces and their descriptions.

5.3.1 Chassis Control Interface

/motion_target/target_speed_chassis: Used to control the movement of the robot chassis, including speed and direction control. Please refer to the "Chassis Control Interface" section in the Software Guide for more detailed information.
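
As a reference, the command below is a minimal sketch that publishes a single zero-velocity command on this topic; it assumes the geometry_msgs/Twist message type listed in the navigation interface table (section 5.5) and should be used with care, not while the navigation service is actively driving the chassis:

rostopic pub -1 /motion_target/target_speed_chassis geometry_msgs/Twist '{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'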

5.4 Localization Interface

The localization interfaces allow the R1 robot to achieve autonomous navigation and environmental perception. Through these interfaces, the robot receives data from various sensors, such as the IMU and LiDAR, enabling precise multi-sensor fusion localization. These interfaces ensure that the robot can accurately perceive its own position and orientation in complex environments, providing reliable data support for subsequent path planning and navigation.

Topic Name | I/O | Description | Message Type
/hdas/imu_chassis | Input | IMU data, used for multi-sensor fusion positioning | sensor_msgs/Imu
/hdas/lidar_chassis_left | Input | Multi-line LiDAR point cloud, used for positioning | sensor_msgs/PointCloud2
/localization/localization_results | Output | SLAM positioning status | localization_msg/LocLocalization
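
To check that the localization inputs and output are alive, standard ROS tools can be used with the topic names from the table above:

rostopic hz /hdas/imu_chassis
rostopic hz /hdas/lidar_chassis_left
rostopic echo -n 1 /localization/localization_results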

5.5 Navigation Interface

The navigation interfaces allow the R1 robot to perform autonomous path planning and motion control. These interfaces enable the robot to conduct global and local path planning based on the input sensor data (such as LiDAR point clouds and the SLAM positioning status) and to output control commands that drive the robot chassis. The navigation interface not only supports obstacle avoidance, but can also update the robot's motion trajectory and task status in real time, ensuring that the robot completes navigation tasks efficiently and safely.

Topic Name | I/O | Description | Message Type
/hdas/lidar_chassis_left | Input | Multi-line LiDAR point cloud, used for obstacle avoidance | sensor_msgs/PointCloud2
/localization/localization_results | Input | SLAM positioning status | localization_msg/LocLocalization
/system_manager/task/request | Input | Navigation task interface | system_manager_msg/TaskRequest
/nav/local_path | Output | The local path planned by the local path planner | sensor_msgs/PointCloud2
/nav/global_path | Output | The global path planned by the global path planner | sensor_msgs/PointCloud2
/nav/robot_global_traj | Output | The global trajectory of the robot's movement | nav_msgs/Path
/nav/global_map | Output | Global cost map for navigation, used for planning the global path | nav_msgs/OccupancyGrid
/nav/local_map | Output | Local cost map for navigation, used for planning local paths | nav_msgs/OccupancyGrid
/nav/global_goal | Output | The received navigation target point, used for visualization | geometry_msgs/PoseStamped
/motion_target/target_speed_chassis | Output | Speed command output by the navigation module | geometry_msgs/Twist
/system_manager/task/response | Output | The completion status of the navigation task | system_manager_msg/TaskResponse
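
As an example of consuming these outputs, the sketch below subscribes to the robot's global trajectory and logs how many poses it contains; it is a minimal illustration that assumes only the nav_msgs/Path type listed in the table above:

#!/usr/bin/env python
# Minimal monitoring node: reports the length of the robot's global trajectory.
import rospy
from nav_msgs.msg import Path

def traj_callback(msg):
    # msg.poses is a list of geometry_msgs/PoseStamped along the trajectory.
    rospy.loginfo("Global trajectory contains %d poses", len(msg.poses))

rospy.init_node('traj_monitor', anonymous=True)
rospy.Subscriber('/nav/robot_global_traj', Path, traj_callback)
rospy.spin()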

5.6 Navigation Service Interface

5.6.1 Start Navigation Service

Execute the following command to start the navigation service.

Service name: /nav_service  # Used to start or shut down the navigation system
Type: std_srvs/SetBool
Example: rosservice call /nav_service "data: true"

Note: The navigation service is enabled by default when the robot is started.

Note: Ensure that the R1 robot is in an environment with map data established when the navigation service is enabled, and the surrounding environment remains consistent with that during map generation to ensure the accuracy of positioning and navigation.
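
The service can also be called from Python; the snippet below is a minimal sketch using the std_srvs/SetBool type stated above (the node name nav_service_client is arbitrary):

#!/usr/bin/env python
# Start (data=True) or stop (data=False) the navigation service programmatically.
import rospy
from std_srvs.srv import SetBool

rospy.init_node('nav_service_client', anonymous=True)
rospy.wait_for_service('/nav_service')
toggle_nav = rospy.ServiceProxy('/nav_service', SetBool)
response = toggle_nav(True)  # pass False to shut the navigation service down
print(response.success, response.message)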

5.6.2 Shutting Down Navigation Service

Execute the following command to shut down the navigation service.

Service name: /nav_service  # Used to start or shut down the navigation system
Type: std_srvs/SetBool
Example: rosservice call /nav_service "data: false"

5.7 System Manager

The System Manager is the core management module of the R1 robot system, responsible for coordinating and managing various tasks and services of the robot. Through System Manager, users can trigger navigation tasks, monitor the status of tasks, and receive feedback on task completion.

Topic Name | I/O | Description | Message Type
/system_manager/task/navigation_enage | Input | Triggers the system_manager to construct a task from the target_pose provided by the user and send the task topic to the navigation module. | system_manager_msg/TaskRequest
/system_manager/task/response | Input | After the navigation module has completed the navigation task, it replies to the system_manager with a topic indicating success or failure. | system_manager_msg/TaskResponse
/system_manager/task/request | Output | The system_manager sends the task topic to the navigation module. | system_manager_msg/TaskRequest