Handheld SLAM Design Based on the Sunrise X3 Development Board (Part One)

1、 Project introduction, hardware selection guide, environment configuration

We have designed and implemented a handheld SLAM mapping device based on the Sunrise X3 Pi development board. It uses the board's computing power to run a pure laser odometry from LiDAR data, corrects laser scan distortion and the SLAM pose with an IMU, and achieves a handheld SLAM mapping function.

Recently, I have been working on SLAM mapping for some drones, which build maps using only LiDAR, without wheel odometry. I tried the Hector algorithm before, but the results were not ideal. After borrowing ideas from the LIO-SAM code, I came up with this design. The design first uses the IMU to correct the motion distortion in the LiDAR point cloud, extracts feature points from the corrected cloud, and then uses PL-ICP to compute a laser odometry. With the TF tree properly maintained, this laser odometry plus a commonly used SLAM package (such as Gmapping) achieves the handheld mapping function.

Without further ado, let's first look at the demonstrations (the demo video for each individual function is at the end of the corresponding chapter).

The hardware mainly consists of the Sunrise X3 development board, a Lakibeam 1 LiDAR, and a JY61 gyroscope module, plus the power module, a 12V lithium battery, a PCB, an acrylic frame, and a 3D-printed handle.

The Sunrise X3 Pi is an ARM-based development board built around Horizon Robotics' Sunrise X3 SoC, with quad-core Cortex-A53 CPUs running at up to 1.2GHz. The board offers multiple peripheral and expansion interfaces to suit different application scenarios, supports both Android and Linux, and can be used in fields such as smart homes, the smart Internet of Things, and embedded development. It ships with 1GB of DDR3 memory and 8GB of eMMC storage, supports TF card expansion, and has built-in WiFi and Bluetooth for convenient wireless communication.

The Lakibeam 1 is a high-precision single-line LiDAR that uses a solid-state laser and high-speed scanning to obtain real-time 2D point cloud data of the surrounding environment, with high accuracy, high resolution, and high reliability. It is also compact, lightweight, low-power, and easy to integrate, meeting the needs of a variety of complex environments.

The JY61 gyroscope module is a six-axis sensor module integrating an accelerometer and a gyroscope, capable of measuring an object's acceleration, angular velocity, and attitude. Built with MEMS technology, it offers high precision, high stability, low power consumption, and a small footprint, and has a wide range of applications. It communicates over I2C or a serial port, so it can be connected to microprocessors or microcontrollers for convenient data processing and control. The module also supports multiple output formats (raw data, angle, angular velocity, acceleration, etc.), from which users can choose as needed. In applications, the JY61 can be used for attitude control, navigation, and motion control. For example, on a robot it can measure tilt angle and motion state to support autonomous navigation and control; on an autonomous vehicle it can measure acceleration and angular velocity to support stability control and autonomous driving.
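As used by the driver code later in this article, the module's serial protocol packs each measurement into an 11-byte frame: a 0x55 header, a type byte (0x51 acceleration, 0x52 angular velocity, 0x53 angle), eight data bytes as little-endian signed 16-bit values, and a one-byte checksum. A minimal decoding sketch (the function name is illustrative; scale factors follow the vendor protocol, and the last 16-bit field of each frame is ignored here):

```python
import struct

def decode_frame(frame: bytes):
    """Decode one 11-byte JY61 frame into three scaled floats.

    Scale factors per the vendor protocol: +/-16 g for acceleration,
    +/-2000 deg/s for angular velocity, +/-180 deg for angles.
    """
    assert len(frame) == 11 and frame[0] == 0x55
    # checksum is the low byte of the sum of the first 10 bytes
    if (sum(frame[:10]) & 0xFF) != frame[10]:
        raise ValueError("checksum mismatch")
    scale = {0x51: 16.0, 0x52: 2000.0, 0x53: 180.0}[frame[1]]
    x, y, z, _unused = struct.unpack("<hhhh", frame[2:10])
    return tuple(v / 32768.0 * scale for v in (x, y, z))

# Example: a frame encoding 1 g on the x axis
frame = bytes([0x55, 0x51, 0x00, 0x08, 0, 0, 0, 0, 0, 0])
frame += bytes([sum(frame) & 0xFF])   # append checksum
acc = decode_frame(frame)             # -> (1.0, 0.0, 0.0)
```

The driver presented below does the same work incrementally, byte by byte, which is more robust when frames arrive split across serial reads.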

The following is the hardware framework diagram for this project.

The entire system is powered by a 12V lithium battery. An XL4016E1 power module steps the 12V down to 5V at up to 8A for the Sunrise X3 development board. The LiDAR is Zhichi's Lakibeam 1 single-line LiDAR, connected to the board via Ethernet. The gyroscope is WitMotion's JY61 module (now discontinued; JY60/JY61P/JY901 and similar can replace it), connected through the serial port on the Sunrise X3 Pi's IO header (the serial device is /dev/ttyS3).

I designed a simple PCB and the structural parts, as shown in the figure below.

Handle section (3D printing)

I am using the Ubuntu 20.04 Desktop image for the Sunrise X3 Pi here. Next, let's configure the ROS environment. (The steps are the usual ones: changing apt sources, installing ROS, running rosdep, creating a workspace, and configuring environment variables; any standard tutorial, e.g. on CSDN, can be followed directly.)

Before burning the Ubuntu system image, users need to make the following preparations:

· Prepare a Micro SD card with at least 8GB capacity
· An SD card reader
· Download the Ubuntu image archive and extract the system image file: systemi_sdcard.img
· Download the image burning tool balenaEtcher, download link: https://www.balena.io/etcher/
· Obtain the development board's Ubuntu image from the resource center, open the extracted image file with balenaEtcher, and burn it to the prepared SD card

The next step is to create the workspace and packages. The code used in this project has been uploaded to Gitee; the download link is: https://gitee.com/zhao-yuanqi/sunrise_slam.git

2、 Design of IMU driver function package and IMU data visualization
The imu_driver gyroscope package obtains data from the WT61C TTL six-axis gyroscope module and publishes IMU data.

Function:

The node reads acceleration, angular velocity, and Euler angles from the WT61C TTL sensor, publishes IMU data, and visualizes it in RViz.

The detailed composition of the package is as follows:

Cfg: configuration files

Stores imu_display.rviz, the RViz startup configuration; when launching RViz for testing, the pre-configured file can be used directly.

Include: No content, no need to consider.

Launch: launch files

Imu_driver.launch starts two nodes: the WT61CTTL node obtains data from the IMU sensor and provides the calibration services, while the imu_node node converts the raw data into sensor_msgs/Imu and publishes it.

Msg: message type files

WT61CTTL.msg defines the message published for the WT61C TTL sensor; it contains nine float64 fields holding the sensor's raw data.

float64 x_acc  
float64 y_acc  
float64 z_acc   
float64 x_gyro  
float64 y_gyro 
float64 z_gyro 
float64 roll   
float64 pitch   
float64 yaw    

Srv: service type files

The setAccZero.srv service calibrates the acceleration data; its content is as follows:

---
int32 status

The request is empty; calling the service returns the calibration status.

The setYawZero.srv service calibrates the heading angle (yaw); its content is as follows:

---
int32 status

The request is empty; calling the service returns the calibration status.

As running time increases, the acceleration values may drift and produce significant errors. When that happens, place the device in a stationary state and call this service to recalibrate the acceleration.

The heading angle (yaw) is obtained by integrating angular velocity, so its error also grows over time. Call this service to reset the yaw value to 0.
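To see why this re-zeroing matters, note that yaw is an integral of the gyro's angular rate, so even a small constant bias grows without bound. A toy illustration (the 0.1 deg/s bias is a made-up number for demonstration):

```python
def integrate_yaw(rates_dps, dt, bias_dps=0.0):
    """Integrate angular-rate samples (deg/s) into a yaw angle (deg)."""
    yaw = 0.0
    for r in rates_dps:
        yaw += (r + bias_dps) * dt
    return yaw

# A stationary sensor (true rate 0) with a 0.1 deg/s bias,
# sampled at 100 Hz for 60 seconds:
drift = integrate_yaw([0.0] * 6000, dt=0.01, bias_dps=0.1)
# drift is about 6 degrees after one minute, hence the set_yaw_zero service
```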

Scripts: script files

Demo.py is the example program provided by the manufacturer; refer to it for details.

WT61CTTL.py is my modification of that example program; see the comments below for details.

#!/usr/bin/env python3
#coding:utf-8

import rospy
from imu_driver.msg import WT61CTTL   # custom raw-data message type
from imu_driver.srv import *          # setAccZero / setYawZero service types
import serial                         # pyserial, for reading the sensor's UART stream

ACCData = [0.0]*8    # data bytes of the current acceleration frame
GYROData = [0.0]*8   # data bytes of the current angular-velocity frame
AngleData = [0.0]*8  # data bytes of the current angle frame
FrameState = 0       # frame type being parsed (1 = acc, 2 = gyro, 3 = angle)
Bytenum = 0          # byte position inside the current frame
CheckSum = 0         # running checksum

a = [0.0]*3      # acceleration (g)
w = [0.0]*3      # angular velocity (deg/s)
Angle = [0.0]*3  # roll / pitch / yaw (deg)
​
def DueData(inputdata):   # parse the serial byte stream frame by frame
    global  FrameState   
    global  Bytenum
    global  CheckSum
    global  a
    global  w
    global  Angle
    
    for data in inputdata:  
        if data==0x55 and Bytenum==0: 
            CheckSum=data
            Bytenum=1
            continue
        
        elif data==0x51 and Bytenum==1:
            CheckSum+=data
            FrameState=1
            Bytenum=2
        
        elif data==0x52 and Bytenum==1:
            CheckSum+=data
            FrameState=2
            Bytenum=2
        
        elif data==0x53 and Bytenum==1:
            CheckSum+=data
            FrameState=3
            Bytenum=2
        
        elif FrameState==1: # acc              
            if Bytenum<10:          
                ACCData[Bytenum-2]=data 
                CheckSum+=data
                Bytenum+=1
            else:
                if data == (CheckSum&0xff): 
                    a = get_acc(ACCData)
                CheckSum=0          
                Bytenum=0
                FrameState=0
        
        elif FrameState==2: # gyro   
            if Bytenum<10:
                GYROData[Bytenum-2]=data
                CheckSum+=data
                Bytenum+=1
            else:
                if data == (CheckSum&0xff):
                    w = get_gyro(GYROData)
                CheckSum=0
                Bytenum=0
                FrameState=0
​
        elif FrameState==3: # angle            
            if Bytenum<10:
                AngleData[Bytenum-2]=data
                CheckSum+=data
                Bytenum+=1
            else:
                if data == (CheckSum&0xff):
                    Angle = get_angle(AngleData)
                    d = list(a)+list(w)+list(Angle)
                    #print("a(g):%10.3f %10.3f %10.3f w(deg/s):%10.3f %10.3f %10.3f Angle(deg):%10.3f %10.3f %10.3f"%d)
                    
                    jhr_imu = WT61CTTL()  
        
                    jhr_imu.x_acc = d[0]
                    jhr_imu.y_acc = d[1]  
                    jhr_imu.z_acc = d[2]
                    jhr_imu.x_gyro = d[3] 
                    jhr_imu.y_gyro = d[4]
                    jhr_imu.z_gyro = d[5]
                    jhr_imu.roll = d[6]
                    jhr_imu.pitch = d[7]
                    jhr_imu.yaw = d[8]
        
                    wt61cttlpub.publish(jhr_imu)
                    rate.sleep()    
​
                CheckSum=0
                Bytenum=0
                FrameState=0
​
​

def get_acc(datahex):  
    axl = datahex[0]                                        
    axh = datahex[1]
    ayl = datahex[2]                                        
    ayh = datahex[3]
    azl = datahex[4]                                        
    azh = datahex[5]
    
    k_acc = 16.0
 
    acc_x = (axh << 8 | axl) / 32768.0 * k_acc
    acc_y = (ayh << 8 | ayl) / 32768.0 * k_acc
    acc_z = (azh << 8 | azl) / 32768.0 * k_acc
    
    if acc_x >= k_acc:
        acc_x -= 2 * k_acc
    if acc_y >= k_acc:
        acc_y -= 2 * k_acc
    if acc_z >= k_acc:
        acc_z-= 2 * k_acc
        
    return acc_x,acc_y,acc_z
 
 
def get_gyro(datahex):                                      
    wxl = datahex[0]                                        
    wxh = datahex[1]
    wyl = datahex[2]                                        
    wyh = datahex[3]
    wzl = datahex[4]                                        
    wzh = datahex[5]
    k_gyro = 2000.0
 
    gyro_x = (wxh << 8 | wxl) / 32768.0 * k_gyro
    gyro_y = (wyh << 8 | wyl) / 32768.0 * k_gyro
    gyro_z = (wzh << 8 | wzl) / 32768.0 * k_gyro
​
    if gyro_x >= k_gyro:
        gyro_x -= 2 * k_gyro
    if gyro_y >= k_gyro:
        gyro_y -= 2 * k_gyro
    if gyro_z >=k_gyro:
        gyro_z-= 2 * k_gyro
        
    return gyro_x,gyro_y,gyro_z
 
 
def get_angle(datahex):                                 
    rxl = datahex[0]                                        
    rxh = datahex[1]
    ryl = datahex[2]                                        
    ryh = datahex[3]
    rzl = datahex[4]                                        
    rzh = datahex[5]
    k_angle = 180.0
 
    angle_x = (rxh << 8 | rxl) / 32768.0 * k_angle
    angle_y = (ryh << 8 | ryl) / 32768.0 * k_angle
    angle_z = (rzh << 8 | rzl) / 32768.0 * k_angle
​
    if angle_x >= k_angle:
        angle_x -= 2 * k_angle
    if angle_y >= k_angle:
        angle_y -= 2 * k_angle
    if angle_z >=k_angle:
        angle_z-= 2 * k_angle
         
    return angle_x,angle_y,angle_z
​
def set_acc_zero(req):
    ser.write(b'\xFF\xAA\x67')
    status = 0
    return setAccZeroResponse(status)
​
def set_yaw_zero(req):
    ser.write(b'\xFF\xAA\x52')
    status = 0
    return setYawZeroResponse(status)
​
if __name__=='__main__': 
    rospy.init_node('WT61CTTL')
​
    wt61cttlpub = rospy.Publisher('WT61CTTL', WT61CTTL, queue_size=10)
    setacczero = rospy.Service('set_acc_zero', setAccZero, set_acc_zero)
    setyawzero = rospy.Service('set_yaw_zero', setYawZero, set_yaw_zero)
​
    rate = rospy.Rate(100)
    
    ser = serial.Serial('/dev/ttyUSB0',115200, timeout=0.5)
    print(ser.is_open)
    while not rospy.is_shutdown():
        datahex = ser.read(33)
        DueData(datahex)

Calibration is completed by initiating a service request.

Imu_node.py subscribes to the WT61CTTL topic and republishes the data in sensor_msgs/Imu format. See the comments below for details.

import rospy    # ROS Python client library
import string   
import math     
import time
import sys
​
from tf.transformations import quaternion_from_euler
from std_msgs.msg import Float32
from sensor_msgs.msg import Imu
from imu_driver.msg import WT61CTTL
​
degrees2rad = math.pi/180.0
imu_yaw_calibration = 0.0
accel_factor = 9.806/256.0
​
seq = 0
​
def WT61CTTL_callback(data):
    yaw_deg = data.yaw
    yaw_deg = yaw_deg + imu_yaw_calibration
​
    yaw = yaw_deg*degrees2rad    
    pitch = float(data.pitch)*degrees2rad
    roll  = float(data.roll)*degrees2rad
​
    imuMsg.linear_acceleration.x = float(data.x_acc)*accel_factor
    imuMsg.linear_acceleration.y = float(data.y_acc)*accel_factor
    imuMsg.linear_acceleration.z = float(data.z_acc)*accel_factor
​
    imuMsg.angular_velocity.x = float(data.x_gyro)*degrees2rad
    imuMsg.angular_velocity.y = float(data.y_gyro)*degrees2rad
    imuMsg.angular_velocity.z = float(data.z_gyro)*degrees2rad
​
    q = quaternion_from_euler(roll, pitch, yaw)
    imuMsg.orientation.x = q[0]
    imuMsg.orientation.y = q[1]
    imuMsg.orientation.z = q[2]
    imuMsg.orientation.w = q[3]
​
    # Orientation covariance estimation
    imuMsg.orientation_covariance = [
    0.0025 , 0 , 0,
    0, 0.0025, 0,
    0, 0, 0.0025
    ]
​
    # Angular velocity covariance estimation
    imuMsg.angular_velocity_covariance = [
    0.02, 0 , 0,
    0 , 0.02, 0,
    0 , 0 , 0.02
    ]
​
    # linear acceleration covariance estimation
    imuMsg.linear_acceleration_covariance = [
    0.04 , 0 , 0,
    0 , 0.04, 0,
    0 , 0 , 0.04
    ]

​
    global seq
    imuMsg.header.stamp= rospy.Time.now()
    imuMsg.header.frame_id = "base_imu_link"
    imuMsg.header.seq = seq
    seq = seq + 1
    
    data_pub.publish(imuMsg)
​
    rate.sleep()
​
rospy.init_node("imu_node")
data_pub = rospy.Publisher("imu_data", Imu, queue_size=1)
rospy.Subscriber('WT61CTTL', WT61CTTL, WT61CTTL_callback)
imuMsg = Imu() 
​
rate = rospy.Rate(100)
rospy.spin()

The two parts above are the core code: the WT61CTTL node reads the raw sensor data, and the imu_node node processes it and publishes it in Imu format on the imu_data topic.
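imu_node obtains its orientation quaternion from tf's quaternion_from_euler. For reference, the underlying math is just the standard roll-pitch-yaw (ZYX) Euler-to-quaternion formula; a dependency-free sketch for sanity-checking values when tf is not at hand:

```python
import math

def quaternion_from_rpy(roll, pitch, yaw):
    """Quaternion (x, y, z, w) from roll/pitch/yaw in radians (ZYX convention,
    matching the default of tf.transformations.quaternion_from_euler)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

# 90 degrees of yaw alone is a pure rotation about z:
q = quaternion_from_rpy(0.0, 0.0, math.pi / 2)
# q is approximately (0, 0, 0.7071, 0.7071)
```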

I put the calibration functions in WT61CTTL because calibration works by writing commands to the serial port. WT61CTTL can be understood as the IMU sensor's driver node, while imu_node handles the processing of the IMU data. During later maintenance, pay attention to which of the two you actually need to modify; do not change both at once!

Read the contents of CMakeLists.txt and package.xml on your own. Known bug: after about 20 minutes of running, the IMU data starts lagging by about 2 seconds; recalibrating the acceleration clears it. A 2-second delay does not prevent use, but we hope to find a proper solution.

The IMU's yaw (Z-axis) value accumulates error after startup, and the error becomes obvious over time, especially during aggressive motion. It is presumably drift from the integration, and we hope to find a solution.

3、 Configuration and use of LiDAR, visualization of LiDAR data, and simple processing of LiDAR data
LakiBeam connects to the computer through a network cable and a power supply. The product's default network configuration is static mode, with the device-side IP set to 192.168.198.2 at the factory. First manually change the IP of the computer's Ethernet port connected to the radar to 192.168.198.1 (host side), then open http://192.168.198.2 in the computer's web browser to access the device's built-in web service.

The dashboard page provides system status monitoring (including CPU usage, system load, memory usage, Ethernet data rate, and system running time), radar status monitoring (including input voltage, input current, system voltage, core temperature, APD bias voltage, motor speed, and laser ranging status), network card information, and device information.

The LiDAR Configuration page provides the radar's scanning configuration (scan frequency, starting and stopping ranging), data-processing configuration (output range and data filtering), data transmission target configuration (target host IP address and data port), and network configuration (switching between DHCP and static IP, and setting a static IP). Note that after changing the network configuration the device must be restarted; the new settings are applied automatically on the next startup.

After configuring the radar IP and local IP, you can start the Lakibeam 1 LiDAR with the following command.

 roslaunch lakibeam1 lakibeam1_scan.launch

The data type of LiDAR is sensor_msgs/LaserScan, and its data format is as follows.

Under ROS, most sensor data carries a header containing a timestamp and a coordinate frame. The ranges array stores the radar's distance measurements and intensities stores the reflection intensities: an angle-plus-distance description in polar coordinates. The remaining fields are the scan's parameters.

Here we write a simple demo that subscribes to the radar's scan messages and converts this polar-coordinate description into a Cartesian-coordinate description.

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <cmath> // cos / sin for the polar-to-Cartesian conversion
​
class LaserScan
{
private:
    ros::NodeHandle node_handle_;
    ros::NodeHandle private_node_;
    ros::Subscriber laser_scan_subscriber_; 
​
public:
    LaserScan();
    ~LaserScan();
    void ScanCallback(const sensor_msgs::LaserScan::ConstPtr &scan_msg);
};
​
LaserScan::LaserScan() : private_node_("~")
{
    ROS_INFO_STREAM("LaserScan initial.");
    laser_scan_subscriber_ = node_handle_.subscribe("scan", 1, &LaserScan::ScanCallback, this);
}
​
LaserScan::~LaserScan()
{
}
​
void LaserScan::ScanCallback(const sensor_msgs::LaserScan::ConstPtr &scan_msg)
{
    ROS_INFO_STREAM(
        "seqence: " << scan_msg->header.seq << 
        ", time stamp: " << scan_msg->header.stamp << 
        ", frame_id: " << scan_msg->header.frame_id << 
        ", angle_min: " << scan_msg->angle_min << 
        ", angle_max: " << scan_msg->angle_max << 
        ", angle_increment: " << scan_msg->angle_increment << 
        ", time_increment: " << scan_msg->time_increment << 
        ", scan_time: " << scan_msg->scan_time << 
        ", range_min: " << scan_msg->range_min << 
        ", range_max: " << scan_msg->range_max << 
        ", range size: " << scan_msg->ranges.size() << 
        ", intensities size: " << scan_msg->intensities.size());
​
    // take the 5th beam as an example and convert polar (range, angle) to Cartesian (x, y)
    double range = scan_msg->ranges[4];
    double angle = scan_msg->angle_min + scan_msg->angle_increment * 4;
    double x = range * cos(angle);
    double y = range * sin(angle);
​
    ROS_INFO_STREAM( 
        "range = " << range << ", angle = " << angle << 
        ", x = " << x << ", y = " << y
    );
}
​
int main(int argc, char **argv)
{
    ros::init(argc, argv, "lesson1_laser_scan_node"); 
    LaserScan laser_scan;
​
    ros::spin();
    return 0;
}

Here we subscribe to the LiDAR data, convert the subscribed polar-coordinate data into Cartesian-coordinate data, and print it.
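For processing whole scans rather than a single beam, the same conversion is applied per beam while skipping invalid returns. A sketch of the idea (in Python for brevity; the field names mirror sensor_msgs/LaserScan, and the function name is illustrative):

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, range_min, range_max):
    """Convert LaserScan-style polar data to a list of (x, y) points,
    skipping out-of-range or invalid beams."""
    points = []
    for i, r in enumerate(ranges):
        if not (range_min <= r <= range_max):
            continue  # drops inf/nan and out-of-range returns
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Two beams at 0 and 90 degrees, plus one invalid return:
pts = scan_to_points(0.0, math.pi / 2, [1.0, 2.0, float('inf')], 0.1, 10.0)
```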

4、 SLAM front end, back end, and loop closure

What is SLAM?

SLAM (Simultaneous Localization and Mapping) is a technology that simultaneously performs localization and map building. It collects environmental information by using sensors (such as LiDAR, cameras, inertial measurement units, etc.), estimates the robot’s position and posture in the environment, and constructs a map of the environment.

How to implement a SLAM?

Implementing SLAM requires the following steps:

1. Choose appropriate sensors: LiDAR, cameras, inertial measurement units, and other sensors can be used to collect environmental information.

2. Data preprocessing: Perform preprocessing operations such as calibration, filtering, and noise reduction on sensor data to improve data quality.

3. Feature extraction: Extracting feature points or descriptors from sensor data for subsequent map construction and localization.

4. Building a map: Based on sensor data and feature points, use SLAM algorithm to construct an environmental map.

5. Robot positioning: Based on sensor data and map information, use SLAM algorithm to estimate the robot's position and posture in the environment.

6. Map update: When the robot moves, it is necessary to update the estimated values of the map and robot position.

7. Parameter tuning: fine-tune the SLAM algorithm to improve the accuracy of localization and map construction.

The commonly used SLAM algorithms include Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), Particle Filter (PF), and Graph Optimization based Methods (such as ORB-SLAM, LSD-SLAM, etc.).

The commonly used SLAM packages include Gmapping, Hector, Cartographer, RTAB-Map, etc.

What are the front end, back end, and loop closure of SLAM? What are they for?

The SLAM system mainly consists of three parts: the front end, the back end, and loop closure detection.

  1. Front end: The front end is the data-processing part of the SLAM system. Its main task is to extract feature points or descriptors from sensor data and, by matching features, estimate the robot's motion trajectory and the positions of map features. The front end typically includes feature extraction, feature matching, motion estimation, and map construction modules.
  2. Back end: The back end is the optimization part of the SLAM system. Its main task is to optimize the trajectory and map feature positions produced by the front end so as to generate consistent map and robot state estimates. Common back-end algorithms include the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), the Particle Filter (PF), and graph-based optimization (as used in systems such as ORB-SLAM and LSD-SLAM).
  3. Loop closure detection: Loop closure detection checks whether the robot has returned to a previously visited place, so that accumulated error can be corrected. It is typically formulated on graph structures such as pose graphs and factor graphs.

In a SLAM system, the front end and back end cooperate: the map features and motion trajectory extracted by the front end become the optimization variables of the back end, which refines them into more accurate map and state estimates. Loop closure detection runs periodically during operation to keep errors from accumulating.

5、 Design of a pure LiDAR odometry using PL-ICP

Odometry is a method of estimating the robot's motion trajectory from the robot's own sensor data. It is usually based on the robot's kinematic model, computing pose changes from estimates of speed and heading. Odometry is mainly used for autonomous navigation and localization, and is an important component of a SLAM system.

Odometry typically uses sensors on the robot chassis, such as wheel encoders or inertial measurement units (IMUs), to measure the robot's motion. Encoders measure the rotation angle and speed of the wheels, from which the trajectory is computed; an IMU measures acceleration and angular velocity, from which attitude and trajectory are computed. In a SLAM system, odometry can initialize the robot's position and attitude, or serve as the front end that estimates the trajectory and the positions of map features.
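For comparison with the laser odometry built later, a wheel-encoder dead-reckoning update for a differential-drive chassis can be sketched as follows. This is purely illustrative (this project deliberately avoids wheel odometry), and the function and its wheel_base parameter are hypothetical:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive dead reckoning: advance the pose (x, y, theta)
    by the distances travelled by the left/right wheels (from encoder ticks)."""
    d = (d_left + d_right) / 2.0              # distance of the chassis centre
    dtheta = (d_right - d_left) / wheel_base  # heading change
    # integrate along the arc, using the mid-point heading
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight line: both wheels advance 1 m, heading unchanged
x, y, th = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.5)
# (x, y, th) == (1.0, 0.0, 0.0)
```

Errors from wheel slip and encoder resolution accumulate in exactly the way described above, which is why the text recommends fusing odometry with LiDAR or cameras.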

The accuracy of odometry is affected by many factors, such as encoder resolution, wheel slip, and imperfections in the robot's motion model. In practice, therefore, odometry is usually combined with other sensors, such as LiDAR or cameras, to improve localization and mapping accuracy.

LiDAR odometry is a method of computing the robot's motion trajectory from the distance measurements a LiDAR takes of the surrounding environment. It typically uses a rotating LiDAR mounted on the robot, which builds a point cloud of the surroundings by measuring the distance from the laser beams to nearby objects, and computes the robot's trajectory from it.

The basic principle is to compare the point cloud data from two consecutive LiDAR scans: by matching the two clouds, the displacement and rotation of the robot between the scans can be computed, and hence its motion trajectory. LiDAR odometry typically uses the ICP (Iterative Closest Point) algorithm, or one of its improved variants, for this matching, to improve accuracy and speed.

Compared with wheel odometry, LiDAR odometry can be used for localization and mapping in a wider range of environments, since LiDAR can measure objects at longer distances. It also provides more accurate environmental information, improving localization and mapping accuracy. The trade-off is that LiDAR odometry involves a large amount of computation and demands more computing power and storage.

The ICP (Iterative Closest Point) algorithm is a standard point cloud registration algorithm, widely used for scan matching in LiDAR odometry. ICP registers two point clouds by minimizing the distance error between them, from which the robot's motion and pose are computed.

The basic process of ICP algorithm is as follows:

1. Initialization: Align two point clouds, select one point cloud as the reference point cloud, and the other point cloud as the point cloud to be matched.

2. Feature extraction: Extract feature points or descriptors from the reference point cloud and the matching point cloud for subsequent point cloud matching.

3. Matching: Match each point in the cloud to be matched to its nearest neighbor in the reference cloud, and calculate the distance error between the two point clouds.

4. Iterative optimization: Optimize point cloud registration by minimizing distance error, update the current pose estimation value, and repeat steps 3 and 4 until convergence.

5. Termination condition: Stop iteration when the distance error is less than the preset threshold or reaches the maximum number of iterations.
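Under the simplifying assumption that correspondences are already known, each iteration's alignment (steps 3 and 4 above) has a closed-form solution: subtract the centroids and recover the rotation from an SVD of the cross-covariance. A minimal 2D sketch of that inner step (illustrative only, not this project's implementation; real ICP re-derives correspondences by nearest-neighbor search each iteration):

```python
import numpy as np

def align_point_to_point(src, dst):
    """Closed-form 2D rigid alignment (Kabsch / SVD) of src onto dst,
    assuming src[i] corresponds to dst[i]. Returns (R, t) with dst ~= src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known motion: rotate a "scan" by 10 degrees and shift it.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2])
R, t = align_point_to_point(src, dst)   # R ~= R_true, t ~= (0.5, -0.2)
```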

There are various optimization methods for ICP algorithm, such as point-to-point and point-to-plane. The point-to-point method usually calculates the Euclidean distance error between two point clouds, while the point-to-plane method uses normal vector information to calculate the distance error between points and planes, which can improve the accuracy of matching.

The PL-ICP (Point to Line Iterative Closest Point) algorithm is an improved ICP algorithm mainly used for point cloud data registration and matching. Compared with the traditional ICP algorithm, the PL-ICP algorithm uses a distance metric from points to lines to calculate the distance error between point clouds, thereby improving the accuracy of matching.

The basic process of the PL-ICP algorithm is as follows:

1. Initialization: Align two point clouds, select one point cloud as the reference point cloud, and the other point cloud as the point cloud to be matched.

2. Feature extraction: Extract feature points or descriptors from the reference point cloud and the matching point cloud for subsequent point cloud matching.

3. Matching: Match each point in the cloud to be matched to its nearest neighbors in the reference cloud, and calculate the point-to-line distance error.

4. Iterative optimization: Optimize point cloud registration by minimizing the distance error between points and lines, update the current pose estimation value, and repeat steps 3 and 4 until convergence.

5. Termination condition: Stop iteration when the distance error is less than the preset threshold or reaches the maximum number of iterations.

The advantage of PL-ICP algorithm is that it can better handle noise and local deformation in point clouds, thereby improving the accuracy and robustness of matching. Meanwhile, the PL-ICP algorithm also has high computational efficiency and can be applied in real-time systems. The PL-ICP algorithm is widely used in fields such as robot navigation, autonomous driving, and 3D reconstruction.
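The point-to-line residual that gives PL-ICP its name is simple to state: take the component of the point's offset normal to the line through its two nearest reference points. A small sketch of that error term (a simplification of the residual minimized in Censi's PL-ICP; the function name is illustrative):

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the (infinite) line through
    reference points a and b: the PL-ICP residual for one correspondence."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # magnitude of the cross product of (b - a) and (p - a), normalized
    # by the segment length, i.e. the normal component of (p - a)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

# A point 1 m above the line through (0, 0) and (2, 0):
d = point_to_line_distance((1.0, 1.0), (0.0, 0.0), (2.0, 0.0))
# d == 1.0
```

Because the reference points of a 2D scan sample a piecewise-continuous surface, this line approximation fits the underlying geometry better than matching to the sampled points themselves, which is the intuition behind PL-ICP's accuracy advantage.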

Here we use Lakibeam 1 LiDAR to implement a pure laser odometer based on the PL-ICP algorithm. The specific code is as follows.#include “lesson3/plicp_odometry.h”

#include "lesson3/plicp_odometry.h"
#include "tf2_geometry_msgs/tf2_geometry_msgs.h"
​
ScanMatchPLICP::ScanMatchPLICP() : private_node_("~"), tf_listener_(tfBuffer_)
{
    ROS_INFO_STREAM("\033[1;32m----> PLICP odometry started.\033[0m");
​
    laser_scan_subscriber_ = node_handle_.subscribe(
        "scan", 1, &ScanMatchPLICP::ScanCallback, this);
​
    odom_publisher_ = node_handle_.advertise<nav_msgs::Odometry>("odom", 50);
​
    InitParams();
​
    scan_count_ = 0;
​
    initialized_ = false;
​
    base_in_odom_.setIdentity();
    base_in_odom_keyframe_.setIdentity();
​
    input_.laser[0] = 0.0;
    input_.laser[1] = 0.0;
    input_.laser[2] = 0.0;
​
    // Initialize output_ vectors as Null for error-checking
    output_.cov_x_m = 0;
    output_.dx_dy1_m = 0;
    output_.dx_dy2_m = 0;
}
​
ScanMatchPLICP::~ScanMatchPLICP()
{
}
​
void ScanMatchPLICP::InitParams()
{
    private_node_.param<std::string>("odom_frame", odom_frame_, "odom");
    private_node_.param<std::string>("base_frame", base_frame_, "base_link");
    // **** keyframe params: when to generate the keyframe scan
    // if either is set to 0, reduces to frame-to-frame matching
    private_node_.param<double>("kf_dist_linear", kf_dist_linear_, 0.1);
    private_node_.param<double>("kf_dist_angular", kf_dist_angular_, 5.0 * (M_PI / 180.0));
    kf_dist_linear_sq_ = kf_dist_linear_ * kf_dist_linear_;
    private_node_.param<int>("kf_scan_count", kf_scan_count_, 10);
​
    // comments copied from algos.h (by Andrea Censi)
​
    // Maximum angular displacement between scans
    if (!private_node_.getParam("max_angular_correction_deg", input_.max_angular_correction_deg))
        input_.max_angular_correction_deg = 45.0;
​
    // Maximum translation between scans (m)
    if (!private_node_.getParam("max_linear_correction", input_.max_linear_correction))
        input_.max_linear_correction = 1.0;
​
    // Maximum ICP cycle iterations
    if (!private_node_.getParam("max_iterations", input_.max_iterations))
        input_.max_iterations = 10;
​
    // A threshold for stopping (m)
    if (!private_node_.getParam("epsilon_xy", input_.epsilon_xy))
        input_.epsilon_xy = 0.000001;
​
    // A threshold for stopping (rad)
    if (!private_node_.getParam("epsilon_theta", input_.epsilon_theta))
        input_.epsilon_theta = 0.000001;
​
    // Maximum distance for a correspondence to be valid
    if (!private_node_.getParam("max_correspondence_dist", input_.max_correspondence_dist))
        input_.max_correspondence_dist = 1.0;
​
    // Noise in the scan (m)
    if (!private_node_.getParam("sigma", input_.sigma))
        input_.sigma = 0.010;
​
    // Use smart tricks for finding correspondences.
    if (!private_node_.getParam("use_corr_tricks", input_.use_corr_tricks))
        input_.use_corr_tricks = 1;
​
    // Restart: Restart if error is over threshold
    if (!private_node_.getParam("restart", input_.restart))
        input_.restart = 0;
​
    // Restart: Threshold for restarting
    if (!private_node_.getParam("restart_threshold_mean_error", input_.restart_threshold_mean_error))
        input_.restart_threshold_mean_error = 0.01;
​
    // Restart: displacement for restarting. (m)
    if (!private_node_.getParam("restart_dt", input_.restart_dt))
        input_.restart_dt = 1.0;
​
    // Restart: displacement for restarting. (rad)
    if (!private_node_.getParam("restart_dtheta", input_.restart_dtheta))
        input_.restart_dtheta = 0.1;
​
    // Max distance for staying in the same clustering
    if (!private_node_.getParam("clustering_threshold", input_.clustering_threshold))
        input_.clustering_threshold = 0.25;
​
    // Number of neighbour rays used to estimate the orientation
    if (!private_node_.getParam("orientation_neighbourhood", input_.orientation_neighbourhood))
        input_.orientation_neighbourhood = 20;
​
    // If 0, it's vanilla ICP
    if (!private_node_.getParam("use_point_to_line_distance", input_.use_point_to_line_distance))
        input_.use_point_to_line_distance = 1;
​
    // Discard correspondences based on the angles
    if (!private_node_.getParam("do_alpha_test", input_.do_alpha_test))
        input_.do_alpha_test = 0;
​
    // Discard correspondences based on the angles - threshold angle, in degrees
    if (!private_node_.getParam("do_alpha_test_thresholdDeg", input_.do_alpha_test_thresholdDeg))
        input_.do_alpha_test_thresholdDeg = 20.0;
​
    // Percentage of correspondences to consider: if 0.9,
    // always discard the top 10% of correspondences with more error
    if (!private_node_.getParam("outliers_maxPerc", input_.outliers_maxPerc))
        input_.outliers_maxPerc = 0.90;
​
    // Parameters describing a simple adaptive algorithm for discarding.
    //  1) Order the errors.
    //  2) Choose the percentile according to outliers_adaptive_order.
    //     (if it is 0.7, get the 70% percentile)
    //  3) Define an adaptive threshold multiplying outliers_adaptive_mult
    //     with the value of the error at the chosen percentile.
    //  4) Discard correspondences over the threshold.
    //  This is useful to be conservative; yet remove the biggest errors.
    if (!private_node_.getParam("outliers_adaptive_order", input_.outliers_adaptive_order))
        input_.outliers_adaptive_order = 0.7;
​
    if (!private_node_.getParam("outliers_adaptive_mult", input_.outliers_adaptive_mult))
        input_.outliers_adaptive_mult = 2.0;
​
    // If you already have a guess of the solution, you can compute the polar angle
    // of the points of one scan in the new position. If the polar angle is not a monotone
    // function of the readings index, it means that the surface is not visible in the
    // next position. If it is not visible, then we don't use it for matching.
    if (!private_node_.getParam("do_visibility_test", input_.do_visibility_test))
        input_.do_visibility_test = 0;
​
    // no two points in laser_sens can have the same corr.
    if (!private_node_.getParam("outliers_remove_doubles", input_.outliers_remove_doubles))
        input_.outliers_remove_doubles = 1;
​
    // If 1, computes the covariance of ICP using the method http://purl.org/censi/2006/icpcov
    if (!private_node_.getParam("do_compute_covariance", input_.do_compute_covariance))
        input_.do_compute_covariance = 0;
​
    // Checks that find_correspondences_tricks gives the right answer
    if (!private_node_.getParam("debug_verify_tricks", input_.debug_verify_tricks))
        input_.debug_verify_tricks = 0;
​
    // If 1, the field 'true_alpha' (or 'alpha') in the first scan is used to compute the
    // incidence beta, and the factor (1/cos^2(beta)) used to weight the correspondence.
    if (!private_node_.getParam("use_ml_weights", input_.use_ml_weights))
        input_.use_ml_weights = 0;
​
    // If 1, the field 'readings_sigma' in the second scan is used to weight the
    // correspondence by 1/sigma^2
    if (!private_node_.getParam("use_sigma_weights", input_.use_sigma_weights))
        input_.use_sigma_weights = 0;
}
​
void ScanMatchPLICP::ScanCallback(const sensor_msgs::LaserScan::ConstPtr &scan_msg)
{
    current_time_ = scan_msg->header.stamp;
    if (!initialized_)
    {
        // caches the sin and cos of all angles
        CreateCache(scan_msg);
​
        if (!GetBaseToLaserTf(scan_msg->header.frame_id))
        {
            ROS_WARN("Skipping scan");
            return;
        }
​
        LaserScanToLDP(scan_msg, prev_ldp_scan_);
        last_icp_time_ = current_time_;
        initialized_ = true;
        return;
    }
​
    start_time_ = std::chrono::steady_clock::now();
​
    LDP curr_ldp_scan;
    LaserScanToLDP(scan_msg, curr_ldp_scan);
​
    end_time_ = std::chrono::steady_clock::now();
    time_used_ = std::chrono::duration_cast<std::chrono::duration<double>>(end_time_ - start_time_);
​

    start_time_ = std::chrono::steady_clock::now();
​
    ScanMatchWithPLICP(curr_ldp_scan, current_time_);
​
    end_time_ = std::chrono::steady_clock::now();
    time_used_ = std::chrono::duration_cast<std::chrono::duration<double>>(end_time_ - start_time_);
}
​
void ScanMatchPLICP::CreateCache(const sensor_msgs::LaserScan::ConstPtr &scan_msg)
{
    a_cos_.clear();
    a_sin_.clear();
    double angle;
​
    for (unsigned int i = 0; i < scan_msg->ranges.size(); i++)
    {
        angle = scan_msg->angle_min + i * scan_msg->angle_increment;
        a_cos_.push_back(cos(angle));
        a_sin_.push_back(sin(angle));
    }
​
    input_.min_reading = scan_msg->range_min;
    input_.max_reading = scan_msg->range_max;
}
​
bool ScanMatchPLICP::GetBaseToLaserTf(const std::string &frame_id)
{
    ros::Time t = ros::Time::now();
​
    geometry_msgs::TransformStamped transformStamped;
    try
    {
        transformStamped = tfBuffer_.lookupTransform(base_frame_, frame_id,
                                                     t, ros::Duration(1.0));
    }
    catch (tf2::TransformException &ex)
    {
        ROS_WARN("%s", ex.what());
        ros::Duration(1.0).sleep();
        return false;
    }
​
    tf2::fromMsg(transformStamped.transform, base_to_laser_);
    laser_to_base_ = base_to_laser_.inverse();
​
    return true;
}
​
void ScanMatchPLICP::LaserScanToLDP(const sensor_msgs::LaserScan::ConstPtr &scan_msg, LDP &ldp)
{
    unsigned int n = scan_msg->ranges.size();
    ldp = ld_alloc_new(n);
​
    for (unsigned int i = 0; i < n; i++)
    {
        // calculate position in laser frame
        double r = scan_msg->ranges[i];
​
        if (r > scan_msg->range_min && r < scan_msg->range_max)
        {
            // fill in laser scan data
​
            ldp->valid[i] = 1;
            ldp->readings[i] = r;
        }
        else
        {
            ldp->valid[i] = 0;
            ldp->readings[i] = -1; // for invalid range
        }
​
        ldp->theta[i] = scan_msg->angle_min + i * scan_msg->angle_increment;
        ldp->cluster[i] = -1;
    }
​
    ldp->min_theta = ldp->theta[0];
    ldp->max_theta = ldp->theta[n - 1];
​
    ldp->odometry[0] = 0.0;
    ldp->odometry[1] = 0.0;
    ldp->odometry[2] = 0.0;
​
    ldp->true_pose[0] = 0.0;
    ldp->true_pose[1] = 0.0;
    ldp->true_pose[2] = 0.0;
}
​
void ScanMatchPLICP::ScanMatchWithPLICP(LDP &curr_ldp_scan, const ros::Time &time)
{
    // CSM is used in the following way:
    // The scans are always in the laser frame
    // The reference scan (prevLDPcan_) has a pose of [0, 0, 0]
    // The new scan (currLDPScan) has a pose equal to the movement
    // of the laser in the laser frame since the last scan
    // The computed correction is then propagated using the tf machinery
​
    prev_ldp_scan_->odometry[0] = 0.0;
    prev_ldp_scan_->odometry[1] = 0.0;
    prev_ldp_scan_->odometry[2] = 0.0;
​
    prev_ldp_scan_->estimate[0] = 0.0;
    prev_ldp_scan_->estimate[1] = 0.0;
    prev_ldp_scan_->estimate[2] = 0.0;
​
    prev_ldp_scan_->true_pose[0] = 0.0;
    prev_ldp_scan_->true_pose[1] = 0.0;
    prev_ldp_scan_->true_pose[2] = 0.0;
​
    input_.laser_ref = prev_ldp_scan_;
    input_.laser_sens = curr_ldp_scan;
​
    double dt = (time - last_icp_time_).toSec();
    double pr_ch_x, pr_ch_y, pr_ch_a;
    GetPrediction(pr_ch_x, pr_ch_y, pr_ch_a, dt);
​
    tf2::Transform prediction_change;
    CreateTfFromXYTheta(pr_ch_x, pr_ch_y, pr_ch_a, prediction_change);
​
    // account for the change since the last kf, in the fixed frame
    prediction_change = prediction_change * (base_in_odom_ * base_in_odom_keyframe_.inverse());
​
    // the predicted change of the laser's position, in the laser frame
    tf2::Transform prediction_change_lidar;
    prediction_change_lidar = laser_to_base_ * base_in_odom_.inverse() * prediction_change * base_in_odom_ * base_to_laser_;
​
    input_.first_guess[0] = prediction_change_lidar.getOrigin().getX();
    input_.first_guess[1] = prediction_change_lidar.getOrigin().getY();
    input_.first_guess[2] = tf2::getYaw(prediction_change_lidar.getRotation());
​
    // If they are non-Null, free covariance gsl matrices to avoid leaking memory
    if (output_.cov_x_m)
    {
        gsl_matrix_free(output_.cov_x_m);
        output_.cov_x_m = 0;
    }
    if (output_.dx_dy1_m)
    {
        gsl_matrix_free(output_.dx_dy1_m);
        output_.dx_dy1_m = 0;
    }
    if (output_.dx_dy2_m)
    {
        gsl_matrix_free(output_.dx_dy2_m);
        output_.dx_dy2_m = 0;
    }
    
    start_time_ = std::chrono::steady_clock::now();
    sm_icp(&input_, &output_);
​
    end_time_ = std::chrono::steady_clock::now();
    time_used_ = std::chrono::duration_cast<std::chrono::duration<double>>(end_time_ - start_time_);
​
    tf2::Transform corr_ch;
​
    if (output_.valid)
    {
        tf2::Transform corr_ch_l;
        CreateTfFromXYTheta(output_.x[0], output_.x[1], output_.x[2], corr_ch_l);
​
        corr_ch = base_to_laser_ * corr_ch_l * laser_to_base_;
​
        base_in_odom_ = base_in_odom_keyframe_ * corr_ch;
​
        latest_velocity_.linear.x = corr_ch.getOrigin().getX() / dt;
        latest_velocity_.angular.z = tf2::getYaw(corr_ch.getRotation()) / dt;
    }
    else
    {
        ROS_WARN("not Converged");
    }
​
    PublishTFAndOdometry();
​
    if (NewKeyframeNeeded(corr_ch))
    {
        ld_free(prev_ldp_scan_);
        prev_ldp_scan_ = curr_ldp_scan;
        base_in_odom_keyframe_ = base_in_odom_;
    }
    else
    {
        ld_free(curr_ldp_scan);
    }
​
    last_icp_time_ = time;
}
​
void ScanMatchPLICP::GetPrediction(double &prediction_change_x,
                                   double &prediction_change_y,
                                   double &prediction_change_angle,
                                   double dt)
{
    // Predict the motion since the last scan from the most recent velocity estimate.
    // Note: the yaw rate lives in angular.z (reading linear.z here was a bug), and
    // fabs() keeps small negative velocities from being discarded incorrectly.
    prediction_change_x = fabs(latest_velocity_.linear.x) < 1e-6 ? 0.0 : dt * latest_velocity_.linear.x;
    prediction_change_y = fabs(latest_velocity_.linear.y) < 1e-6 ? 0.0 : dt * latest_velocity_.linear.y;
    prediction_change_angle = fabs(latest_velocity_.angular.z) < 1e-6 ? 0.0 : dt * latest_velocity_.angular.z;
​
    if (prediction_change_angle >= M_PI)
        prediction_change_angle -= 2.0 * M_PI;
    else if (prediction_change_angle < -M_PI)
        prediction_change_angle += 2.0 * M_PI;
}
​
void ScanMatchPLICP::CreateTfFromXYTheta(double x, double y, double theta, tf2::Transform &t)
{
    t.setOrigin(tf2::Vector3(x, y, 0.0));
    tf2::Quaternion q;
    q.setRPY(0.0, 0.0, theta);
    t.setRotation(q);
}
​
void ScanMatchPLICP::PublishTFAndOdometry()
{
    geometry_msgs::TransformStamped tf_msg;
    tf_msg.header.stamp = current_time_;
    tf_msg.header.frame_id = odom_frame_;
    tf_msg.child_frame_id = base_frame_;
    tf_msg.transform = tf2::toMsg(base_in_odom_);
​
    tf_broadcaster_.sendTransform(tf_msg);
​
    nav_msgs::Odometry odom_msg;
    odom_msg.header.stamp = current_time_;
    odom_msg.header.frame_id = odom_frame_;
    odom_msg.child_frame_id = base_frame_;
    tf2::toMsg(base_in_odom_, odom_msg.pose.pose);
    odom_msg.twist.twist = latest_velocity_;
​
    odom_publisher_.publish(odom_msg);
}
​
bool ScanMatchPLICP::NewKeyframeNeeded(const tf2::Transform &d)
{
    scan_count_++;
​
    if (fabs(tf2::getYaw(d.getRotation())) > kf_dist_angular_)
        return true;
​
    if (scan_count_ == kf_scan_count_)
    {
        scan_count_ = 0;
        return true;
    }
        
    double x = d.getOrigin().getX();
    double y = d.getOrigin().getY();
    if (x * x + y * y > kf_dist_linear_sq_)
        return true;
​
    return false;
}
​
int main(int argc, char **argv)
{
    ros::init(argc, argv, "lesson3_scan_match_plicp_node"); 
    ScanMatchPLICP scan_match_plicp;
​
    ros::spin();
    return 0;
}

The running effect is shown in the figure.

The tf tree is relatively simple. The pose between the laser and base_link is fixed, so a static tf is published directly. The relationship between base_link and odom is provided by the laser odometry, so it is a dynamic tf that is computed and broadcast together with the odom data.
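The conjugation that moves the PL-ICP result between these frames (the `corr_ch = base_to_laser_ * corr_ch_l * laser_to_base_` line in the code above) can be illustrated with a small self-contained SE(2) sketch (plain C++, not part of the project code): a correction measured in the laser frame is mapped into the equivalent motion of base_link.

```cpp
#include <cassert>
#include <cmath>

// Minimal SE(2) transform: rotate by theta, then translate by (x, y).
struct Se2 {
    double x, y, theta;
};

// Compose a * b: apply b first, then a (same convention as tf2::Transform).
Se2 Compose(const Se2 &a, const Se2 &b)
{
    double c = std::cos(a.theta), s = std::sin(a.theta);
    return {a.x + c * b.x - s * b.y,
            a.y + s * b.x + c * b.y,
            a.theta + b.theta};
}

// Inverse transform: undoes the rotation and translation.
Se2 Inverse(const Se2 &t)
{
    double c = std::cos(t.theta), s = std::sin(t.theta);
    return {-(c * t.x + s * t.y),
            -(-s * t.x + c * t.y),
            -t.theta};
}

// A correction measured in the laser frame, re-expressed as a motion of the
// base: corr_base = base_to_laser * corr_laser * laser_to_base.
Se2 CorrectionInBaseFrame(const Se2 &base_to_laser, const Se2 &corr_laser)
{
    return Compose(Compose(base_to_laser, corr_laser), Inverse(base_to_laser));
}
```

For example, with the laser hypothetically mounted 0.2 m ahead of base_link and rotated 90° (illustrative values only), a 0.1 m advance along the laser's x axis comes out as a 0.1 m motion along base_link's y axis with no rotation.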

Let's take a look at the node relationship diagram in rqt_graph. The lesson3_plicp_odometry node subscribes to the LiDAR data from lakibeam_lidar and to the static TF, and the odometry it publishes is displayed in RViz through TF.