At present, most robot machine-vision competitions at universities involve keeping the robot from leaving its track, probably to reduce the difficulty at the undergraduate stage. Competition sites therefore provide navigation maps, generally composed of solid black lines. As long as the robot can stay on the line without derailing, its position can be determined from the navigation line, so that handling or recognition tasks can be carried out. In this article I will introduce the machine-vision anti-derailment algorithm of a handling robot that uses the Sunrise X3 Pi as the upper computer, together with its implementation principle.
1 How the handling robot runs on the map's navigation track
First look at the navigation map
The handling robot is in this state on the map
Since a photograph of the physical robot would not be very vivid (mainly because the robot is too ugly), I use the following pictures to explain the principle. After each turn of the handling robot, what the camera faces is a long straight line. A more vivid explanation is shown in Figure 2 below.
When the central axis of the handling robot coincides with the central axis of the navigation line, that is, when the robot is centered on the line, the navigation condition is at its best. The navigation line seen from the camera on top then looks like this, right in the middle of the picture.
When the handling robot drifts left or right, it is in this condition,
and the navigation line seen from the camera directly above looks like this.
So the task is clear: photograph the navigation line with the camera in real time, obtain the deviation of the handling robot relative to the line after a series of algorithmic processing, and feed that deviation back to the lower computer in real time. The lower computer receives the deviation information and, through its attitude-adjustment algorithm, steers the handling robot back to the middle of the track.
2. Using the Sunrise X3 Pi and OpenCV to analyze the body attitude of the handling robot, and sending the analysis results back to the lower computer through serial communication
First, an outline of the implementation principle. The captured navigation-line image is filtered to remove noise, converted to a grayscale image, and then binarized. The binarization is arranged so that the result contains only 0 (the background) and 255 (the navigation line), which makes it possible to find the central axis of the line mathematically. Because the camera is fixed to the body, the vertical midline of the captured picture is the central axis of the body. Comparing the coordinates and slopes of the two central axes gives the attitude of the robot body relative to the navigation line, and the analysis result is sent to the lower computer through serial communication.
Import the libraries you want to use
import cv2 as cv
import time
import numpy as np
import sys
import os
import serial
import serial.tools.list_ports
Set the parameters of the serial port: a baud rate of 115200, using UART3 on the 40-pin header (it seems this is the only one that can be selected).
os.system('ls /dev/tty[a-zA-Z]*')  # list the available serial devices
uart_dev = '/dev/ttyS3'            # UART3 on the 40-pin header
baudrate = 115200
ser = serial.Serial(uart_dev, baudrate, timeout=1)
Select Camera No. 8 for video capture
cap_follow = cv.VideoCapture(8)
Set two variables to record the coordinates of the two central axes, respectively
line_1 = 0
line_2 = 0
Take a frame and apply a median filter. Here is why the median filter is used. Mean filtering, box filtering, and Gaussian filtering are all linear filtering methods. Since the result of a linear filter is a linear combination of all pixel values in the window, noisy pixels are also taken into account; the noise is not eliminated but merely smeared into a softer form. In this case a nonlinear filter can do better. Median filtering differs from the methods above in that it no longer uses a weighted mean to compute the result: it replaces the current pixel value with the median of all pixel values in its neighborhood, so isolated outliers are discarded outright.
ret, frame = cap_follow.read()
blur = cv.medianBlur(frame, 15)
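The advantage of the median over the mean can be illustrated with a tiny pure-NumPy sketch (the 3x3 window, `filter3x3` helper, and test image are made up for illustration; `cv.medianBlur` above does the real work):

```python
import numpy as np

def filter3x3(img, reducer):
    """Apply a 3x3 sliding-window filter (median or mean) to a 2-D array.
    Border pixels are left unchanged for simplicity."""
    h, w = img.shape
    out = img.copy().astype(float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = reducer(img[r - 1:r + 2, c - 1:c + 2])
    return out

# A flat gray patch with one salt-noise pixel in the middle.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0

median = filter3x3(img, np.median)
mean = filter3x3(img, np.mean)

print(median[2, 2])  # 100.0 -> the outlier is discarded entirely
print(mean[2, 2])    # about 117 -> the outlier is only smeared, not removed
```

The mean result still carries a trace of the noise pixel, which is exactly the "softer but still present" behavior described above.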
The captured image is cropped so that only a small central block is displayed and processed (refer to the picture below). Processing the whole frame takes up computing power and gives the same result as local processing, so the picture is first cropped to the small useful area and then converted to grayscale and binarized.
ROI = blur[0:210, 345:605]  # crop to the small useful block
gray = cv.cvtColor(ROI, cv.COLOR_BGR2GRAY)
ret_val, dst = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)  # invert so the dark line becomes 255
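With `cv.THRESH_OTSU`, the first threshold argument (0 here) is ignored: OpenCV picks the threshold that maximizes the between-class variance of the histogram. The selection rule can be sketched in pure NumPy (illustrative only, not OpenCV's exact code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0    # number of pixels at or below t
    sum0 = 0.0  # intensity sum of pixels at or below t
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: a dark "line" on a bright background.
img = np.full((40, 40), 220, dtype=np.uint8)
img[:, 18:22] = 30     # dark vertical stripe
t = otsu_threshold(img)
print(t)  # 30: the first maximizer; any t in [30, 219] separates the two modes
```

This is why the navigation line, being much darker than the floor, is separated reliably without hand-tuning a threshold.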
Find the central axis of the handling robot and the central axis of the navigation line
n = dst.shape                         # (height, width) of the binary image
(x_point, y_point) = np.nonzero(dst)  # row / column indices of the line pixels
if len(x_point) < 2:                  # not enough line pixels to fit a line
    continue                          # skip this frame (this code runs inside the capture loop)
f1 = np.polyfit(x_point, y_point, 1)  # fit column = f1[0] * row + f1[1]
p1 = np.poly1d(f1)
point1 = (int(p1(0)), 0)              # where the fitted line crosses the top edge
point2 = (int(p1(n[0])), n[0])        # ... and the bottom edge
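The fitting step works on any binary mask in which the line pixels are nonzero. A self-contained sketch on a synthetic tilted line (the mask size, slope, and line width are made up for illustration):

```python
import numpy as np

# Synthetic binary mask: a line whose column drifts with the row,
# col = 0.2 * row + 40, drawn 3 pixels wide.
h, w = 200, 260
mask = np.zeros((h, w), dtype=np.uint8)
for row in range(h):
    col = int(0.2 * row + 40)
    mask[row, col:col + 3] = 255

rows, cols = np.nonzero(mask)   # row / column indices of line pixels
f = np.polyfit(rows, cols, 1)   # fit col = f[0] * row + f[1]
p = np.poly1d(f)

top = (int(p(0)), 0)            # (x, y) where the fit crosses the top edge
bottom = (int(p(h)), h)         # ... and the bottom edge
offset = w / 2 - p(h)           # body midline minus line position at the bottom

print(round(f[0], 2))           # recovered slope, close to 0.2
print(top, bottom)
```

A positive `offset` means the fitted line sits left of the picture's midline, which is the quantity the robot later converts into a steering command.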
Mark the line on the processed image so that it can be observed during debugging
ROI = cv.line(ROI, (int(n[1] / 2), 0), (int(n[1] / 2), n[0]), (0, 0, 255), 2)  # red: body midline
ROI = cv.line(ROI, point1, point2, (0, 255, 0), 2)                             # green: fitted navigation line
Display the cropped grayscale picture and the picture marked with the two central axes on the computer for easy observation during debugging (on the X3 Pi this display code is deleted, because the server image has no desktop: HDMI display is troublesome and occupies computing power).
cv.imshow("gray", gray)
cv.imshow('inside', ROI)
Output the coordinate difference between the two central axes and the fitted line's intercept in the debug window (note that `f1[1]` is the intercept of the fit; `f1[0]` would be the slope).
print(int((n[1] / 2 - p1(n[0])) / 20))  # scaled horizontal offset at the bottom row
print(f1[1])                            # intercept of the fit (f1[0] would be the slope)
line_1 = n[1] / 2 - p1(n[0])            # offset: body midline minus line position
line_2 = f1[1]
distance = abs(int((n[1] / 2 - p1(n[0])) / 15))  # quantized so it fits in one ASCII digit
The two differences are quantized into a 0-9 gradient and transmitted to the lower computer through the serial port.
if line_1 < -5 and line_2 > 141:
    print('left')
    ser.write(b'l')
    ser.write("{}".format(distance).encode('utf-8'))
elif 5 < line_1 and line_2 < 136:
    print('right')
    ser.write(b'r')
    ser.write("{}".format(distance).encode('utf-8'))
else:
    print('go straight')
    ser.write(b's')
    ser.write("{}".format(distance).encode('utf-8'))
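The branch above defines a tiny "one command byte plus one ASCII digit" protocol. Factoring it into a pure function makes the thresholds easier to test without a serial port (the function name and the byte framing are my own; the threshold constants are the ones from the code above):

```python
def encode_command(line_1, line_2, distance):
    """Map the axis offset (line_1) and fit intercept (line_2) to a
    command byte plus a single ASCII digit, as sent over the serial port."""
    distance = min(abs(int(distance)), 9)   # the lower computer reads one digit
    if line_1 < -5 and line_2 > 141:
        cmd = b'l'
    elif 5 < line_1 and line_2 < 136:
        cmd = b'r'
    else:
        cmd = b's'
    return cmd + str(distance).encode('utf-8')

print(encode_command(-12, 150, 3))   # b'l3'
print(encode_command(20, 120, 5))    # b'r5'
print(encode_command(0, 138, 0))     # b's0'
```

The 2-byte frame keeps the parser on the lower computer trivial: read one command character, then read one digit.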
3 Demonstration of the running effect
To make the demonstration easier, I use the computer to simulate the handling robot (I wrote a Processing program for the simulation). The main reason is that once the handling robot is powered on it runs very fast, and the correction after a deviation happens almost in an instant, so it is hard to see. The video of the real handling robot running is placed at the end of the article.
3.1 The Processing program for the simulation
The Processing program is not explained line by line here. Its function is to simulate the navigation line and the lower computer: it receives the attitude data fed back by the Sunrise X3 Pi through the serial port and adjusts the attitude of the simulated handling robot according to that data, so that the navigation line returns to the middle of the picture taken by the camera, that is, the central axis of the handling robot coincides with the central axis of the navigation line.
import processing.serial.*;

Serial serial0;
char d;              // command byte: 'l', 'r' or 's'
char distance;       // single ASCII-digit distance that follows the command
int distance_int;
int road_x = 700;    // current x position of the simulated navigation line
int road_z = 0;      // extra x offset of the line's bottom end (0 = vertical)

void setup() {
  size(1500, 1000);
  serial0 = new Serial(this, "COM1", 115200);
}

void draw() {
  if (serial0.available() > 0) {
    d = serial0.readChar();
    if (d == 'r') {
      distance = serial0.readChar();
      distance_int = int(distance) - 48;   // ASCII digit -> integer
      if (distance_int <= 10) {
        road_x = road_x - distance_int;    // shift the line left
      }
    }
    if (d == 'l') {
      distance = serial0.readChar();
      distance_int = int(distance) - 48;
      if (distance_int <= 10) {
        road_x = road_x + distance_int;    // shift the line right
      }
    }
    if (d == 's') {
      distance = serial0.readChar();       // consume the digit so it is not read as the next command
    }
  }
  background(255);
  strokeWeight(80);
  line(road_x, -1, road_x + road_z, 1000); // draw the simulated navigation line
  //line(500, -1, 500, 1000);
  //rect(road_x, -1, 100, 1000);
  println(road_x);
  fill(0);
}
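For reference, the simulator's byte-stream handling can be mirrored in Python, which also shows why every command must be followed by exactly one digit (this decoder is my own sketch for testing the protocol, not part of the robot code):

```python
def apply_stream(road_x, stream):
    """Replay a serial byte stream against the simulator's rule:
    'r' shifts the line left, 'l' shifts it right, 's' is a no-op;
    each command byte is followed by one ASCII-digit distance."""
    it = iter(stream.decode('ascii'))
    for d in it:
        distance_int = ord(next(it)) - 48   # ASCII digit -> integer
        if d == 'r' and distance_int <= 10:
            road_x -= distance_int
        elif d == 'l' and distance_int <= 10:
            road_x += distance_int
        # 's': the distance byte is consumed but ignored
    return road_x

print(apply_stream(700, b'r5s0l2'))   # 700 - 5 + 2 = 697
```

If the digit after 's' were not consumed, it would be misread as the next command byte, so the 2-byte framing must be kept on both ends.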
3.2 Run the program on the computer first (because the debug windows can be seen on the computer)
Of the two small windows in the upper right corner, the one without the two straight lines is the cropped original grayscale picture, and the other is the one with the two central axes drawn on it.
The main body of the video simulates the camera moving right, i.e. the robot moving right. In the debug window with the central axes you can see that the navigation line appears on the left of the image.
The lower-computer simulation program then adjusts the body of the handling robot, and you can see the central axis of the navigation line on the screen move back to the right, which proves that the data feedback and the attitude-adjustment algorithm work.
3.3 Running on the Sunrise X3 Pi
Connect the USB camera to the Sunrise X3 Pi, and connect the X3 Pi to the computer with a USB-to-TTL adapter.
Power on, log in over ssh, and operate online.
I created a user folder under the app folder and put all the test code I wrote there.
Execute the code in the terminal and open the simulation software on the computer side:
python3 /app/user/xunxian_pi.py
Move the camera by hand, first to the right to simulate the robot body drifting right, then to the left to simulate drifting left. As the video shows, the robot adjusts back quickly and the black navigation line follows the camera's movement.
The terminal continuously prints the feedback values and commands.
left indicates a left adjustment, right indicates a right adjustment, and go straight indicates that no adjustment is required.
The two lines of numbers below them are the coordinate-offset value and the fitted line's intercept (angle) value.
live video
This camera-based anti-derailment part of the handling robot took a long time; I really cannot count how many hours went into it. I tried many other algorithms before, such as erosion and dilation and linear filtering, and many attitude-handling schemes, such as selective color recognition to judge whether the robot had derailed. I took many detours, but finally settled on a method that barely works. Thank God.
Before this I used a Raspberry Pi with a desktop system. I do not know whether the desktop was the reason or whether the Raspberry Pi simply lacks computing power, but it fed back only four or five processed frames per second, so by the time a deviation was detected the robot had already derailed badly and there was no time to adjust. The Sunrise X3 Pi, on the other hand, is really fast: the feedback refreshes too quickly to read with the naked eye, so the adjustments are timely and derailments are very rare. Horizon, YYDS.
I see that many new-energy-vehicle programs are now built on Horizon chips; the new car Changan released a few days ago uses a Horizon solution, which is really impressive. Another great thing is their WeChat group, where technical staff answer questions all day long, and you can really learn a lot. Thanks to Horizon, and I hope Horizon keeps getting stronger and better.