# dirtybot

Open source address: https://github.com/MM-X/dirtybot

**dirtybot is a smart desktop voice-controlled car based on the Horizon X3, TogetherROS, and Arduino, featuring voice interaction, perception, SLAM mapping, and planning control.** Its advantages are:
- **Uses only a small amount of 3D printing and acrylic machining, is highly DIY-friendly, and is built from mature off-the-shelf parts at low cost.**
- **Uses an offline voice board (only 64 RMB) configured through a graphical interface (no coding required), which greatly enhances the car's playability.**
- **The lower layer uses an Arduino Nano (ATmega328P, 8-bit), which makes it easy to program your own ideas; a board costing just over ten RMB still runs the control loop at 100 Hz.**
- **The upper layer is based on the Horizon X3 Pi (cheaper than a Raspberry Pi, with much stronger AI performance) and TogetherROS (ROS 2), with ready-made perception algorithms and applications (body tracking, gesture control).**
In short, this project is easy to build and highly playable, making it well suited to anyone with some hands-on skills. If you would rather start from a ready-made kit, the Guyuehome **OriginBot** open-source robot kit is strongly recommended: https://www.originbot.org/
Suggested implementable features:

By specific function:
- **Desk cleaning**: push large trash off the desktop onto the floor, and collect small trash (and dust) into an onboard trash compartment
- **Voice control**, mobile app control, keyboard control, gesture control
- **Human body tracking**, cat and dog tracking, **voice interaction** (go to the cup, go to the trash can, push the cup back, restore scrambled desktop items to their mapped positions after mapping, announce what the camera detects, tell a person standing in front of the car "don't block my way", and so on; the features depend on your imagination)
By area involved:
- **Voice interaction**: voice wake-up, voice switching, voice control of vehicle motion, voice parameter adjustment, voice start/stop of processes (e.g. toggling human-tracking control), voice broadcast of recognized objects
- **Perception**: target detection and segmentation, plus estimation of each target's position and size on the desktop plane relative to the car (fusing camera and lidar)
- **Mapping and localization**: SLAM mapping and map updating (including positive obstacles such as cups, and, using the ToF sensor, negative obstacles such as the desktop edge)
- **Planning and control**: navigation planning, path planning, tracking control
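As a sketch of the negative-obstacle idea above: a downward-facing ToF sensor normally sees the desktop at a known distance, and a reading far beyond that means the floor has "dropped away", i.e. a desk edge. The function name, enum, and thresholds below are illustrative assumptions, not the project's actual code.

```cpp
#include <cstdint>

// Hypothetical classifier for a downward-facing ToF reading. A cell ahead
// of the car is a "negative obstacle" (desktop edge) when the measured
// range exceeds the expected sensor-to-desktop distance by a margin.
enum class CellState { Free, NegativeObstacle };

CellState classify_tof(uint16_t range_mm, uint16_t expected_mm,
                       uint16_t margin_mm = 20) {
    if (range_mm > expected_mm + margin_mm) {
        return CellState::NegativeObstacle;  // floor dropped away: desk edge
    }
    return CellState::Free;
}
```

In a costmap, such cells would be marked lethal just like a positive obstacle, so the planner treats the desk edge as an impassable wall.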
Software composition:
# Arduino:
The Arduino uses two libraries; the two devices share a single I2C bus:
- ToF: https://github.com/pololu/vl53l0x-arduino
- MPU9250 (the IMU calibration routine is included in this library's examples; because the Nano's memory is limited, run the example sketch separately if the IMU needs calibration): https://github.com/hideakitai/MPU9250
**Special note:** the ros2 directory in this repository comes from Guyuehome's [OriginBot](https://gitee.com/guyuehome/originbot), open-source repository: https://gitee.com/guyuehome/originbot, with some changes made to adapt it to the lower-level sensors. Currently only parts of the code in the originbot_base and originbot_msgs folders have been modified, and the upper/lower communication protocol is a correspondingly modified version of the OriginBot serial protocol. Special thanks to **Guyuehome** for leading so many people into the ROS paradise.

Pin definitions:
```cpp
#include <SoftwareSerial.h>

// motor PWM outputs
const uint8_t Motor1_PinA = 5;
const uint8_t Motor1_PinB = 6;
const uint8_t Motor2_PinA = 9;
const uint8_t Motor2_PinB = 10;
// quadrature encoder inputs (channel A on external-interrupt pins 2/3)
const uint8_t Encoder1_PinA = 2;
const uint8_t Encoder1_PinB = 4;
const uint8_t Encoder2_PinA = 3;
const uint8_t Encoder2_PinB = 7;
// software serial link (RX = 12, TX = 13)
SoftwareSerial mySerial(12, 13);
```
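With each encoder's channel A on an interrupt pin and channel B on a plain GPIO, wheel odometry typically comes from quadrature decoding. The portable sketch below shows the standard lookup-table technique (the function and table names are ours, not the project's): states are encoded as `(A << 1) | B`, and the table maps each (previous, current) state pair to a tick delta.

```cpp
#include <cstdint>

// Tick delta for each (previous state << 2) | current state transition.
// Valid forward transitions give +1, backward give -1, and same-state or
// invalid (double-step) transitions give 0.
static const int8_t QUAD_TABLE[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

// Call on every edge (or at a fixed high rate) with the current logic
// levels of channels A and B; accumulates signed encoder ticks.
void quadrature_update(uint8_t a, uint8_t b, uint8_t &prev_state,
                       int32_t &ticks) {
    uint8_t state = static_cast<uint8_t>((a << 1) | b);
    ticks += QUAD_TABLE[(prev_state << 2) | state];
    prev_state = state;
}
```

On the Nano, the same update would run inside the interrupt handlers attached to pins 2 and 3, feeding the 100 Hz control loop with tick counts.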
Differences from the original OriginBot protocol:
- The IMU fields carry processed data divided by 100, not raw sensor data
- The sensor feedback frame contains no battery-voltage value; instead it carries the MCU cycle time and the voice commands forwarded from the voice board
- The control frame carries the voice-board commands received from the X3 Pi, which the MCU forwards onward
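One common reading of the "/100" convention above is fixed-point hundredths: a processed floating-point value is transmitted as a small integer and the receiver divides by 100 to recover it. A minimal sketch under that assumption (function names are ours):

```cpp
#include <cstdint>

// Fixed-point "hundredths" encoding: send a float as an int16_t scaled by
// 100, recover it on the other side by dividing by 100. This halves the
// bytes per field versus a 4-byte float at the cost of resolution/range.
int16_t encode_hundredths(float value) {
    return static_cast<int16_t>(value * 100.0f);
}

float decode_hundredths(int16_t raw) {
    return static_cast<float>(raw) / 100.0f;
}
```

An int16_t in hundredths covers roughly ±327, which comfortably fits processed IMU quantities such as angles in radians or accelerations in m/s².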
**Since each message between the MCU and the voice board can carry at most 4 bytes, and the voice board's processing logic is limited, communication between them must rely on a predefined protocol (e.g. mapping each command to a single integer code). Communication between the X3 Pi and the voice board is relayed through the MCU, so a protocol between those two must also be defined: for example, the 80 classes of the FCOS detection algorithm map to 80 integers, enabling voice broadcast of each detected target's name.**
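To make the 4-byte constraint concrete, here is one hypothetical frame layout for the MCU-to-voice-board link: a start byte, a command type, a one-byte payload (e.g. an FCOS class index 0..79), and an XOR checksum. The exact layout, header value, and command codes are assumptions for illustration, not the project's real protocol.

```cpp
#include <cstdint>

// Hypothetical 4-byte frame: header | cmd | payload | checksum.
struct VoiceFrame {
    uint8_t header;   // fixed start byte, assumed 0xAA here
    uint8_t cmd;      // command type, e.g. 0x01 = "announce detected class"
    uint8_t payload;  // class index 0..79 for the 80 FCOS categories
    uint8_t checksum; // header ^ cmd ^ payload
};

VoiceFrame make_announce_frame(uint8_t class_id) {
    VoiceFrame f{0xAA, 0x01, class_id, 0};
    f.checksum = static_cast<uint8_t>(f.header ^ f.cmd ^ f.payload);
    return f;
}

bool frame_valid(const VoiceFrame &f) {
    return f.checksum == static_cast<uint8_t>(f.header ^ f.cmd ^ f.payload);
}
```

The voice board would hold an 80-entry table of prerecorded or synthesized names indexed by the payload byte, so the X3 Pi only ever needs to relay a class integer through the MCU.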
End
- Good suggestions are welcome; ask questions in the Issues, which are checked occasionally
- This document will be synced to the Horizon developer community; everyone is welcome to read and comment
- The project is completely open source; modifications are welcome
- Updates will continue as time allows…