RBE 3002
I worked with two classmates to program a LiDAR-equipped TurtleBot3, using ROS and Python, to navigate and map an unknown maze in both simulation and the real world, and to localize itself within that map using Monte Carlo localization if moved to an unknown spot.
The TurtleBot moves around an unknown space, checking for walls and deciding where to move next to explore unknown areas.
We kept at least four terminals open to run the various ROS programs and monitor the bot's decision making.
The unknown space as detected by the robot. The green dots are live data from the LiDAR, the black dots are walls the robot has found with the LiDAR data, and the light grey is known free space. Diagram from RViz.
The same view after the robot has determined frontiers and C-space. C-space (in purple) marks the cells the center of the bot cannot occupy without the edge of the bot hitting a wall. Frontiers (in red), detected with a wave-front algorithm, are the boundaries between known free space (in light grey) and unknown space. The robot moves within the free space to bring the frontier into the LiDAR's line of sight. Diagram from RViz.
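The C-space and frontier steps can be sketched on a small occupancy grid. This is a minimal illustration, not our actual ROS node, and the cell values (1 = wall, 0 = free, -1 = unknown) are simplifying assumptions standing in for the real map message's encoding:

```python
import numpy as np

def add_cspace(grid, radius):
    """Mark every free cell within `radius` cells of a wall as C-space (2).

    The robot's center may only be planned through cells that stay 0.
    """
    result = grid.copy()
    for wy, wx in np.argwhere(grid == 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = wy + dy, wx + dx
                if 0 <= y < grid.shape[0] and 0 <= x < grid.shape[1] and grid[y, x] == 0:
                    result[y, x] = 2
    return result

def find_frontiers(grid):
    """A free cell (0) that borders an unknown cell (-1) is a frontier."""
    frontiers = []
    for y in range(grid.shape[0]):
        for x in range(grid.shape[1]):
            if grid[y, x] != 0:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1] and grid[ny, nx] == -1:
                    frontiers.append((y, x))
                    break
    return frontiers
```

In the real system the radius is derived from the robot's footprint divided by the map resolution, so C-space grows with the physical size of the bot.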
The known space as the robot explores more of the field. The blue line through the robot shows the path it is attempting to follow. Paths were broken into shorter segments so the robot could re-check its position in the surrounding space before planning the next segment, which prevents the physical bot's movements from desyncing from where it thinks it is. Diagram from RViz.
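The segmented-path idea amounts to a simple loop: plan, drive only a few waypoints, re-check the pose, replan. A minimal sketch, where `plan_path`, `drive_to`, and `get_pose` are hypothetical stand-ins for the real ROS services and actions:

```python
def follow_path_in_segments(goal, plan_path, drive_to, get_pose, segment_len=5):
    """Drive a long path a short segment at a time, replanning after each.

    plan_path(start, goal) returns a list of waypoints, drive_to(waypoint)
    moves the robot, and get_pose() returns the current estimated pose.
    Replanning every few waypoints keeps odometry drift from accumulating.
    """
    pose = get_pose()
    while pose != goal:
        path = plan_path(pose, goal)
        for waypoint in path[:segment_len]:   # drive only the next few cells
            drive_to(waypoint)
        pose = get_pose()                     # re-check before replanning
    return pose
```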
The completed map after exploring the whole field. There are some anomalies (presumed free space extending out from behind walls), but these are not an issue: walls separate them from the inner free space, and with C-space added the robot will never attempt to navigate there. Diagram from RViz.
After exploring the whole field, the robot is picked up and placed at an unknown location. The green dots are live LiDAR data, and they clearly do not line up with the walls because the robot is not actually in the position it thinks it is. The robot will localize itself by matching the LiDAR data against the walls to determine its true position. Diagram from RViz.
Once the robot has estimated its position well, it can continue operating: it is now localized, so the position it thinks it is in matches the position of the physical robot. Diagram from RViz.
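One Monte Carlo localization update can be sketched as a particle filter, here in 1-D for brevity. The `likelihood` callable stands in for a real scan-to-map matcher, and the motion-noise level is an arbitrary assumption:

```python
import random

def mcl_step(particles, motion, measurement, likelihood):
    """One Monte Carlo localization update over a list of pose hypotheses.

    likelihood(pose, measurement) scores how well the LiDAR scan would
    match the map if the robot were at that pose.
    """
    # 1. Motion update: shift every particle by odometry, plus a little noise.
    moved = [p + motion + random.gauss(0, 0.05) for p in particles]
    # 2. Weight each particle by how well the scan matches from its pose.
    weights = [likelihood(p, measurement) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: poses that explain the scan survive, the rest die out.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating this step as the robot moves collapses an initially spread-out cloud of hypotheses onto the true pose, which is what the RViz view above shows once the scan lines up with the walls again.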
A* path planning demonstrated in an early version of our code. Red dots are path nodes, the red line is the path, black dots are walls, purple dots are C-space, the pink dot is the goal, the orange dots are cells explored during A*, and the light pink dots are the frontier of the wave used to reach the goal. This run is entirely in simulation, without a real bot. Diagram from RViz.
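The A* search itself can be sketched on a 4-connected grid. This is a generic textbook version with a Manhattan heuristic, not our exact implementation:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = wall).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    def h(cell):  # Manhattan-distance heuristic, admissible on a 4-grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # break f-score ties without comparing parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue              # already expanded with a cheaper cost
        came_from[cell] = parent
        if cell == goal:          # walk parents back to rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        y, x = cell
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < g_cost.get((ny, nx), float("inf")):
                    g_cost[(ny, nx)] = ng
                    heapq.heappush(open_set, (ng + h((ny, nx)), ng, next(tie), (ny, nx), cell))
    return None
```

In the project the same search ran over the C-space-inflated map, so the returned cells were always safe for the robot's center to pass through.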