Saturday, February 15, 2014

Arduino + Matlab SLAM

Purpose:

    Explore the subject of simultaneous localization and mapping (SLAM) robots. I have separated this venture into several phases, each designed to build upon the previous one. Here is a short description of my plan:

Phase 1: Build a simple Arduino robot that gathers environment data from several ultrasonic distance sensors and begin trying to map the robot's surroundings.

Phase 2: Replace the Arduino with a Raspberry Pi and the ultrasonic sensors with an Xbox Kinect sensor, and begin implementing some of the more robust algorithms to make the mapping more accurate.

Phase 3: Replace the Raspberry Pi with an on-board computer and the Kinect sensor with LIDAR, and fully implement SLAM.

    This is a tentative plan and will involve quite a few steps between each phase, but it should be a good template for later revisions.

What is SLAM?

    As the name suggests, I am trying to map an environment and, at the same time, pinpoint the robot's location within that map. The difficulty with SLAM is that it presents something of a catch-22. In order to know where you are within an environment, you must first have a map of that environment to compare your sensor data against. However, you do not have a map and must build one, and in order to build one you must first know where you are within the map you are building [1].

    4 Steps of SLAM [1]:

  1. Acquire sensor data
  2. Discern from the data any landmarks (walls, chair legs, etc.)
  3. Compare found landmarks with known ones via data association
  4. Update location and map
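
    To make step 3 a little more concrete, here is a small MATLAB sketch of nearest-neighbor data association with a simple distance gate. The landmark coordinates and the 15 cm gate are made-up numbers for illustration, and a real implementation would weigh the match by measurement uncertainty rather than a fixed threshold.

    % Toy example of step 3: nearest-neighbor data association with a distance gate.
    % Landmark positions (x, y) in cm are made-up numbers.
    mapLandmarks = [0 100; 80 60; -50 120];     % landmarks already in the map
    observed     = [2 97; -47 125; 200 10];     % landmarks found in the latest scan

    gate = 15;                                  % max distance (cm) to accept a match
    for i = 1:size(observed, 1)
        diffs = mapLandmarks - repmat(observed(i,:), size(mapLandmarks, 1), 1);
        d = sqrt(sum(diffs.^2, 2));             % distance to every known landmark
        [dmin, j] = min(d);
        if dmin < gate
            fprintf('Observation %d matches map landmark %d (%.1f cm away)\n', i, j, dmin);
        else
            fprintf('Observation %d is a new landmark; add it to the map\n', i);
        end
    end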

Phase 1 Hardware

The system consists of two parts:
    (1) An Arduino-controlled robot
    (2) A laptop running Matlab


    The robot is controlled by an Arduino Mega 2560. I chose this over an Uno or a bare PIC microcontroller because, at this stage, I need a controller that is easy to set up, easy to program, and has enough pins for the robot's sensor payload and other hardware. An Arduino Motor Shield sits on the Mega to interface with the DC drive motors. The chassis is the DFRobot 2WD Mobile Platform, with a Prototyping Plate (also from DFRobot) added to make room for more hardware, so the robot is a two-wheel-drive, three-tier robot with the prototyping plate in the middle. The prototyping plate is home to the Mega + Motor Shield, the XBee, a breadboard to distribute power to the sensors, and a 9V battery to power the Mega. Figures 1, 2, and 3 show the robot from the front, left, and right sides, respectively (note: the ultrasonic array is not depicted). The top of the robot holds the ultrasonic sensor assembly, which includes the sensor array from Figures 4 and 5 as well as the servo motor that sweeps it around during data acquisition.


Figure 1: Front side of robot

Figure 2: Left side of robot



Figure 3: Right side of robot

4 Steps of SLAM: Phase 1 Implementation:


  Acquire sensor data

    The sensors I am using are Parallax PING))) ultrasonic distance sensors. I have three of them mounted on a fixture attached to a servo motor that sweeps the array through 180 degrees. Figures 4 and 5 show the configuration.

Figure 4: Top of sensor array. Sensors are angled 45 degrees apart.
Figure 5: Front of sensor array.

    The servo the sensors are attached to rotates through 180 degrees, so with the outer sensors angled 45 degrees to either side, a single sweep gives me a 270-degree view of the robot's surroundings.
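
    On the laptop side, each reading amounts to a servo angle, a sensor index, and a distance, which need to be converted into x-y points in the robot's frame before any mapping can happen. The MATLAB snippet below is a rough sketch of that conversion using the array geometry described above; the distances here are randomly generated stand-in data rather than real readings from the XBee/serial link.

    servoAngles = 0:5:180;                        % servo sweep positions, degrees
    numSteps = numel(servoAngles);
    dist = 50 + 20*rand(3, numSteps);             % stand-in distances, cm (replace with serial reads)
    sensorOffsets = [-45; 0; 45];                 % angular offset of each sensor on the fixture, degrees

    points = zeros(3*numSteps, 2);
    idx = 1;
    for k = 1:numSteps
        for s = 1:3
            theta = (servoAngles(k) + sensorOffsets(s) - 90) * pi/180;  % 0 rad = robot's forward direction
            points(idx, :) = dist(s, k) * [cos(theta), sin(theta)];     % [x y] in the robot frame
            idx = idx + 1;
        end
    end

    plot(points(:,1), points(:,2), '.');          % quick look at one sweep
    axis equal; xlabel('x (cm)'); ylabel('y (cm)');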

    Before writing this post, I did some validation testing to see whether these sensors were robust and reliable enough to produce good-looking data. During this testing I only used the sensor positioned at 90 degrees to gather data, so only 180 degrees of coverage will be shown. Figure 6 shows an overlay of the data I collected onto the surface I scanned (my workbench).


Figure 6: Distance readings overlaid on the workbench.
 
    This is the raw output of the ultrasonic sensor. I was pretty happy with the results of this test, considering it is only a $30 sensor giving me 2D data. I am hoping that with the addition of the other two sensors I will get better definition of objects.

    Before moving on with more data acquisition, I ran some simulations of how my dead-reckoning algorithms would hold up on the current platform. Unfortunately, I went the low-cost route for my motor encoders and they only have 10 ticks/rev (I know, it's horrid). So, I developed a small Matlab simulation that imports a true travel path and exports a measured path that includes measurement error, along with a path adjusted by a basic Kalman filter to better estimate the robot's position. Figure 7 shows a single run of the simulation and Figure 8 shows the error that accumulated during the run.
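
    For reference, the core of that simulation looks something like the sketch below: a differential-drive robot follows a true path, the wheel travel is quantized to the encoders' 10 ticks per revolution, and the quantized ticks are integrated back into a dead-reckoned pose. The wheel diameter, wheelbase, and commanded speeds are stand-in numbers rather than measurements from my robot, and the Kalman filter step is left out to keep the sketch short.

    % Dead-reckoning error from coarse encoders (10 ticks/rev), simplified sketch
    ticksPerRev = 10;
    wheelDia    = 6.5;                 % wheel diameter, cm (stand-in value)
    wheelBase   = 15;                  % distance between wheels, cm (stand-in value)
    cmPerTick   = pi*wheelDia / ticksPerRev;

    dt = 0.1; N = 300;
    vL = 10*ones(1,N); vR = 11*ones(1,N);            % wheel speeds, cm/s (gentle turn)

    truePose = zeros(3,N); measPose = zeros(3,N);    % pose = [x; y; heading]
    residL = 0; residR = 0;                          % wheel travel not yet reported as a tick
    for k = 2:N
        % true pose update (differential drive)
        v = (vL(k)+vR(k))/2;  w = (vR(k)-vL(k))/wheelBase;
        truePose(:,k) = truePose(:,k-1) + dt*[v*cos(truePose(3,k-1)); v*sin(truePose(3,k-1)); w];

        % encoder measurement: wheel travel quantized into whole ticks
        residL = residL + vL(k)*dt;  residR = residR + vR(k)*dt;
        tL = floor(residL/cmPerTick);  tR = floor(residR/cmPerTick);
        residL = residL - tL*cmPerTick;  residR = residR - tR*cmPerTick;

        % dead-reckoned pose from the quantized tick counts
        dL = tL*cmPerTick;  dR = tR*cmPerTick;
        dC = (dL+dR)/2;  dTh = (dR-dL)/wheelBase;
        measPose(:,k) = measPose(:,k-1) + [dC*cos(measPose(3,k-1)); dC*sin(measPose(3,k-1)); dTh];
    end

    plot(truePose(1,:), truePose(2,:), 'b', measPose(1,:), measPose(2,:), 'r--');
    legend('true path', 'dead-reckoned path'); axis equal;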


Figure 7: Single run of the dead-reckoning simulation. The estimated path is Kalman filtered.

Figure 8: Resultant error between the estimated position and the true position.


    From Figure 8, it is clear that because the encoders have such low resolution, the largest impact is going to be on the robot's heading estimate.


[1] http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf

6 comments:

  1. Great post. Thanks for taking the time to write this up. I was thinking of doing something similar with an Arduino robot I'm working on for the next couple of weeks. However, instead of upgrading the on-board computer, I thought I would Bluetooth the data over to a device that can generate a map for me. It's just a quick undergrad project, so I don't have a lot of time to pour into it, and any work I'm putting into this part of the project is considered bonus territory. However, I'm quickly realizing that there may not be a quick implementation of a SLAM solution. It actually has me thinking the professor is considering a rudimentary SLAM system as a follow-up project.

  2. - What is the purpose of using a filter?
    - How do you ensure that the bot covers the entire area concerned?
    - How do you keep track of the robot's position in the environment?

    Replies
    1. I've updated this project in later posts; however, I have been rather busy lately and unable to work on it. The purpose of a properly implemented Kalman filter is to take into account the uncertainty that exists in robotics. This uncertainty stems from noisy measurement data, uncertain control commands, etc. Every time the robot moves, there is some error between the heading and position I commanded and where it actually ends up. The Kalman filter accounts for this uncertainty to better estimate the current position and, given enough measurement sources, will ultimately result in a less uncertain position.

      As of right now I am not worried about the robot covering the entire area, just getting the robot to perform SLAM and then possibly having it cover the entire area of my apartment. Covering an area multiple times is beneficial because if the software is able to detect loop closures, i.e. times when it is viewing landmarks it has seen before, then it can use the current measurements of those landmarks together with their previously estimated positions to update the trajectory. This can potentially remove quite a lot of error and align trajectories nicely.

      As of right now, I do not have a good way of tracking the robot's position apart from the filter on board; I do not have a means of tracking ground truth. One way you could possibly do this is to place a fiducial marker on the rover and observe it in the environment with a Kinect sensor. You can track the fiducial marker in the video stream and use the depth information to get its position relative to the last frame, and thus extract a less noisy trajectory that way. This method is still not exact, but it could do the job for a comparison.

    2. Please give me a video for implementing the project.

  3. Great post! I would like to replicate this project. Can you please provide the code?

  4. Could you share the Phase 1 code with me, please?
