One of the challenges in developing a localization solution for a mobile robot is knowing when you are making things better versus making things worse. We felt this acutely while developing Savage Solder for the Sparkfun AVC event, especially as we were having intermittent GPS issues. Specifically, we want to know how well we can localize in the absence of GPS, or with poor GPS quality. In this series I’ll describe a new technique we’re using to make the process go a little faster and to give us a better understanding of just how accurate we are.
As a short refresher, the localization software in a mobile robot incorporates measurements from the robot’s sensors and produces an estimate of where the robot is in one or more reference frames. Each sensor has errors that cause it to report non-ideal values; there can be many types of error, and their magnitudes can be relatively large. A good solution will report an accurate position even when the sensors aren’t so good.
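To make that data flow concrete, here is a minimal sketch of a localizer interface in Python. It is not Savage Solder’s actual estimator: the class, the sensor update methods, and the simple GPS blending gain are all hypothetical, chosen just to show measurements going in and a pose estimate coming out.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose:
    """Estimated vehicle state in a fixed world frame (hypothetical layout)."""
    x: float = 0.0        # meters
    y: float = 0.0        # meters
    heading: float = 0.0  # radians


class SimpleLocalizer:
    """Toy estimator: integrate odometry, nudge toward GPS when available."""

    def __init__(self, gps_gain: float = 0.2):
        self.pose = Pose()
        self.gps_gain = gps_gain  # how strongly a GPS fix pulls the estimate

    def update_odometry(self, distance: float, heading: float) -> None:
        """Dead-reckon forward from wheel odometry and a heading sensor."""
        self.pose.heading = heading
        self.pose.x += distance * math.cos(heading)
        self.pose.y += distance * math.sin(heading)

    def update_gps(self, x: float, y: float) -> None:
        """Blend an (imperfect) absolute position fix into the estimate."""
        self.pose.x += self.gps_gain * (x - self.pose.x)
        self.pose.y += self.gps_gain * (y - self.pose.y)
```

Every sensor feeding this loop is noisy in its own way, which is exactly why judging the quality of the output is hard.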
The “simple” way
A simple technique is to make a change, let the car drive the course, and visually observe how close it stayed to the planned path. Lather, rinse, and repeat. Once you get into it, this technique breaks down rapidly:
- “How good is it” is very subjective: As the car drives an entire course, it can be hard to remember exactly how close it was at every point. You also can’t get much in the way of numerical results, just a gut feel.
- Visual observation incorporates many more error sources than localization: Trajectory tracking errors, as well as remote sensor positioning errors (if the vehicle is avoiding observed obstacles), can cause the robot to drive somewhere other than where you expect while still knowing accurately where it is.
- Time: One of the easiest ways to waste a lot of time developing a robot is to drive it in real life after every software change. This causes development to slow to a crawl.
- Recordkeeping: Somewhat a corollary of the first item: once you have tried a few techniques, and a few constant tunings for each, you’ll rapidly lose track of how each variant performed unless your recordkeeping is meticulous.
- Canceling errors: One common variant is to measure where the vehicle stops at the end of a run. Many sources of error can push the estimate in opposite directions and cancel out by the end, so this can over-state the actual accuracy of the localization solution mid-run.
It isn’t hard to see how the “simple” technique for improving a localization solution isn’t so simple after all.
What would be better?
Any solution worth using should have the following properties:
- Operate on recorded data: Working purely from recorded data means you can iterate rapidly, and even tune constants in a script if need be (see the sketch after this list).
- Consistent metric: A solution should provide a hard number or set of numbers that describe the quality of the localization solution.
- Onboard sensors only: While it is possible to measure against an auxiliary positioning system of higher quality than anything that could be mounted on the robot, it is preferable if only the onboard sensors are required.
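As a rough illustration of the first two properties, here is a hypothetical offline harness that replays a recorded sensor log through the toy localizer sketched earlier and reports a single error number. The CSV column names and the use of raw GPS disagreement as the metric are assumptions made only for this example; they are stand-ins, not the technique this series will develop.

```python
import csv
import math


def replay_log(path: str, gps_gain: float) -> float:
    """Replay a recorded sensor log through SimpleLocalizer (from the sketch
    above) and return the RMS disagreement between the estimate and the raw
    GPS fixes.  Stand-in metric and log format only."""
    localizer = SimpleLocalizer(gps_gain=gps_gain)
    squared_errors = []
    with open(path) as f:
        for row in csv.DictReader(f):
            if row["type"] == "odometry":
                localizer.update_odometry(float(row["distance"]),
                                          float(row["heading"]))
            elif row["type"] == "gps":
                gx, gy = float(row["x"]), float(row["y"])
                dx = localizer.pose.x - gx
                dy = localizer.pose.y - gy
                squared_errors.append(dx * dx + dy * dy)
                localizer.update_gps(gx, gy)
    return math.sqrt(sum(squared_errors) / len(squared_errors)) if squared_errors else 0.0


if __name__ == "__main__":
    # Sweep a tuning constant over the same recorded run and compare metrics.
    for gain in (0.05, 0.1, 0.2, 0.4):
        print(gain, replay_log("recorded_run.csv", gps_gain=gain))
```

Because the whole loop runs from a file, sweeping a tuning constant is just a for loop in a script rather than another afternoon at the field.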
Next steps…
In the next post, I’ll begin developing the technique we’re using to measure localization accuracy and how it measures up to these properties.