Archives: Robots

pygazebo 3.0.0-2014.1

Yesterday I pushed the release of the newest version of pygazebo to github, 3.0.0-2014.1. This version has two big changes. The first is that I’ve updated all the message bindings for gazebo 3.0.0, which adds two new messages, CameraCmd and SphericalCoordinates. The second, and larger, change is a switch from eventlet to trollius for asynchronous coroutine operation.

trollius is an implementation of the python3 asyncio/tulip library for python2. It is largely API compatible, with the biggest exception being that as a library, trollius can’t support the new python3 “yield from” syntax. pygazebo internally uses an API compatible subset.
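
For a concrete sense of what the trollius style looks like (this is a generic sketch of the trollius API, not code from pygazebo), coroutines are declared with a decorator and use "yield From(...)" where python3 would use "yield from":

import trollius
from trollius import From

@trollius.coroutine
def tick():
    # "yield From(...)" is trollius's python2-compatible spelling of
    # python3's "yield from"; control is only given up at these
    # explicit points.
    for i in range(3):
        print('tick %d' % i)
        yield From(trollius.sleep(1.0))

loop = trollius.get_event_loop()
loop.run_until_complete(tick())
loop.close()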

The biggest reasons to switch were:

  • Explicit control flow: In the asyncio model, the only ways of yielding control are returning from a function or “yield”-ing from a coroutine. This means that you can reason locally about whether synchronization is appropriate, even when calling other methods.
  • Integration with other event loops: While eventlet was nominally capable of being integrated with other event loops, in practice it wasn’t very easy and no one did it. The best integration strategy was usually to poll at a high frequency in another event loop and pump the eventlet loop. asyncio already has a draft event loop to interoperate with GLib (gbulb), and writing a basic one to interoperate with Qt was not challenging.
  • Unit test coverage: Eventlet (and other greenlet based systems) had a long-standing issue where most python unit test systems could not accurately track test coverage. trollius and asyncio seem to work just fine here.

The new release is on github and pypi, so a newer pygazebo is only a “pip install pygazebo” away.

Savage Solder AVC 2014 Results

AVC 2014 has come and gone! Savage Solder placed 2nd in the doping class this year, with two of three successful runs. The first run ended after about 5 feet when the replacement ESC seemed to overheat after being left on for too long before the starting line. The second run was flawless, hitting the hoop and the ramp. The third run was nearly perfect, only missing the hoop.

Congrats to all the other teams, there were a lot of successful and inventive entries this year!

I’ve found two good videos of the third run so far. The first is from the official livestream, at time 12:00:

https://www.youtube.com/watch?time_continue=720&v=l2fHt7VxhlE

And the second from hearthdragon:

The Mech Moves!

This video shows some gaits for our Mech Warfare entrant, tentatively named “Super Mega Microbot”. First there is a slow statically stable gait, then a faster two leg at a time gait.

Technically, it has been walking for a couple of weeks, but this is the first time with the actual aluminum machined chassis plates, so it makes for a nice video. All the previous testing was done with some polycarbonate plates that I cut out by hand. We haven’t mounted the onboard computer or turret yet, but that is coming soon. In their place is an extra lipo battery to make the center of gravity more reasonable. This gait was driven by a PC over the serial cable seen in the frame.

Mech progress...

A few pictures from the in-progress mech build…

Aluminum chassis plates

Lower chassis with some boards populated

Mockup assembly of chassis minus turret and computer

Updated baseboard for Savage Solder

One of the improvements we made for this year’s Savage Solder entry into the Sparkfun AVC (and maybe other competitions, I’m looking at you Vecna Robot Race), is an updated baseboard. Our old baseboard was a Teensy 2.0+ soldered onto a hand-built protoboard and then wired up to an external IMU mounted on the front of the chassis. For fun, I decided to replace that board with a more integrated unit which installs more cleanly into the car chassis and is based on the firmware I’m developing for the ARR.

Hardware

The board hardware includes:

  • 3 servo inputs: The Savage Flux RX receiver has 3 outputs, for throttle, brake, and an auxiliary channel.
  • 4 servo outputs: We are only using 2 channels currently, but it was trivial to wire up 2 more.
  • Pins and mounting for a stacked MiniIMUv2: Rather than requiring a long cable run and dedicated mounting for the IMU, it is now integrated into the baseboard. The I2C bus can be run at 400kHz in this configuration, and thus the IMU can be sampled at a higher rate if necessary.
  • USB device port for connection to host PC: We use a small form factor Intel motherboard as a host computer; the USB connection, while not particularly reliable, is sufficient for the competitions Savage Solder enters.
  • Sensored motor feedback for odometry: This provides the primary odometry for the car.
  • General purpose digital inputs for emergency switch, and bumper switches: Our emergency override switch and robomagellan bump switches can all be treated as simple logic inputs.
  • Battery voltage sense: One ADC channel was wired to a connector to measure the drive battery voltage.

oshpark.com rendering

This was another oshpark.com job, and the layout was relatively straightforward. The biggest wrinkle was that the first revision was non-functional due to a broken AVR Eagle package. Who would have guessed that the no-name library I downloaded from some shady site didn’t have a ground pad? The crazier thing was that it kind of worked, even though all the vias it placed under the part were shorted out by the pad.

Baseboard fully populated

The board was mechanically designed to fit into an otherwise unused space forward of the steering servos on our Savage Flux chassis. There are two screws there that attach the chassis to some suspension components on the underside. I just replaced those two screws with slightly longer ones, and added a spacer underneath.

Baseboard installed in Savage Solder

Firmware

As I mentioned before, the firmware was based on the one I am creating for the ARR. The basic architecture is the same: a master computer communicates with the baseboard over USB, and the baseboard provides high rate servo control, I2C communication, and other miscellaneous functionality. I’ve written up the basic servo functionality and the engineering test fixture for the firmware previously. Compared to our previous controller, the servo control is much improved, both in the precision and accuracy of input and output and in the number of available channels. While the firmware supports 8 input and 8 output channels, the Savage Solder baseboard only has 3 inputs and 4 outputs routed to connectors.

I added a few new pieces of functionality to make the firmware usable for Savage Solder, namely:

  • Configurable Emergency Passthrough / Override: The controller can be configured with two very short functional scripts which evaluate various servo and GPIO conditions. One determines whether all servos should be placed in the pass-through configuration, and the other determines whether computer control should be allowed. These support both the Savage Solder emergency safety scheme and whatever scheme I end up using on the ARR in the future.
  • Encoder operation: Savage Solder uses the sense leads from our sensored brushless motor as an encoder. The firmware measures these to report total distance traveled. Unlike the old firmware, it now counts in true quadrature, giving 4 times the resolution (see the sketch after this list).
  • GPIO operation: The board exposes control and sensing of a few general purpose input and output pins. They are used to control the LEDs on the board, sense the onboard emergency override button, and sense the bump switches we use for robomagellan style competitions.
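
To illustrate what counting in “true quadrature” means, here is a minimal Python sketch of the 4x decode (the real firmware is AVR C; the table and class below are illustrative, not the actual firmware):

# Illustrative 4x ("true") quadrature decode: every edge on either
# motor sense lead updates the count, giving 4 counts per quadrature
# cycle instead of 1.

# Map (previous AB state, new AB state) -> count delta.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureCounter(object):
    def __init__(self):
        self.state = 0b00
        self.count = 0

    def update(self, a, b):
        """Call with the current logic level of each sense lead."""
        new_state = (a << 1) | b
        # No change, or a missed edge, contributes nothing.
        self.count += _TRANSITIONS.get((self.state, new_state), 0)
        self.state = new_state

counter = QuadratureCounter()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:  # one full cycle forward
    counter.update(a, b)
print(counter.count)  # -> 4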

Testing

The baseboard has been installed in the car since late July 2013. Yes, this post is very belated! In that time, it has been used for all of our testing and appears to be working quite well. Or rather, it hasn’t been the source of any problems yet!

Savage Solder: Measuring Localization Accuracy Part 3

In Part 1 and Part 2, I described why we’re trying to measure localization accuracy and the properties of a GPS receiver that allow us to do so. In this post, I’ll describe the technique we used to measure the accuracy of our solution purely from recorded data, without needing to go back out to the field every time a change is made.

Our solution

The technique we used to measure localization accuracy is somewhat similar to the Allan Variance plots used in part 2. Here, we take a large corpus of pre-recorded sensor data from the vehicle and re-run it through the localization solution. The trick is that for a given time window, the GPS updates are withheld from the filter; then, at the end of the window, the difference between the estimated position and the measured GPS position is recorded. The cycle then starts anew at the current time, with the estimate being reset to the best available one and GPS denied until the end of the next window. Each sampled error is one data point showing how far off the localization solution can be after that much time with no GPS.

We expect this to be effective because, as the plots in part 2 showed, over short time windows the average drift in the GPS is actually pretty small. For instance, the u-blox 6 on Savage Solder, within a 5s time window, will have drifted only about 0.6m with 95% confidence.

Once the results have been collated for a given time window, say 1s, we repeat the entire process for 2s, then 3s, etc. The curves this produces show how rapidly the position error in localization grows with time. The lower the value at longer time intervals, the more robust the vehicle is to GPS outages or drift. A rough sketch of the whole procedure is below.
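
As a minimal sketch of this evaluation (the filter interface, record format, and names here are hypothetical, not the actual Savage Solder code), the replay loop looks something like:

import math

# Replay logged data, withholding GPS for window_s at a time, and
# collect one error sample per completed window: the distance between
# the dead-reckoned estimate and the withheld GPS fix at window end.
def measure_gps_denied_error(records, make_filter, window_s):
    errors = []
    filt = make_filter()
    window_end = records[0].timestamp + window_s
    for record in records:
        if record.kind == 'gps' and record.timestamp >= window_end:
            # End of a GPS-denied window: score the drift, then reset
            # the estimate to the best available one and start over.
            est_x, est_y = filt.position()
            gps_x, gps_y = record.position
            errors.append(math.hypot(est_x - gps_x, est_y - gps_y))
            filt.reset(record.position)
            window_end = record.timestamp + window_s
        elif record.kind != 'gps':
            # IMU / odometry updates are always applied.
            filt.handle(record)
        # GPS records inside the window are simply dropped.
    return errors

# Sweeping the window size produces the error-vs-outage-length curve:
#   for window_s in [1.0, 2.0, 3.0, 5.0, 10.0, 15.0]:
#       errors = measure_gps_denied_error(records, make_filter, window_s)
#       report_percentiles(window_s, errors)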

Results on Savage Solder’s 2013 AVC software

A plot of our 2013 AVC localization solution’s accuracy is shown below. It was measured over about 30 minutes of autonomous driving, mostly recorded in the weeks leading up to the 2013 AVC. I have superimposed on it the 68% and 95% confidence bounds of the u-blox drift for reference. If the localization solution were perfect, we would expect the measured errors to approximately line up with the GPS drift over the same time interval.

Savage Solder AVC 2013 Localization Accuracy

This shows that the accuracy isn’t terrible, but isn’t particularly great either. After 15 seconds, it is off by less than 2m two thirds of the time. However, in order to capture the best 95% of results, we have to look all the way out to 7.5m, which clearly isn’t too usable. For a course like the Sparkfun AVC one, you can roughly say that errors larger than 2 or 3 meters will result in a collision with something. This implies that Savage Solder can run for about 3 to 5 seconds with no GPS and be not terrible.

We have a couple of theories for where the largest sources of error are in the system as shown in the above plot:

  • Initial heading error: For all of these data sets, the car has only a very rough knowledge of its heading when starting out and all information about the heading comes from GPS. Even a small initial heading error will result in large position errors early in each run.
  • Total state filter: As described before, the localization solution used during 2013 was a total-state Kalman filter. I expect that switching to an error-state formulation could improve the performance.
  • Improved inertial sensors: This can’t strictly be tested after the fact, but there now exist easily obtainable higher quality gyroscopes and accelerometers than the Pololu MiniIMU v2 we used in 2013.

Recap of measuring localization accuracy

Looking back at part 1, this technique measures up pretty well. It:

  1. requires only data recorded on the robot, it
  2. provides hard numeric results (within the limits of the GPS’s short term drift), and it
  3. requires no additional sensors

You can tweak the localization algorithms in software as many times as necessary, each time accurately assessing the results, and never once need to go out and actually drive the robot around.

pygazebo - First release

I managed to clean up the python bindings to Gazebo I wrote when testing mech gaits in simulation, and have released them publicly as pygazebo.

All you have to do is:

pip install pygazebo

From the README, a simple example:

import eventlet

import pygazebo.msg.joint_cmd_pb2
from pygazebo import Manager

# Connect to the Gazebo server and advertise a joint command topic.
manager = Manager()
publisher = manager.advertise('/gazebo/default/model/joint_cmd',
                              'gazebo.msgs.JointCmd')

# Build a JointCmd message applying a constant force to axis 0.
message = pygazebo.msg.joint_cmd_pb2.JointCmd()
message.axis = 0
message.force = 1.0

# Publish the command once a second.
while True:
    publisher.publish(message)
    eventlet.sleep(1.0)

and you’re off and ready to code!

Savage Solder: Measuring Localization Accuracy Part 2

In Part 1, I discussed how measuring the accuracy of a localization solution in a mobile robot is challenging, and some properties an ideal solution would have. This time, I’ll describe some of the properties of the GPS receiver Savage Solder uses, to motivate our mechanism for using it to measure localization accuracy.

Principles

The basic idea behind our approach is that the GPS mounted on Savage Solder, while relatively inaccurate in general, rarely has a very large error. And even when the error is large, it is usually only for a short window of time. Over time, these periods where the GPS has a lot of error come and go semi-randomly, which means that with enough data, they will tend to average out. To see how this works in a little more detail, let’s talk about the major sources of error that a GPS receiver can have.

NASA rendering of GPS satellite

Geometry and clock error: At any given instant, only a subset of the GPS satellites are visible to a receiver, and the configuration of those that are visible introduces error into the process of triangulation. For instance, if all the visible satellites are in the same part of the sky, measuring ranges to them will not tell you much about your absolute position. Secondly, each satellite may have differing errors in its onboard clock, each of which translates directly into range measurement errors. Both of these error sources change relatively slowly with time.

Ephemeris and atmospheric effects: To estimate its location, a receiver must have precise knowledge of each satellite’s orbit, or ephemeris. While this orbit is known relatively precisely, every centimeter of error directly corresponds to positioning error on the ground. Ephemeris errors typically change slowly over time, as space weather isn’t as drastic as Boston weather. Atmospheric effects have similar properties: the ionosphere is the primary factor, as it delays the signals propagating from the satellites to the receiver, and its effects also change relatively slowly with time.

Multipath and obstructions: When the line of sight between a satellite and the receiver is blocked by a tree, building, person, vehicle, or the horizon, the signal can weaken enough to mis-register. The receiver can also pick up reflections of the actual satellite signal from any of the above. These reflected signals are called “multi-path”, and they cause the receiver to measure the additional length of the reflected path instead of the true shortest path. As new satellites become available or are hidden, they can join or fall out of the solution. These errors can change rapidly for ground vehicles, where the line of sight to satellites can rapidly become clear or obstructed as the vehicle moves around.

Noise: Each measurement has some amount of random noise associated with it. Consumer receivers typically only measure the code phase, and not the carrier phase, so this measurement noise can be on the order of a meter or so for each satellite. It has mostly high frequency components.

Filtering: In order for the output to look more “reasonable”, most low-cost consumer receivers implement some sort of state estimation filtering before emitting any outputs. This smooths out noise components, and also smooths out rapid changes in multi-path or in which satellites are used in the solution. As a result, the final position can often seem smooth, but it has more absolute error at any given instant.

Windowed error measurement

To get an idea of the magnitude of each of these error types, we used a technique similar to Allan Variance to see the magnitude of error in the GPS solution over differing time scales. A long recording of reported GPS positions is made while the receiver is stationary. Then, it is divided into consecutive windows, say 0.2s long to start. Within each window the position is averaged, after which the change between consecutive window averages is measured. These deltas represent how much the receiver’s absolute offset has drifted in that time period. For the 0.2s size, you can then see how much the offset changes on average, or how much it changes 95% of the time.

Once you’ve done that for the first window size, you increase the window size, say to 0.3s and repeat the whole process. You keep increasing the window size until you can only fit a few bins into the recorded trace.
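
A minimal sketch of this computation (assuming the recording is just a list of (east, north) positions in meters sampled at a fixed rate; the helper names are illustrative):

import math

# Average positions within consecutive fixed-size windows and return
# the distance between each pair of adjacent window averages.
def windowed_drift(positions, sample_rate_hz, window_s):
    n = int(round(window_s * sample_rate_hz))
    averages = []
    for start in range(0, len(positions) - n + 1, n):
        window = positions[start:start + n]
        averages.append((sum(p[0] for p in window) / n,
                         sum(p[1] for p in window) / n))
    return [math.hypot(b[0] - a[0], b[1] - a[1])
            for a, b in zip(averages, averages[1:])]

# Simple percentile helper, e.g. fraction=0.95 for the 95% bound.
def percentile(values, fraction):
    ordered = sorted(values)
    return ordered[int(fraction * (len(ordered) - 1))]

# Sweep the window size to build the stability-versus-interval curve:
#   for window_s in [0.2, 0.5, 1.0, 5.0, 60.0, 3600.0]:
#       deltas = windowed_drift(positions, 5.0, window_s)
#       print(window_s, percentile(deltas, 0.5), percentile(deltas, 0.95))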

What we expect to see is something like the following:

Typical GPS relative error plot

At very high frequencies (short time intervals), the filtering on the receiver renders the errors small. This means that on average, the position doesn’t change much over short intervals. Then, as the time interval gets up into the 5 to 60 minute range, the error rapidly increases as the effects of atmospheric, ephemeris, and multipath errors are realized. Eventually, the error will peak, at a time interval which depends upon what the worst error contributor is. Finally, as the time grows to infinity, we would expect to see the error drop off, as averaging over such large time intervals tends to reveal the zero-drift property of GPS.

We ran this experiment on the u-blox 6 GPS used on Savage Solder and a high quality dual frequency receiver outfitted with Omnistar G2 as a reference. The u-blox was very crudely weatherized for long term outdoor recording with a disposable tupperware container. A recording at full data rate for each GPS was made over about 16 days of operation. Each GPS’s plot shows the median error and the maximum expected error for differing probabilities, which equate to about 1, 2, and 3 sigma on a normal distribution. (The non-weatherized u-blox was tested over a shorter duration and appeared to produce equivalent results.)

Time stability of u-blox 6 with WAAS versus Omnistar G2.

Analysis

The data was taken while stationary on a rooftop with clear 360 degree view of the sky, and thus has best case visibility. Results on an AVC style course will be worse, since multi-path and obstructions will be constantly changing. Despite that, we can get some lower bounds on how good the system could possibly be from these results.

For instance, for a time commensurate with a Sparkfun AVC course run (about 45 seconds for a fast vehicle), the u-blox can be expected to drift around 2.2 meters with 95% confidence. The maximum drift over any interval with 95% confidence is around 3.3m, which implies that it is dicey to survey the course ahead of time and expect the measurements to be useful. Also, the time required before averaging measurements actually starts to improve stability is pretty long. For the u-blox, it is around 1 hour, and even after looking at an entire day, the stability only gets down to around 24cm.

It is important to note that while the u-blox reports a GPS accuracy metric at any given time, it is usually extremely optimistic. For most of the above trace, the accuracy was reported as about 0.5m with a 1 sigma probability, when the measured absolute 1 sigma accuracy was clearly around 2m or more.

As a reference, the Omnistar G2 trace shows that yes, its performance is about 2 orders of magnitude better than the low-cost u-blox receiver. In these near-ideal conditions, it has a 95% confident maximum error of around 12cm, which means that it could be viable for hitting the hoop and ramp. However, as this is in ideal conditions, shading and multipath from the course, spectators, and other vehicles will certainly make actual results even worse.

Using this

In the next post, I’ll show how we used this knowledge of our GPS receiver’s error properties to measure the quality of our localization solution over short to medium time intervals.

PyPose + Gazebo for Mech Warfare

Mikhail and I are considering fielding an entry into Mech Warfare this season. To evaluate different geometries and servo models, I tried setting up a simple simulation environment which would let us experiment without having to have a large variety of hardware on hand. While we’re not finished, I have a minimal first proof of concept working now, which I’ll describe briefly.

Immortal, competing in Mech Warfare at the KC Maker Faire

Components

The simulator I’m using is Gazebo. It integrates several different rigid body physics engines, a 3D visualization environment, and a relatively simple file format for describing the configuration of robots. It uses a client server publish-subscribe model, where a central server maintains the physics simulation and any number of clients can connect to control or monitor individual models.

For gait generation, I’m starting with PyPose, which is a pose sequencer and inverse kinematics engine for the arbotiX controller, an open source Arduino compatible controller for Dynamixel servos. Specifically, the NUKE, or Nearly Universal Kinematics Engine, contains routines for generating a couple of different gait patterns for walking robots with lizard style legs.

Modifications

The nominal workflow I wanted was to operate PyPose on a synthetic robot, simulated by Gazebo. I had to add a couple of pieces of software to make that happen.

To start, Gazebo doesn’t really have a documented protocol for interacting with its publish-subscribe network. The primary way clients use it currently is through ROS. To make this work, I wrote up a simple python client library which implements the publish-subscribe protocol using eventlet. This allows python applications to both subscribe to topics and publish to them.
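
As a hypothetical sketch of what the subscribe side looks like with this library (the subscribe() signature and callback payload here are assumptions for illustration, not documentation of the library):

import eventlet
from pygazebo import Manager

manager = Manager()

# Assumed callback: called with the raw serialized message; a real
# application would parse it with the matching protobuf class.
def callback(data):
    print('received %d bytes' % len(data))

# Assumed signature: subscribe(topic, message_type, callback).
manager.subscribe('/gazebo/default/world_stats',
                  'gazebo.msgs.WorldStatistics',
                  callback)

# Let the eventlet hub service the connection.
while True:
    eventlet.sleep(1.0)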

Next, PyPose is very specialized to the Dynamixel servos and the arbotiX controller. Additionally, from a gait generation perspective, it is composed of effectively two independent parts. The first is an ahead-of-time pose sequencer and configuration tool written in python. The second is a generated Arduino sketch which implements the actual gait when run on an AVR controller. For the former, I hacked in a simple abstraction layer and connected it to the python Gazebo library. For the latter, I actually ported the generated C code back into python in order to test the generated gaits in simulation.

Results

At the moment, I have only a rough proof of concept… the code is hacky, I haven’t yet simulated the physical characteristics of any particular servo, the physics model doesn’t seem quite right yet, and the Gazebo model consists of nothing but jointed rectangles with no textures. Despite that, in the video below, you can still see both PyPose manipulating the model, and the python gait generator operating it.


Next up, I’ll be trying to polish off the rough edges, and then try to evaluate the robot configuration variants I actually wanted to validate.