Archives: 2014

First Firing Session

To act as a forcing function in our development of Super Mega Microbot, we put together a Mech Warfare style qualification video, showing it driving around for more than 5 minutes shooting at some targets. Here it is!

legtool now on github

legtool is the library and graphical application I’ve developed for inverse kinematics and gait generation to use with Super Mega Microbot. I’ve pushed a public version of it to github, in the hopes that it may be useful for other developers of legged robots.

I’ve also put together a short demonstration video, showing how to download, install, and use legtool with a gazebo simulated quadruped.


In lieu of actual documentation (or possibly to start generating the text of some), I’ve written some text below describing what legtool does and how to use it.

legtool overview and installation

legtool is a python library and PySide-based graphical application for pose generation, inverse kinematics, and gait generation on legged robots. It consists of a set of python libraries, and a graphical debugging application which uses those libraries to let a developer explore legged locomotion interactively. It has primarily been tested on Ubuntu linux, but may be portable to other operating systems.

legtool is not yet packaged for installation, so all you need to do is clone the github repository and install the dependencies as documented in the README.

git clone https://github.com/jpieper/legtool.git

legtool by default creates a configuration file in ~/.config; you can specify an alternate configuration on the command line with the -c option. If you have multiple robots, or both a simulation and an actual robot, you will want to keep multiple configurations around.

Servo tab

The first tab, the “servo” tab, is the one active at startup. It contains controls to select and configure the servos, along with controls to maintain a list of named poses, where each pose consists of a set of servo positions.

legtool servo tab

Here you can move servos individually to verify their operation, and compose poses. The only use for poses currently is in the later inverse kinematics tabs, so there is no need to define more than idle, minimum, and maximum.

Inverse kinematics tab

The second tab, the IK tab, lets you configure which servos are in which leg, what sign they have, and also the geometry of the legs. The lower right corner visualizes the achievable region of operation in 2D. You can select alternate projections, and change the centerpoint of the test using the IK Offset group. Clicking on the rendering will command that leg to that position.

legtool IK tab

Gait tab

The third, and currently final, tab contains controls for configuring the placement of all the shoulder joints, marking the center of gravity, and configuring the gait engine. It also provides several debugging facilities for visualizing the results of the gait engine.

legtool gait tab

The geometry configuration is in the upper left. The shoulder position can be set individually for each leg. The center of gravity and idle position are common to all legs. The gait configuration area in the lower left selects which gait to use, and then exposes a number of configuration options for that gait.

The remainder of the tab is devoted to interactive tools for debugging the gaits:

  • Gait graph: The gait graph in the upper middle shows, over the course of a single gait cycle, when each of the legs is in the air.
  • Geometry view: Visible in the upper right, this shows a 2D rendering of where the shoulders, legs, body, center of gravity, and support polygon are located.
  • Graphical command view: Visible in the lower right, the graphical command view shows which commands are feasible given the current geometry and gait configuration. Any two command axes can be selected, after which the feasible region is shown in green, regions where the speed is limited are in yellow, and totally infeasible regions are in red.
  • Playback: The playback scrubber allows you to slide back and forward through a gait cycle, visualizing the results in the geometry view or on the simulated or real robot.
  • Textual command area: All of the 9 degrees of command freedom are settable here, in addition to the two available in the graphical command view.

pygazebo 3.0.0-2014.1

Yesterday I pushed the release of the newest version of pygazebo to github, 3.0.0-2014.1. This version has two big changes. The first is that I’ve updated all the message bindings for gazebo 3.0.0, which adds two new messages, CameraCmd and SphericalCoordinates. The second, and larger, change is a switch from eventlet to trollius for asynchronous coroutine operation.

trollius is an implementation of the python3 asyncio/tulip library for python2. It is largely API compatible, with the biggest exception being that as a library, trollius can’t support the new python3 “yield from” syntax. pygazebo internally uses an API compatible subset.
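
To give a flavor of what the coroutine style looks like in user code, here is a rough sketch of the simple publisher example rewritten for trollius. The connect() and advertise() coroutine entry points shown here are assumptions based on the asyncio-flavored API; see the README of this release for the exact names.

import trollius
from trollius import From

import pygazebo
import pygazebo.msg.joint_cmd_pb2

@trollius.coroutine
def publish_loop():
    # Assumed coroutine entry points; check the pygazebo README for
    # the exact names in this release.
    manager = yield From(pygazebo.connect())
    publisher = yield From(
        manager.advertise('/gazebo/default/model/joint_cmd',
                          'gazebo.msgs.JointCmd'))

    message = pygazebo.msg.joint_cmd_pb2.JointCmd()
    message.axis = 0
    message.force = 1.0

    # Control is yielded back to the event loop only at each explicit
    # "yield From", which is the point of the switch.
    while True:
        yield From(publisher.publish(message))
        yield From(trollius.sleep(1.0))

trollius.get_event_loop().run_until_complete(publish_loop())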

The biggest reasons to switch were:

  • Explicit control flow: In the asyncio model, the only ways of yielding control are by returning from a function, or “yield”ing from a coroutine. This means that you can reason locally about whether synchronization is appropriate, even when calling other methods.
  • Integration with other event loops: While eventlet was nominally capable of being integrated with other event loops, in practice it wasn’t very easy and no one did it. The best integration strategy was usually to poll at a high frequency in another event loop and pump the eventlet loop. asyncio already has a draft event loop to interoperate with GLib (gbulb), and writing a basic one to interoperate with Qt was not challenging.
  • Unit test coverage: Eventlet (and other greenlet based systems) had a long-standing issue where most python unit test systems could not accurately track test coverage. trollius and asyncio seem to work just fine here.

The new release is on github and pypi, so a newer pygazebo is only a “pip install pygazebo” away.

Savage Solder AVC 2014 Results

AVC 2014 has come and gone! Savage Solder placed 2nd in the doping class this year, with two of three successful runs. The first run ended after about 5 feet when the replacement ESC seemed to overheat after being left on for too long before the starting line. The second run was flawless, hitting the hoop and the ramp. The third run was nearly perfect, only missing the hoop.

Congrats to all the other teams, there were a lot of successful and inventive entries this year!

I’ve found two good videos of the third run so far. The first is from the official livestream, at time 12:00:

https://www.youtube.com/watch?time_continue=720&v=l2fHt7VxhlE

And the second from hearthdragon:

The Mech Moves!

This video shows some gaits for our Mech Warfare entrant, tentatively named “Super Mega Microbot”. First there is a slow, statically stable gait, then a faster gait that moves two legs at a time.

Technically, it has been walking for a couple of weeks, but this is the first time with the actual machined aluminum chassis plates, so it makes for a nice video. All the previous testing was done with some polycarbonate plates that I cut out by hand. We haven’t mounted the onboard computer or turret yet, but that is coming soon. In their place is an extra lipo battery to make the center of gravity more reasonable. This gait was driven by a PC over the serial cable seen in the frame.

Mech progress...

A few pictures from the in-progress mech build…

Aluminum chassis plates

Lower chassis with some boards populated

Mockup assembly of chassis minus turret and computer

Updated baseboard for Savage Solder

One of the improvements we made for this year’s Savage Solder entry into the Sparkfun AVC (and maybe other competitions; I’m looking at you, Vecna Robot Race) is an updated baseboard. Our old baseboard was a Teensy 2.0+ soldered onto a hand-built protoboard and then wired up to an external IMU mounted on the front of the chassis. For fun, I decided to replace that board with a more integrated unit which installs more cleanly into the car chassis and is based on the firmware I’m developing for the ARR.

Hardware

The board hardware includes:

  • 3 servo inputs: The Savage Flux RX receiver has 3 outputs, for throttle, brake, and an auxiliary channel.
  • 4 servo outputs: We are only using 2 channels currently, but it was trivial to wire up 2 more.
  • Pins and mounting for stacked MiniIMUv2: Rather than have the IMU need a long cable run and dedicated mounting, it is now integrated into the baseboard. The I2C bus can be run at 400kHz in this configuration, so the IMU can be sampled at a higher rate if necessary (a rough bus-time estimate follows this list).
  • USB device port for connection to host PC: We use a small form factor Intel motherboard as the host computer; the USB connection, while not particularly reliable, is sufficient for the competitions Savage Solder enters.
  • Sensored motor feedback for odometry: This provides the primary odometry for the car.
  • General purpose digital inputs for emergency switch, and bumper switches: Our emergency override switch and robomagellan bump switches can all be treated as simple logic inputs.
  • Battery voltage sense: One ADC channel was wired to a connector to measure the drive battery voltage.
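
As a rough sanity check on the IMU sample rate claim above, here is a back-of-the-envelope estimate of the I2C bus time per IMU sample. The byte counts are assumptions (6 data bytes each for the gyroscope and accelerometer, roughly 3 bytes of addressing overhead per read), not measurements of the actual firmware.

# Hypothetical I2C timing estimate, not code from the baseboard firmware.
BITS_PER_BYTE = 9       # 8 data bits plus the ACK bit
OVERHEAD_BYTES = 3      # device address, register address, repeated address
DATA_BYTES = 6          # one 3-axis sensor read, 2 bytes per axis
READS_PER_SAMPLE = 2    # gyroscope plus accelerometer

def sample_time_s(bus_hz):
    bits = READS_PER_SAMPLE * (OVERHEAD_BYTES + DATA_BYTES) * BITS_PER_BYTE
    return bits / float(bus_hz)

print(sample_time_s(100000))   # ~1.6ms per sample at 100kHz
print(sample_time_s(400000))   # ~0.4ms per sample at 400kHz, comfortably
                               # supporting sample rates above 1kHz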

oshpark.com rendering

This was another oshpark.com job, and the layout was relatively straightforward. The biggest wrinkle was that the first revision was non-functional due to a broken AVR Eagle package. Who would have figured that the no-name library I downloaded from some shady site didn’t have a ground pad? The crazier thing was that it kind of worked, even though all of the vias it placed under the part were shorted out by the pad.

Baseboard fully populated

The board was mechanically designed to fit into an otherwise unused space forward of the steering servos on our Savage Flux chassis. There are two screws there that attach the chassis to some suspension components on the underside. I just replaced those two screws with slightly longer ones, and added a spacer underneath.

Baseboard installed in Savage Solder

Firmware

As I mentioned before, the firmware was based on the one I am creating for the ARR. The basic architecture is the same, in that a master computer communicates with the baseboard over USB, and the baseboard provides high rate servo control, I2C communication, and other miscellaneous functionality. I’ve written up the basic servo functionality and the engineering test fixture for the firmware previously. Versus our previous controller, the servo control is much improved, both in the precision and accuracy of input and output, and in the number of channels available. While the firmware supports 8 input and 8 output channels, the Savage Solder baseboard only has 3 inputs and 4 outputs routed to connectors.

I added a few new pieces of functionality to make the firmware usable for Savage Solder, namely:

  • Configurable Emergency Passthrough / Override: The controller can be configured with two very short functional scripts which evaluate various servo and GPIO conditions. One determines whether all servos should be placed in the pass-through configuration, and the other determines whether computer control should be allowed. These support both the Savage Solder emergency safety scheme and whatever scheme I end up using on the ARR in the future.
  • Encoder operation: Savage Solder uses the sense leads from our sensored brushless motor as an encoder. The firmware measures these to report total distance traveled. Versus the old firmware, it now counts in true quadrature, giving 4 times the resolution (see the sketch after this list).
  • GPIO operation: The board exposes control and sensing of a few general purpose input and output pins. They are used to control the LEDs on the board, sense the onboard emergency override button, and sense the bump switches we use for robomagellan style competitions.
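
To illustrate what counting in true quadrature means, here is a small python model of the decoding logic. This is not the AVR firmware itself, and the direction convention depends on how the sense leads are wired; the point is that every edge on either sense line produces a count, which is where the 4x resolution comes from.

# Python model of 4x quadrature counting (illustrative only).  The two
# sense lines A and B form a 2-bit gray code; every valid transition of
# either line adds or subtracts one count.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureCounter(object):
    def __init__(self):
        self.state = 0b00
        self.count = 0

    def sample(self, a, b):
        """Call with the current logic levels of the two sense lines."""
        new_state = (a << 1) | b
        # Transitions where both lines change at once are ignored.
        self.count += TRANSITIONS.get((self.state, new_state), 0)
        self.state = new_state
        return self.count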

Testing

The baseboard has been installed in the car since late July 2013. Yes, this post is very belated! In that time, it has been used for all of our testing and appears to be working quite well. Or rather, it hasn’t been the source of any problems yet!

Savage Solder: Measuring Localization Accuracy Part 3

In Part 1 and Part 2, I described why we’re trying to measure localization accuracy, and the properties of a GPS receiver that allow us to do so. In this post, I’ll describe the technique we used to measure the accuracy of our solution purely from recorded data, without needing to go back out to the field every time a change was made.

Our solution

The technique we used to measure localization accuracy is somewhat similar to the Allan Variance plots used in part 2. Here, we take a large corpus of pre-recorded sensor data from the vehicle and re-run it through the localization solution. The trick is that for a given time window, the GPS updates are withheld from the filter; then, at the end of the window, the difference between the estimated position and the measured GPS position is recorded. The cycle then starts anew at the current time, with the estimate reset to the best available one, and GPS denied again until the end of the next window. Each sampled error is one data point showing how far off the localization solution can be after that much time with no GPS.

We expect this to be effective because, as the plots in part 2 showed, the average drift in the GPS over short time windows is actually pretty small. For instance, within a 5s time window, the u-blox 6 on Savage Solder will have drifted only about 0.6m with 95% confidence.

Once the results have been collated for a given time window, say 1s, we repeat the entire process for 2s, then 3s, and so on. The curves this produces show how rapidly the localization error grows with time. The lower the error at the longer time intervals, the more robust the vehicle is to GPS outages or drift.
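
In rough python pseudocode, the measurement looks something like the sketch below. The log and filter interfaces (split_into_windows, sensor_samples, and so on) are hypothetical stand-ins for our actual tools, but the structure of the computation is the same.

import numpy

def gps_denied_errors(log, make_filter, window_s):
    """Replay a recorded log, withholding GPS for windows of window_s
    seconds, and return the position error at the end of each window.
    The log/filter interfaces used here are hypothetical."""
    errors = []
    for window in log.split_into_windows(window_s):
        # Reset the estimate to the best available one at the window start.
        filt = make_filter(window.initial_state())
        for sample in window.sensor_samples():   # IMU, odometry, etc.
            filt.update(sample)                  # GPS withheld in here
        error = numpy.linalg.norm(
            filt.position() - window.final_gps_position())
        errors.append(error)
    return errors

def sweep(log, make_filter):
    # Sweep the window length and summarize each set of errors with
    # its 68% and 95% confidence bounds.
    for window_s in range(1, 16):
        errors = gps_denied_errors(log, make_filter, window_s)
        print(window_s,
              numpy.percentile(errors, 68.0),
              numpy.percentile(errors, 95.0))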

Results on Savage Solder’s 2013 AVC software

A plot of our 2013 AVC localization solution’s accuracy is shown below. It was measured over about 30 minutes of autonomous driving, mostly recorded in the weeks leading up to the 2013 AVC. I have superimposed the 68% and 95% confidence bounds of the u-blox drift for reference. If the localization solution were perfect, we would expect the measured errors to approximately line up with the GPS drift over the same time interval.

Savage Solder AVC 2013 Localization Accuracy

This shows that the accuracy isn’t terrible, but isn’t particularly great either. After 15 seconds, it is off by less than 2m two thirds of the time. However, to capture 95% of the results, we have to look all the way out to 7.5m, which clearly isn’t very usable. For a course like the Sparkfun AVC one, you can roughly say that errors larger than 2 or 3 meters will result in a collision with something. This implies that Savage Solder can run for about 3 to 5 seconds with no GPS and still not do terribly.

We have a couple of theories for where the largest sources of error are in the system as shown in the above plot:

  • Initial heading error: For all of these data sets, the car has only a very rough knowledge of its heading when starting out and all information about the heading comes from GPS. Even a small initial heading error will result in large position errors early in each run.
  • Total state filter: As described before, the localization solution used during 2013 was a total-state Kalman filter. I expect that switching to an error-state formulation could improve the performance.
  • Improved inertial sensors: This can’t strictly be tested after the fact, but there now exist easily obtainable higher quality gyroscopes and accelerometers than the Pololu MiniIMU v2 we used in 2013.

Recap of measuring localization accuracy

Looking back at part 1, this technique measures up pretty well. It:

  1. requires only data recorded on the robot, it
  2. provides hard numeric results (within the limits of the GPS’s short term drift), and it
  3. requires no additional sensors

You can tweak the localization algorithms in software as many times as necessary, each time accurately assessing the results, and never once need to go out and actually drive the robot around.

pygazebo - First release

I managed to clean up the python bindings to Gazebo I wrote when testing mech gaits in simulation, and have released them publicly as pygazebo.

All you have to do is:

pip install pygazebo

From the README, a simple example:

import eventlet

from pygazebo import Manager
import pygazebo.msg.joint_cmd_pb2

manager = Manager()
publisher = manager.advertise('/gazebo/default/model/joint_cmd',
                              'gazebo.msgs.JointCmd')

# Command a constant force on axis 0 of the given joint topic.
message = pygazebo.msg.joint_cmd_pb2.JointCmd()
message.axis = 0
message.force = 1.0

# Publish the command once a second.
while True:
    publisher.publish(message)
    eventlet.sleep(1.0)

and you’re off and ready to code!