All posts by Josh Pieper

Improved low speed step selection

In my original series on balancing while walking (part 1, part 2, part 3), I described some heuristics I used to handle changing directions. That was minimally sufficient in 1D; in 2D, however, it still leaves something to be desired, as there are more possible degenerate cases. The biggest is when spinning in place. There, the center of mass doesn’t really move at all relative to the balance line, but we still need to take steps!

Existing state

In that first pass, I relied on limiting the maximum time spent in the stance phase. However, even with that, it was possible to get into situations where the robot would eventually tip over. Here’s a quick clip showing that happen with the maximum stance time artificially increased:

Clearly, something better could be done.

New approach

While I expect to more seriously revamp the step selection method soon to enable gaits with a flight phase, I figured that I could make incremental progress here with a technique that would also be useful going forward. The basic concept is to keep track of how long until any given foot will reach an infeasible position in its configuration space. If that looks to be soon enough, then that leg (well, virtual leg) will be forced to take a step regardless of balance considerations.

The first part of that is determining when a leg position will become infeasible. I chose to make several approximations. When starting up, for each leg, a sampling approach is used to mark out the feasible region in the joint reference frame for the standard height, and for the leg-lifted height. This results in two 2D polygons:

Then, I intersect those two polygons and simplify the result to find an XY polygon that gives a valid region for the leg, hopefully across the entire Z volume used.
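
Here is a minimal sketch of the sampling step, using a stand-in two-link reach check in place of the real inverse kinematics and joint limits; extracting and simplifying the polygon boundary from these samples is left out:

#include <cmath>
#include <vector>

struct Point2D { double x; double y; };

// Stand-in feasibility test: a two-link leg can reach points whose distance
// from the hip lies between |l1 - l2| and l1 + l2.  The real robot would use
// its full inverse kinematics plus joint limits here instead.
bool LegFeasible(double x, double y, double z,
                 double l1 = 0.15, double l2 = 0.15) {
  const double r = std::sqrt(x * x + y * y + z * z);
  return r > std::abs(l1 - l2) && r < (l1 + l2);
}

// Sample an XY grid at a fixed Z and keep the points with a valid IK
// solution.  The boundary of this point set approximates the feasible
// polygon at that height; doing this at the standard and leg-lifted heights
// gives the two polygons described above.
std::vector<Point2D> SampleFeasibleRegion(double z, double min_xy,
                                          double max_xy, double step) {
  std::vector<Point2D> result;
  for (double x = min_xy; x <= max_xy; x += step) {
    for (double y = min_xy; y <= max_xy; y += step) {
      if (LegFeasible(x, y, z)) { result.push_back({x, y}); }
    }
  }
  return result;
}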

At runtime, I reuse the segment-wise calculation I made earlier for balancing to determine which segment the leg will exit first, and at what time. The “virtual leg’s” exit time is just the minimum of its two component legs.
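
For a single constant-velocity segment of that motion, the exit time can be computed by intersecting the foot's path (relative to the body) with each edge of the feasible polygon and taking the first crossing; checking each segment in turn then gives the overall exit time. A minimal sketch, assuming a convex polygon with counter-clockwise vertices:

#include <limits>
#include <vector>

struct Vec2 { double x; double y; };

// Time until a point at 'p' moving with constant velocity 'v' crosses out of
// a convex polygon (vertices in counter-clockwise order).  Returns +inf if
// the velocity never carries it across any edge.
double TimeToExit(const std::vector<Vec2>& poly, Vec2 p, Vec2 v) {
  double best = std::numeric_limits<double>::infinity();
  for (size_t i = 0; i < poly.size(); i++) {
    const Vec2 a = poly[i];
    const Vec2 b = poly[(i + 1) % poly.size()];
    // Outward normal of edge a->b for a CCW polygon.
    const Vec2 n{b.y - a.y, -(b.x - a.x)};
    const double v_n = n.x * v.x + n.y * v.y;
    if (v_n <= 0.0) { continue; }  // moving away from or parallel to this edge
    const double t = (n.x * (a.x - p.x) + n.y * (a.y - p.y)) / v_n;
    if (t >= 0.0 && t < best) { best = t; }
  }
  return best;
}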

Finally, I extended the logic that determines when a leg should be picked up to incorporate this “invalid time” in addition to the other constraints (it is getting a little unwieldy at this point).
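
Schematically, the pick-up decision then looks something like the following sketch; the names and the particular set of constraints are illustrative rather than the actual quad A1 code:

// Decide whether a (virtual) leg must be lifted now.  'time_to_invalid' is
// the exit time computed above; the other inputs stand in for the existing
// balance and maximum-stance-time constraints.
bool ShouldLiftLeg(double time_to_invalid, double expected_swing_time,
                   double margin, bool balance_requires_step,
                   bool max_stance_time_exceeded) {
  const bool infeasible_soon =
      time_to_invalid < (expected_swing_time + margin);
  return balance_requires_step || max_stance_time_exceeded || infeasible_soon;
}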

Result

This approach is definitely much more robust, especially at low-speed rotation. Of course, as always, there is still room for improvement: even with this approach, the legs can get into a configuration where the support polygon is very small, and in some cases very large steps are taken. Addressing those will likely require either incorporating knowledge of the support polygon directly or indirectly, or more forward planning than the “step at a time” approach I’ve got now.

Here’s a video where I stress tested the step selection algorithm by aggressively changing requested walking speed and rotation rates, forcing the machine to make sub-optimal steps and recover from them over and over.

High speeds with the moteus controller

Someone contacted me not too long ago who wanted to use the moteus controller, but wasn’t sure if it would be able to hit their target mechanical velocity of 6000rpm. I honestly wasn’t sure either, so I tested it. After a quick firmware fix, the devkit motor, when run at 34V, seems to be able to do it no problem.

It should be noted that the current firmware assumes you are within a thousand or so revolutions of 0. You can exceed that pretty quickly running at 120 revolutions per second: a thousand revolutions is only a bit over 8 seconds of spinning!

Balancing on estimated terrain

Last time, I described my approach for estimating the terrain under the robot based on the inertial measurement unit and proprioceptive foot feedback. Now, I’ll cover how that is used to balance.

“R” Frame

First, let me explain the “R” or “robot” frame and how it is used. The frames I’ve discussed in this series so far are the “B” frame, which is rigidly attached to the center of the robot body, the “M” frame, which is located at the center of mass and level with the ground, and the “T” frame, which is under the robot and level with the current terrain.

The “R” frame, by contrast, is a purely invented frame that is a rigid transform away from the “B” frame. Its purpose is to allow for (1) the cool-looking inverse kinematic demos that everyone seems so fond of, and (2) mostly global transforms, like this implementation of terrain-based balancing. The gait algorithms operate almost exclusively in the R frame, which means that offsets and rotations applied there will affect the balance of the robot during its normal operation.

Using the R frame to balance

Here the algorithm is relatively straightforward. The center of the T frame is taken, transformed into the M frame and then moved up by the current average leg height. In 2D, that looks like:

Then, the point p_0, as measured in the B frame, gives the desired RB transform offset. It is that simple! That formulation keeps the center of mass over the (0, 0) point of the R frame, accounting for offsets in the center of mass and for non-level terrain.
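
Spelled out in code, the whole rule fits in a few lines. This is a minimal sketch with a generic rotation-plus-translation transform type; the frame names follow the post, but the types and function names are hypothetical rather than the actual quad A1 code:

#include <array>

using Vec3 = std::array<double, 3>;

// Minimal rigid transform: maps a point from the source frame into the
// destination frame.
struct Transform {
  std::array<std::array<double, 3>, 3> R;  // rotation
  Vec3 t;                                  // translation
  Vec3 Apply(const Vec3& p) const {
    Vec3 out;
    for (int i = 0; i < 3; i++) {
      out[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + t[i];
    }
    return out;
  }
};

// Desired R->B offset: take the T frame origin, express it in the M frame,
// move it up by the average leg height (the M frame's +Z points toward the
// ground, so "up" is -Z), then express that point in the B frame.
Vec3 DesiredRBOffset(const Transform& m_from_t, const Transform& b_from_m,
                     double average_leg_height) {
  const Vec3 t_origin{{0.0, 0.0, 0.0}};
  Vec3 p0 = m_from_t.Apply(t_origin);  // T frame origin in the M frame
  p0[2] -= average_leg_height;
  return b_from_m.Apply(p0);           // p_0 in the B frame
}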

Simulation results

Here’s the robot walking up a relatively steep slope in simulation using the above technique. The purple disc shows the estimated terrain value, while the gray disc shows the gravity normal plane.

Final steps

In the final post for this work, we’ll test it on the real robot!

Estimating terrain slope

Last time I discussed the challenges when operating the mjbots quad A1 on sloped surfaces. While there are a number of possible means of tackling this, the approach I’ve gone with for now is to estimate the slope of the terrain under the robot, and use that to determine how to position the center of mass. Here I’ll cover the estimation part of this solution.

On paper, the quad A1 has plenty of information to estimate the terrain under its feet. Between the IMU with attitude estimator, the proprioceptive feedback from the joints, and the ability to move the feet around, it would be obvious to a human whether the ground under the robot was sloped or level. The challenge here is to devise an algorithm to do so, despite the noise in the IMU, the fact that the feet are not always on the ground, and the fact that as the robot moves, the terrain under it changes.

Approach

My basic approach can be summarized in the following flow chart / block diagram:

First, a brief description of the 3 pertinent reference frames:

  • B Frame: The body frame (or B frame) is centered on the robot body, and rigidly fixed to the robot body. The proprioceptive system eventually calculates each of the 4 foot positions in this frame.
  • M Frame: The CoM frame (or M frame) is centered at the robot’s idle center of mass and oriented such that positive Z points along gravity toward the ground, with a heading that is arbitrary at startup but that tracks the robot’s changing heading.
  • T Frame: The terrain frame (or T frame) is referenced to the M frame at the average height of the legs, with a slope that aligns with the average slope of the terrain under the robot.

The algorithm works in roughly the following steps:

  1. First, project all the feet positions into the M frame.
  2. For any in-flight legs, reset the Z value to one calculated from the current TM transform and a 0 T frame Z height.
  3. Fit a plane to these “on-ground” M frame points.
  4. Update the slope of the T frame using this plane with an exponential filter along the X and Y axes (a condensed sketch of these steps follows below).
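
Here is a condensed sketch of steps 2 through 4, assuming the feet have already been projected into the M frame (step 1). It fits the X and Y slopes independently rather than doing a full plane fit, which is a simplification, and the names are illustrative rather than the actual quad A1 code:

#include <array>

struct FootM { double x; double y; double z; bool in_flight; };
struct TerrainSlope { double slope_x; double slope_y; };  // dz/dx, dz/dy in M

// One estimator update.  'alpha' is the exponential filter constant.
TerrainSlope UpdateTerrain(std::array<FootM, 4> feet,
                           const TerrainSlope& current, double alpha) {
  // Step 2: in-flight legs are assumed to lie on the current terrain plane.
  for (auto& f : feet) {
    if (f.in_flight) { f.z = current.slope_x * f.x + current.slope_y * f.y; }
  }
  // Step 3: fit slopes through the mean-removed points, one axis at a time.
  double mx = 0, my = 0, mz = 0;
  for (const auto& f : feet) { mx += f.x; my += f.y; mz += f.z; }
  mx /= feet.size(); my /= feet.size(); mz /= feet.size();
  double sxx = 0, syy = 0, sxz = 0, syz = 0;
  for (const auto& f : feet) {
    sxx += (f.x - mx) * (f.x - mx);
    syy += (f.y - my) * (f.y - my);
    sxz += (f.x - mx) * (f.z - mz);
    syz += (f.y - my) * (f.z - mz);
  }
  if (sxx < 1e-9 || syy < 1e-9) { return current; }  // degenerate stance
  const TerrainSlope fit{sxz / sxx, syz / syy};
  // Step 4: exponential filter toward the new fit.
  return {current.slope_x + alpha * (fit.slope_x - current.slope_x),
          current.slope_y + alpha * (fit.slope_y - current.slope_y)};
}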

Properties

This algorithm has the benefit that it will converge on the terrain underneath the robot as long as the feet touch the ground with regularity, which is a somewhat necessary condition for a robot supported by its legs. The rate at which the estimate converges can be controlled by the filter constant. Selecting that to be on the same order as the step frequency does a decent job of rejecting spurious noise while responding in a timely manner to updated terrain.

Next up we’ll see how to use this information to balance, and watch the results in simulation.

Operating on sloped surfaces

Not too long ago, I ran some outdoor experiments, and while piloting the quad A1 around, realized that it wasn’t going to get very far if it was restricted to just flat ground.

Since the control algorithms are completely ignorant of slopes, the center of gravity of the machine can easily get too close to the edge of the support polygon when resting, and the machine similarly fails to stay balanced over the support line during the trot gait.

To get started tackling this, I stuck a configurable ramp in the simulator:

And yes, it fails just as much on the ramp as it did in real life.

mjbots Monday: New lower prices

One of my goals with mjbots is to make building dynamic robots more accessible to researchers and enthusiasts everywhere. To make that more of a reality, I’m lowering the prices in a big way on the foundational components of brushless robotic systems, the moteus controller and qdd100 servo.

                          Old     New
moteus r4.3 controller    $119    $79
moteus r4.3 devkit        $199    $159
qdd100 beta               $549    $429
qdd100 beta devkit        $599    $469

Don’t worry, if you purchased any of these in the last month, you should be getting a coupon in your email equivalent to the difference.

Happy building!

Measuring the pi3hat r4.2 performance

Last time I covered the new software library that I wrote to help use all the features of the pi3hat in an efficient manner. This time, I’ll cover how I measured the performance of the result, and talk about how it can be integrated into a robotic control system.

pi3hat r4.2 available at mjbots.com

Test Setup

To check out the timing, I wired a pi3hat into the quad A1 and used an oscilloscope to probe one of the SPI clocks and CAN buses 1 and 3.

Then, I could use pi3hat_tool incantations to experiment with different bus utilization strategies and get to one with the best performance. The sequence that I settled on was:

  1. Write all outgoing CAN messages, using a round-robin strategy between CAN buses. The SPI bus rate of 10MHz is faster than the 5Mbps maximum CAN-FD rate, so this gets each bus transmitting its first packet as soon as possible, then queues up the remainder.
  2. Read the IMU. During this phase, any replies over CAN are being enqueued on the individual STM32 processors.
  3. Optionally read CAN replies. If any outgoing packets were marked as expecting a reply, that bus is expected to receive the appropriate number of responses. Additionally, a bus can be requested to “get anything in the queue”.

With this approach, a full command and query of the comprehensive state of 12 qdd100 servos, plus reading the IMU, takes around 740us. If you perform that on one thread while running robot control on others, the bus cycle fits comfortably within a 1ms period, which allows you to achieve a 1kHz update rate.

CAN1 SPI clock on bottom, CAN1 and CAN3 bus on top

These results were with the Raspberry Pi 3b+. On a Raspberry Pi 4, they seem to be about 5% better, mostly because the Pi 4’s faster CPU is able to execute the register twiddling a little faster, which reduces dead time on the SPI bus.

Bringing up the pi3hat r4.2

The pi3hat r4.2, now in the mjbots store, has only minor hardware changes from the r4 and r4.1 versions. What has changed in a bigger way is the firmware, and the software that is available to interface with it. The interface software for the previous versions was tightly coupled to the quad A1’s overall codebase, which made it basically impossible to use without significant rework. So, that rework is what I’ve done with the new libpi3hat library:

It consists of a single C++11 header and source file with no dependencies aside from the standard C++ library and bcm_host.h from the Raspberry Pi firmware. You can build it using the bazel build files, or just copy the source file into your own project and build with whatever system you are using.

Performance

Using all of the pi3hat’s features in a performant way at runtime can be challenging, but libpi3hat makes it not so bad by providing an omnibus call which sequences accesses to all the CAN buses and peripherals in a way that maximizes pipelining and overlap between the different operations, while simultaneously maximizing the usage of the SPI bus. The downside is that it does not use the Linux kernel drivers for SPI and thus requires root access to run. For most robotic applications, that isn’t a problem, as the controlling computer is doing nothing but control anyway.

This design makes it feasible to operate at least 12 servos and read the IMU at rates over 1kHz on a Raspberry Pi.
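
For reference, a single cycle using the omnibus call looks roughly like the following. This is a sketch from memory of the libpi3hat API rather than a verified example, so the exact type and member names may differ slightly from the shipped header:

// Sketch only: member names are assumed; check pi3hat.h for the real API.
#include "mjbots/pi3hat/pi3hat.h"

#include <cstdio>

int main() {
  using namespace mjbots::pi3hat;

  Pi3Hat::Configuration config;  // default SPI speed and mounting
  Pi3Hat pi3hat{config};         // requires root, as noted above

  // One bus cycle: here only the IMU attitude is requested, but CAN
  // transmit/receive spans can be attached to the same Input so that
  // everything happens in a single pipelined cycle.
  Attitude attitude;
  Pi3Hat::Input input;
  input.attitude = &attitude;
  input.request_attitude = true;

  const auto output = pi3hat.Cycle(input);
  if (output.attitude_present) {
    std::printf("w=%.3f x=%.3f y=%.3f z=%.3f\n",
                attitude.attitude.w, attitude.attitude.x,
                attitude.attitude.y, attitude.attitude.z);
  }
  return 0;
}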

pi3hat_tool

There is a command line tool, pi3hat_tool, which provides a demonstration of how to use all the features of the library, as well as being a useful diagnostic tool on its own. For instance, it can be used to read the IMU state:

# ./pi3hat_tool --read-att
ATT w=0.999 x=0.013 y=-0.006 z=-0.029  dps=(  0.1, -0.1, -0.1) a=( 0.0, 0.0, 0.0)

And it can be used to write to and read from the various CAN buses.

# ./pi3hat_tool --write-can 1,8001,1300,r \
                --write-can 2,8004,1300,r \
                --write-can 3,8007,1300,r
CAN 1,100,2300000400
CAN 2,400,2300000400
CAN 3,700,230000fc00

You can also do those at the same time in a single bus cycle:

# ./pi3hat_tool --read-att --write-can 1,8001,1300,r
CAN 1,100,2300000400
ATT w=0.183 x=0.692 y=0.181 z=-0.674  dps=(  0.1, -0.0,  0.1) a=(-0.0, 0.0,-0.0)

Next steps

Next up I’ll demonstrate my performance testing setup, and what kind of performance you can expect in a typical system.

New product Monday: pi3hat

I’ve now got the last custom board from the quad A1 up in the mjbots store for sale, the mjbots pi3 hat for $129.

This board breaks out 4x 5Mbps CAN-FD ports, 1 low speed CAN port, a 1kHz IMU and a port for an nrf24l01. Despite its name, it works just fine with the Raspberry Pi 4 in addition to the 3b+ I have mostly tested with to date. I also have a new user-space library for interfacing with it that I will document in some upcoming posts. That library makes it pretty easy to use in a variety of applications.

Finally, as is customary with these boards, I made a video “getting started” guide: