Category Archives: robots

Balancing on estimated terrain

Last time, I described my approach for estimating the terrain under the robot based on the inertial measurement unit and proprioceptive foot feedback. Now, I’ll cover how that is used to balance.

“R” Frame

First, let me explain the “R” or “robot” frame and how it is used. The frames I’ve discussed in this series so far are the “B” frame, which is rigidly attached to the center of the robot body, the “M” frame, which is located at the center of mass and level with the ground, and the “T” frame, which is under the robot and level with the current terrain.

The “R” frame, by contrast, is a purely invented frame that is a rigid transform away from the “B” frame. Its purpose is to allow for (1) the cool-looking inverse kinematic demos that everyone seems so fond of, and (2) mostly-global transforms, like this implementation of terrain-based balancing. The gait algorithms operate almost exclusively in the R frame, which means that offsets and rotations applied there affect the balance of the robot during its normal operation.

Using the R frame to balance

Here the algorithm is relatively straightforward. The center of the T frame is taken, transformed into the M frame and then moved up by the current average leg height. In 2D, that looks like:

Then, the point p_0, as measured in the B frame, gives the desired RB transform offset. It is that simple! That formulation keeps the center of mass over the (0, 0) point of the R frame, accounting for offsets in the center of mass and for non-level terrain.
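
In code, a minimal sketch of that rule might look like the following. It assumes Eigen, and the names (M_from_T, B_from_M, DesiredRBOffset) are illustrative rather than taken from the actual quad A1 codebase.

#include <Eigen/Geometry>

// Returns the desired R-to-B translation offset, following the rule above.
Eigen::Vector3d DesiredRBOffset(
    const Eigen::Isometry3d& M_from_T,   // maps T frame points into the M frame
    const Eigen::Isometry3d& B_from_M,   // maps M frame points into the B frame
    double average_leg_height_m) {
  // The center of the T frame, transformed into the M frame.
  Eigen::Vector3d p_M = M_from_T * Eigen::Vector3d::Zero();
  // Moved up by the current average leg height.  The M frame has +Z pointing
  // along gravity toward the ground, so "up" is -Z.
  p_M.z() -= average_leg_height_m;
  // That point, as measured in the B frame, is the desired offset p_0.
  return B_from_M * p_M;
}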

Simulation results

Here’s the robot walking up a relatively steep slope in simulation using the above technique. The purple disc shows the estimated terrain value, while the gray disc shows the gravity normal plane.

Final steps

In the final post for this work, we’ll test it on the real robot!

Estimating terrain slope

Last time I discussed the challenges of operating the mjbots quad A1 on sloped surfaces. While there are a number of possible means of tackling this, the approach I’ve gone with for now is to estimate the slope of the terrain under the robot, and use that to determine how to position the center of mass. Here I’ll cover the estimation part of this solution.

On paper, the quad A1 has plenty of information to estimate the terrain under its feet. Between the IMU with attitude estimator, the proprioceptive feedback from the joints, and the ability to move the feet around, it would be obvious to a human whether the ground under them was sloped or level. The challenge here is to devise an algorithm to do so, despite the noise in the IMU, the fact that the feet are not always on the ground, and that as the robot moves, the terrain under it changes.

Approach

My basic approach can be summarized in the following flow chart / block diagram:

First, a brief description of the 3 pertinent reference frames:

  • B Frame: The body frame (or B frame) is centered on the robot body and rigidly fixed to it. The proprioceptive system eventually calculates each of the 4 foot positions in this frame.
  • M Frame: The CoM frame (or M frame), is centered at the robot’s idle center of mass and oriented such that positive Z points along gravity toward the ground with a heading that is arbitrary at start up, but that tracks the robot’s changing heading.
  • T Frame: The terrain frame (or T frame), is referenced to the M frame at the average height of the legs with a slope that aligns with the average slope of the terrain under the robot.

The algorithm works in roughly the following steps (a short sketch of the fitting and filtering follows the list):

  1. First, project all the feet positions into the M frame.
  2. For any in-flight legs, reset the Z value to one calculated from the current TM transform and a 0 T frame Z height.
  3. Fit a plane to these “on-ground” M frame points.
  4. Update the slope of the T frame using this plane with an exponential filter along the X and Y axes.
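
A minimal sketch of steps 3 and 4, assuming Eigen and at least three on-ground points; the struct and function names are illustrative, not from the actual implementation.

#include <Eigen/Dense>
#include <vector>

struct TerrainEstimate {
  double slope_x = 0.0;  // filtered X slope of the T frame
  double slope_y = 0.0;  // filtered Y slope of the T frame
};

// feet_M holds the (possibly Z-reset) foot positions in the M frame.
void UpdateTerrain(const std::vector<Eigen::Vector3d>& feet_M,
                   double filter_alpha,  // e.g. chosen near the step frequency
                   TerrainEstimate* est) {
  // Fit the plane z = a*x + b*y + c in a least-squares sense: A * [a b c]^T = z.
  const int n = static_cast<int>(feet_M.size());
  Eigen::MatrixXd A(n, 3);
  Eigen::VectorXd z(n);
  for (int i = 0; i < n; i++) {
    A.row(i) << feet_M[i].x(), feet_M[i].y(), 1.0;
    z(i) = feet_M[i].z();
  }
  const Eigen::Vector3d coeffs = A.colPivHouseholderQr().solve(z);

  // Exponential filter on the X and Y slopes.
  est->slope_x += filter_alpha * (coeffs(0) - est->slope_x);
  est->slope_y += filter_alpha * (coeffs(1) - est->slope_y);
}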

Properties

This algorithm has the benefit that it will converge on the terrain underneath the robot as long as feet touch the ground with regularity, which is a somewhat necessary condition for a robot supported by its legs. The rate at which the estimate converges can be controlled by the filter constant. Selecting it to be on the same order as the step frequency does a decent job of rejecting spurious noise while responding in a timely manner to updated terrain.

Next up we’ll see how to use this information to balance, and watch the results in simulation.

Operating on sloped surfaces

Not too long ago, I ran some outdoor experiments, and while piloting the quad A1 around, realized that it wasn’t going to get very far if it was restricted to just flat ground.

Since the control algorithms are completely ignorant of slopes, the center of gravity of the machine can easily get too close to the edge of the support polygon when resting, and it similarly fails to stay balanced over the support line during the trot gait.
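
For the trot case, the failing quantity is essentially how far the gravity-projected center of mass sits from the support line between the two stance feet. A minimal sketch of that margin, with hypothetical names:

#include <Eigen/Dense>

// Signed perpendicular distance from the gravity-projected CoM to the
// support line formed by the two stance feet of a trot pair.  On a slope
// this grows even when the body-frame geometry looks fine.
double SupportLineMargin(const Eigen::Vector2d& com_ground,  // CoM projected along gravity
                         const Eigen::Vector2d& foot_a,      // first stance foot (ground plane)
                         const Eigen::Vector2d& foot_b) {    // second stance foot
  const Eigen::Vector2d line = (foot_b - foot_a).normalized();
  const Eigen::Vector2d to_com = com_ground - foot_a;
  return to_com.x() * line.y() - to_com.y() * line.x();
}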

To get started tackling this, I stuck a configurable ramp in the simulator:

And yes, it fails just as much on the ramp as it did in real life.

mjbots Monday: New lower prices

One of my goals with mjbots is to make building dynamic robots more accessible to researchers and enthusiasts everywhere. To make that more of a reality, I’m lowering the prices in a big way on the foundational components of brushless robotic systems, the moteus controller and qdd100 servo.

                           Old     New
  moteus r4.3 controller   $119    $79
  moteus r4.3 devkit       $199    $159
  qdd100 beta              $549    $429
  qdd100 beta devkit       $599    $469

Don’t worry, if you purchased any of these in the last month, you should be getting a coupon in your email equivalent to the difference.

Happy building!

Measuring the pi3hat r4.2 performance

Last time I covered the new software library that I wrote to help use all the features of the pi3hat in an efficient manner. This time, I’ll cover how I measured the performance of the result, and talk about how it can be integrated into a robotic control system.

pi3hat r4.2 available at mjbots.com

Test Setup

To check out the timing, I wired a pi3hat into the quad A1 and used an oscilloscope to probe one of the SPI clocks along with CAN buses 1 and 3.

Then, I could use pi3hat_tool incantations to experiment with different bus utilization strategies and get to one with the best performance. The sequence that I settled on was:

  1. Write all outgoing CAN messages, using a round-robin strategy between CAN buses. The SPI bus rate of 10 MHz is faster than the 5 Mbps maximum CAN-FD rate, so this gets each bus transmitting its first packet as soon as possible, then queues up the remainder.
  2. Read the IMU. During this phase, any replies over CAN are being enqueued on the individual STM32 processors.
  3. Optionally read CAN replies. If any outgoing packets were marked as expecting a reply, that bus is expected to receive the appropriate number of responses. Additionally, a bus can be requested to “get anything in the queue”.

With this approach, a full command and query of the comprehensive state of 12 qdd100 servos, plus reading the IMU, takes around 740us. If you perform that on one thread while running robot control on others, it allows you to achieve a 1kHz update rate.
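
To put that budget in context, here is a minimal sketch of the kind of 1kHz bus thread it allows. RunBusCycle and UpdateSharedState are hypothetical stand-ins, not functions from the quad A1 codebase.

#include <chrono>
#include <thread>

// Hypothetical placeholders: the real versions would wrap the ~740us pi3hat
// transaction and publish its results for the control threads.
void RunBusCycle() { /* command servos, read IMU, collect CAN replies */ }
void UpdateSharedState() { /* hand results to the control threads */ }

void BusThread() {
  auto next = std::chrono::steady_clock::now();
  while (true) {
    RunBusCycle();        // ~740us of SPI/CAN/IMU traffic
    UpdateSharedState();  // control runs concurrently on other threads
    next += std::chrono::microseconds(1000);  // 1kHz cycle
    std::this_thread::sleep_until(next);
  }
}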

CAN1 SPI clock on bottom, CAN1 and CAN3 bus on top

These results were with the Raspberry Pi 3b+. On a Raspberry Pi 4, they seem to be about 5% better, mostly because the Pi 4’s faster CPU is able to execute the register twiddling a little faster, which reduces dead time on the SPI bus.

Bringing up the pi3hat r4.2

The pi3hat r4.2, now in the mjbots store, has only minor hardware changes from the r4 and r4.1 versions. What has changed in a bigger way is the firmware, and the software that is available to interface with it. The interface software for the previous versions was tightly coupled to the quad A1’s overall codebase, which made it basically impossible to use without significant rework. So, that rework is what I’ve done with the new libpi3hat library:

It consists of a single C++11 header and source file with no dependencies aside from the standard C++ library and bcm_host.h from the Raspberry Pi firmware. You can build it using the Bazel build files, or just copy the source file into your own project and build it with whatever system you are using.

Performance

Using all of the pi3hat’s features in a performant way can be challenging, but libpi3hat makes it not so bad by providing an omnibus call which sequences accesses to all the CAN buses and peripherals in a way that maximizes pipelining and overlap between the different operations, while simultaneously maximizing usage of the SPI bus. The downside is that it does not use the Linux kernel drivers for SPI and thus requires root access to run. For most robotic applications that isn’t a problem, as the controlling computer is doing nothing but control anyway.

This design makes it feasible to operate at least 12 servos and read the IMU at rates over 1kHz on a Raspberry Pi.
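
To illustrate what “omnibus” means from the caller’s perspective, here is a rough sketch. The type and function names below are simplified placeholders, not the actual libpi3hat API, which is defined in pi3hat.h.

#include <cstdint>
#include <vector>

// Placeholder types standing in for the library's CAN frame and IMU types.
struct CanFrame { int bus = 0; uint32_t id = 0; std::vector<uint8_t> data; bool expect_reply = false; };
struct Attitude { double w = 1, x = 0, y = 0, z = 0; };

// Hypothetical omnibus call: the real implementation sequences the SPI
// transactions to all CAN buses and the IMU so that transmission, reply
// collection, and the IMU read overlap as much as possible.
void Cycle(const std::vector<CanFrame>& tx,
           std::vector<CanFrame>* rx,
           Attitude* attitude) {}

void ControlStep() {
  std::vector<CanFrame> tx;  // one command frame per servo, spread across buses
  std::vector<CanFrame> rx;  // replies are filled in during the same cycle
  Attitude att;
  Cycle(tx, &rx, &att);      // one call per control period covers everything
}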

pi3hat_tool

There is a command line tool, pi3hat_tool, which provides a demonstration of how to use all the features of the library, as well as being a useful diagnostic tool on its own. For instance, it can be used to read the IMU state:

# ./pi3hat_tool --read-att
ATT w=0.999 x=0.013 y=-0.006 z=-0.029  dps=(  0.1, -0.1, -0.1) a=( 0.0, 0.0, 0.0)

It can also be used to write to and read from the various CAN buses.

# ./pi3hat_tool --write-can 1,8001,1300,r \
                --write-can 2,8004,1300,r \
                --write-can 3,8007,1300,r
CAN 1,100,2300000400
CAN 2,400,2300000400
CAN 3,700,230000fc00

You can also do those at the same time in a single bus cycle:

# ./pi3hat_tool --read-att --write-can 1,8001,1300,r
CAN 1,100,2300000400
ATT w=0.183 x=0.692 y=0.181 z=-0.674  dps=(  0.1, -0.0,  0.1) a=(-0.0, 0.0,-0.0)

Next steps

Next up I’ll demonstrate my performance testing setup, and what kind of performance you can expect in a typical system.

New product Monday: pi3hat

I’ve now got the last custom board from the quad A1 up in the mjbots store for sale, the mjbots pi3 hat for $129.

This board breaks out 4x 5Mbps CAN-FD ports, 1 low-speed CAN port, a 1kHz IMU, and a port for a nrf24l01. Despite its name, it works just fine with the Raspberry Pi 4, in addition to the 3b+ I have mostly tested with to date. I also have a new user-space library for interfacing with it that I will document in some upcoming posts. That library makes it pretty easy to use in a variety of applications.

Finally, as is customary with these boards, I made a video “getting started” guide:

Raspberry Pi 4

Only 1 full year after it was released, I managed to get a Raspberry Pi 4 and test it out in the quad A1. I had been delaying doing so because of reports of thermal issues. The Pi 3B+ already ran a little hot and I didn’t want to have to add active cooling into the robot chassis to get it stable.

It looks like the Raspberry Pi engineers have been hard at work because the newer firmware releases have significantly reduced the overall power consumption and thus the thermal load. In my testing so far it only seems “a little” hotter than the 3b+.

The now somewhat misnamed “pi3hat” worked just fine with the Pi 4, after some minor changes to the software to support the new peripheral base address of its BCM2711 SoC.
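
The difference comes down to using the right peripheral base address for the SoC. A minimal sketch of querying it with the bcm_host helper from the Raspberry Pi firmware (the same header libpi3hat already depends on):

#include <cstdio>
#include <bcm_host.h>

// Prints the SoC peripheral base address: 0x3F000000 on a Pi 3 (BCM2837),
// 0xFE000000 on a Pi 4 (BCM2711).  Build against the firmware libraries,
// e.g. -I/opt/vc/include -L/opt/vc/lib -lbcm_host (paths vary by OS release).
int main() {
  std::printf("peripheral base: 0x%08x\n", bcm_host_get_peripheral_address());
  return 0;
}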

Yes, you can see the USB3 ports there

Balancing gait in 2D

After getting a gait which looked like it could balance across the leg support line in 1D, I needed to extend that to 2D and try it out on the robot.

Extension to 2D

Extending this to two dimensions wasn’t too bad. I just did a bunch of geometry to follow the path traced out by a given 2 dimensional velocity and rotation rate, intersected with a line segment:

Given this function, the logic to select a swing target is basically the same as in the 1 dimensional case. We now create two “virtual legs”, each consisting of two feet ganged together to produce a single support line. At each time instant when all legs are in stance, we look at the time remaining until the center of mass would cross each virtual leg’s support line at the current velocity. As soon as one hits the half-swing point, we start a swing.
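
A minimal sketch of that test, with hypothetical names, assuming the center of mass continues in a straight line at its current velocity:

#include <Eigen/Dense>
#include <limits>

// Time until the CoM crosses the support line of one virtual leg, or
// infinity if it is moving parallel to or away from that line.
double TimeToCrossSupportLine(const Eigen::Vector2d& com,       // CoM, ground plane
                              const Eigen::Vector2d& com_vel,   // CoM velocity
                              const Eigen::Vector2d& foot_a,    // virtual leg foot 1
                              const Eigen::Vector2d& foot_b) {  // virtual leg foot 2
  const Eigen::Vector2d dir = (foot_b - foot_a).normalized();
  const Eigen::Vector2d normal(-dir.y(), dir.x());

  const double distance = normal.dot(com - foot_a);  // signed distance to the line
  const double rate = normal.dot(com_vel);           // rate of approach

  constexpr double kInf = std::numeric_limits<double>::infinity();
  if (rate == 0.0) { return kInf; }
  const double t = -distance / rate;
  return (t >= 0.0) ? t : kInf;
}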

As part of this, I extended tplot2 to be able to render the target location of each swing.

Remaining niggles

Once I had an actual robot to test it out on, I found a few other minor problems. When selecting the target for a foot swing, the 1 dimensional case just used the current velocity to move the leg far enough past the pickup point. That doesn’t work out all that well under heavy acceleration though, resulting in very short stance times. What worked much better was to use the expected velocity at the end of the next stance window. That provided much more consistent stance spacing even when accelerating or decelerating.
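
A minimal sketch of that idea with hypothetical names; the placement horizon used below is an illustrative guess, not the value used on the robot.

#include <Eigen/Dense>

// Extrapolate the commanded velocity to the end of the next stance window
// and place the swing target using that, rather than the current velocity.
Eigen::Vector2d SwingTarget(const Eigen::Vector2d& pickup_point,  // where the foot left the ground
                            const Eigen::Vector2d& velocity,      // current commanded velocity
                            const Eigen::Vector2d& acceleration,  // current commanded acceleration
                            double swing_time_s,
                            double stance_time_s) {
  const Eigen::Vector2d end_velocity =
      velocity + acceleration * (swing_time_s + stance_time_s);
  // Illustrative horizon: far enough ahead that the support line remains
  // usable for the whole of the next stance.
  return pickup_point + end_velocity * (swing_time_s + 0.5 * stance_time_s);
}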

I still haven’t come up with a great solution for turning in place, for which the entire concept of estimating distance to the balance point doesn’t make sense. Right now, I just rely on the maximum stance time to ensure the legs eventually step in that scenario and don’t extend beyond their physical limits. I’ll probably eventually add a “time to infeasible geometry” criteria to handle that.

Testing it out

While there is still a lot of remaining work, this increased the maximum possible speed of the machine by at least a factor of 2, and the feasible acceleration by a factor of 10 or so. In the video below, I’ve got some clips of the robot walking around at 0.5 m/s. That’s not too fast, but it is more than a body length per second, which counts for something.

New product Monday: Amass XT30 connectors

Now that the mjbots.com store has qdd100 quasi direct drive servos, moteus controllers, and the new power dist board, it is time to start getting some useful accessories in stock. While each of these components comes with mating connectors, sometimes you need more or find that a cable harness you built previously needs to be scrapped. Availability of Amass connectors isn’t that great outside of the Chinese market, so I’ve now got XT30U male and female solder cup connectors up in packs of 10. Each pack is just $6.

Get them at: