
Balancing on estimated terrain

Last time, I described my approach for estimating the terrain under the robot based on the inertial measurement unit and proprioceptive foot feedback. Now, I’ll cover how that is used to balance.

“R” Frame

First, let me explain the “R” or “robot” frame and how it is used. The frames I’ve discussed in this series so far are the “B” frame, which is rigidly attached to the center of the robot body, the “M” frame, which is located at the center of mass and level with the ground, and the “T” frame, which is under the robot and level with the current terrain.

The “R” frame, by contrast, is a purely invented frame that is a rigid transform away from the “B” frame. Its purpose is to allow for (1) the cool-looking inverse kinematic demos that everyone seems so fond of, and (2) mostly global transforms, like this implementation of terrain-based balancing. The gait algorithms operate almost exclusively in the R frame, which means that offsets and rotations applied there will affect the balance of the robot during its normal operation.

Using the R frame to balance

Here the algorithm is relatively straightforward. The center of the T frame is taken, transformed into the M frame and then moved up by the current average leg height. In 2D, that looks like:

Then the point p_0, as measured in the B frame, gives the desired RB transform offset. It is that simple! That formulation keeps the center of mass over the (0, 0) R frame point, accounting for offsets in the center of mass and for non-level terrain.
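Expressed as code, a minimal sketch of that rule (not the actual quad A1 implementation) might look like the following, where pose_MT and pose_BM are assumed names for the current T-to-M and M-to-B transforms, and the sign of “up” in the M frame is my assumption:

#include <Eigen/Geometry>

Eigen::Vector3d DesiredRBOffset(const Eigen::Isometry3d& pose_MT,
                                const Eigen::Isometry3d& pose_BM,
                                double average_leg_height) {
  // The center of the T frame, expressed in the M frame.
  Eigen::Vector3d p0_M = pose_MT.translation();
  // Move it up by the current average leg height.  The M frame has +Z
  // pointing along gravity toward the ground, so "up" is -Z here (a sign
  // convention assumption).
  p0_M.z() -= average_leg_height;
  // p_0 as measured in the B frame is the desired RB transform offset.
  return pose_BM * p0_M;
}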

Simulation results

Here’s the robot walking up a relatively steep slope in simulation using the above technique. The purple disc shows the estimated terrain value, while the gray disc shows the gravity normal plane.

Final steps

In the final post for this work, we’ll test it on the real robot!

Estimating terrain slope

Last time I discussed the challenges when operating the mjbots quad A1 on sloped surfaces. While there are a number of possible means of tackling this, the approach I’ve gone with for now is to estimate the slope of the terrain under the robot, and use that to determine how to position the center of mass. Here I’ll cover the estimation part of this solution.

On paper, the quad A1 has plenty of information to estimate the terrain under its feet. Between the IMU with attitude estimator, the proprioceptive feedback from the joints, and the ability to move the feet around, it would be obvious to a human whether the ground under the robot was sloped or level. The challenge is to devise an algorithm to do so despite the noise in the IMU, the fact that the feet are not always on the ground, and the fact that the terrain under the robot changes as it moves.

Approach

My basic approach can be summarized in the following flow chart / block diagram:

First, a brief description of the 3 pertinent reference frames:

  • B Frame: The body frame (or B frame) is centered on the robot body and rigidly fixed to it. The proprioceptive system ultimately calculates the position of each of the 4 feet in this frame.
  • M Frame: The CoM frame (or M frame) is centered at the robot’s idle center of mass and oriented such that positive Z points along gravity toward the ground, with a heading that is arbitrary at startup but tracks the robot’s changing heading.
  • T Frame: The terrain frame (or T frame) is referenced to the M frame at the average height of the legs, with a slope that aligns with the average slope of the terrain under the robot.

The algorithm works in roughly the following steps (a simplified sketch follows the list):

  1. First, project all the feet positions into the M frame.
  2. For any in-flight legs, reset the Z value to one calculated from the current TM transform and a 0 T frame Z height.
  3. Fit a plane to these “on-ground” M frame points.
  4. Update the slope of the T frame using this plane with an exponential filter along the X and Y axes.
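As promised above, here is a simplified sketch of steps 3 and 4: a least-squares fit of a plane z = a*x + b*y + c to the M frame foot positions, followed by an exponential filter on the X and Y slopes. This is an illustration using Eigen, not the actual quad A1 code, and the filter constant handling is simplified:

#include <vector>

#include <Eigen/Dense>

struct TerrainSlope {
  double slope_x = 0.0;  // dz/dx of the terrain, expressed in the M frame
  double slope_y = 0.0;  // dz/dy
};

TerrainSlope UpdateTerrain(const TerrainSlope& previous,
                           const std::vector<Eigen::Vector3d>& feet_M,
                           double filter_alpha) {
  // Step 3: least-squares fit of z = a*x + b*y + c to the foot positions.
  Eigen::MatrixXd A(feet_M.size(), 3);
  Eigen::VectorXd z(feet_M.size());
  for (size_t i = 0; i < feet_M.size(); i++) {
    A(i, 0) = feet_M[i].x();
    A(i, 1) = feet_M[i].y();
    A(i, 2) = 1.0;
    z(i) = feet_M[i].z();
  }
  const Eigen::Vector3d coeffs = A.colPivHouseholderQr().solve(z);

  // Step 4: exponential filter toward the newly fit slope.  filter_alpha is
  // between 0 and 1, chosen so the time constant is on the order of a step.
  TerrainSlope result;
  result.slope_x = previous.slope_x + filter_alpha * (coeffs(0) - previous.slope_x);
  result.slope_y = previous.slope_y + filter_alpha * (coeffs(1) - previous.slope_y);
  return result;
}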

Properties

This algorithm has the benefit that it will converge on the terrain underneath the robot as long as feet touch the ground with regularity, which is a somewhat necessary condition for a robot supported by its legs. The rate at which the estimate converges can be controlled by the filter constant. Selecting that to be the same order as the step frequency does a decent job of rejecting spurious noise while responding in a timely manner to updated terrain.

Next up we’ll see how to use this information to balance, and watch the results in simulation.

Measuring the pi3hat r4.2 performance

Last time I covered the new software library that I wrote to help use all the features of the pi3hat in an efficient manner. This time, I’ll cover how I measured the performance of the result, and talk about how it can be integrated into a robotic control system.

pi3hat r4.2 available at mjbots.com

Test Setup

To check out the timing, I wired up a pi3hat into the quad A1 and used the oscilloscope to probe one of the SPI clocks and CAN buses 1 and 3.

Then I could use pi3hat_tool incantations to experiment with different bus utilization strategies and find the one with the best performance. The sequence I settled on was:

  1. Write all outgoing CAN messages, using a round-robin strategy between CAN buses. The SPI bus rate of 10MHz is faster than the 5Mbps maximum CAN-FD rate, so this gets each bus transmitting its first packet as soon as possible, then queues up the remainder.
  2. Read the IMU. During this phase, any replies over CAN are being enqueued on the individual STM32 processors.
  3. Optionally read CAN replies. If any outgoing packets were marked as expecting a reply, that bus is expected to receive the appropriate number of responses. Additionally, a bus can be requested to “get anything in the queue”.

With this approach, a full command and query of the comprehensive state of 12 qdd100 servos, plus reading the IMU, takes around 740us. If you perform that on one thread while performing robot control on others, it allows you to achieve a 1kHz update rate.

CAN1 SPI clock on bottom, CAN1 and CAN3 bus on top

These results were with the Raspberry Pi 3b+. On a Raspberry Pi 4, they seem to be about 5% better, mostly because the Pi 4’s faster CPU gets through the register twiddling more quickly, which reduces dead time on the SPI bus.
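To make the “one thread for the bus, other threads for control” structure concrete, here is a minimal sketch, with a hypothetical DoBusCycle() standing in for the ~740us transaction described above (the real robot code is more involved):

#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-in for the transaction described above: write the
// outgoing CAN commands, read the IMU, then read the CAN replies.
void DoBusCycle() {}

std::atomic<bool> done{false};

void BusThread() {
  auto next = std::chrono::steady_clock::now();
  while (!done.load()) {
    DoBusCycle();
    // A 1ms period leaves a couple hundred microseconds of margin after the
    // ~740us transaction.
    next += std::chrono::milliseconds(1);
    std::this_thread::sleep_until(next);
  }
}

int main() {
  std::thread bus(BusThread);
  // ... robot control runs on this and other threads, consuming the most
  // recent state published by the bus thread ...
  done.store(true);
  bus.join();
  return 0;
}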

Bringing up the pi3hat r4.2

The pi3hat r4.2, now in the mjbots store, has only minor hardware changes from the r4 and r4.1 versions. What has changed in a bigger way is the firmware, and the software that is available to interface with it. The interface software for the previous versions was tightly coupled to the quad A1’s overall codebase, which made it basically impossible to use without significant rework. So, that rework is what I’ve done with the new libpi3hat library:

It consists of a single C++11 header and source file with no dependencies aside from the standard C++ library and bcm_host.h from the Raspberry Pi firmware. You can build it using the bazel build files, or just copy the source file into your own project and build with whatever system you are using.

Performance

Using all of the pi3hat’s features in a runtime-performant way can be challenging, but libpi3hat makes it not so bad by providing an omnibus call which sequences accesses to all the CAN buses and peripherals in a way that maximizes pipelining and overlap between the different operations while keeping the SPI bus as busy as possible. The downside is that it does not use the Linux kernel drivers for SPI and thus requires root access to run. For most robotic applications, that isn’t a problem, as the controlling computer is doing nothing but control anyway.

This design makes it feasible to operate at least 12 servos and read the IMU at rates over 1kHz on a Raspberry Pi.
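As a sketch of what the omnibus call looks like in use: the shape below (a single Cycle() call taking outgoing CAN frames and returning replies plus the attitude) reflects how the library is meant to be used, but the exact type and field names here are from memory and should be treated as assumptions to check against pi3hat.h:

#include <vector>

#include "mjbots/pi3hat/pi3hat.h"

int main() {
  using namespace mjbots::pi3hat;

  // Requires root, since the SPI peripheral is accessed directly.
  Pi3Hat pi3hat{Pi3Hat::Configuration{}};

  // One outgoing frame per servo, spread round-robin across the CAN-FD buses.
  std::vector<CanFrame> tx(12);
  std::vector<CanFrame> rx(24);
  for (size_t i = 0; i < tx.size(); i++) {
    tx[i].bus = 1 + (i % 4);
    tx[i].id = 0x8001 + i;   // example addressing, as in the pi3hat_tool
                             // examples below
    tx[i].expect_reply = true;
    // ... fill in tx[i].data / tx[i].size with the servo command ...
  }

  Attitude attitude;

  Pi3Hat::Input input;
  input.tx_can = {tx.data(), tx.size()};
  input.rx_can = {rx.data(), rx.size()};
  input.request_attitude = true;
  input.attitude = &attitude;

  // One call sequences all the CAN buses and the IMU with maximal overlap.
  const auto output = pi3hat.Cycle(input);
  (void)output;  // output.rx_can_size replies are now waiting in rx
  return 0;
}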

pi3hat_tool

There is a command line tool, pi3hat_tool, which provides a demonstration of how to use all the features of the library, as well as being a useful diagnostic tool on its own. For instance, it can be used to read the IMU state:

# ./pi3hat_tool --read-att
ATT w=0.999 x=0.013 y=-0.006 z=-0.029  dps=(  0.1, -0.1, -0.1) a=( 0.0, 0.0, 0.0)

And it can be used to write and read from the various CAN buses.

# ./pi3hat_tool --write-can 1,8001,1300,r \
                --write-can 2,8004,1300,r \
                --write-can 3,8007,1300,r
CAN 1,100,2300000400
CAN 2,400,2300000400
CAN 3,700,230000fc00

You can also do those at the same time in a single bus cycle:

# ./pi3hat_tool --read-att --write-can 1,8001,1300,r
CAN 1,100,2300000400
ATT w=0.183 x=0.692 y=0.181 z=-0.674  dps=(  0.1, -0.0,  0.1) a=(-0.0, 0.0,-0.0)

Next steps

Next up I’ll demonstrate my performance testing setup, and what kind of performance you can expect in a typical system.

New product Monday: pi3hat

I’ve now got the last custom board from the quad A1 up in the mjbots store for sale, the mjbots pi3 hat for $129.

This board breaks out 4x 5Mbps CAN-FD ports, 1 low speed CAN port, a 1kHz IMU and a port for a nrf24l01. Despite its name, it works just fine with the Raspberry Pi 4 in addition to the 3b+ I have mostly tested with to date. I also have a new user-space library for interfacing with it that I will document in some upcoming posts. That library makes it pretty easy to use in a variety of applications.

Finally, as is customary with these boards, I made a video “getting started” guide:

Turret active inertial stabilization

This post will be short, because it is just re-implementing the functionality I had in my turrets version 1 and 2, but this time using the raspberry pi as the master controller and two moteus controllers, one for each gimbal axis.

I have the raspberry pi running the primary control loop at 400Hz.  At each time step it reads the IMU from the pi3 hat, and reads the current state of each servo (although it doesn’t actually use the servo state at the moment).  It then runs a simple PID control loop on each axis, aiming to achieve a desired position and rate, which results in a torque command that is sent to each servo.
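As a sketch, each axis’s loop has roughly this shape; the gains, names, and structure are illustrative rather than the turret’s actual code:

struct Pid {
  double kp = 0.0;
  double kd = 0.0;
  double ki = 0.0;
  double integral = 0.0;

  double Update(double position_error, double rate_error, double dt) {
    integral += position_error * dt;
    return kp * position_error + kd * rate_error + ki * integral;
  }
};

// One of these runs per gimbal axis at 400Hz; the return value is the torque
// command sent to that axis's moteus controller.
double AxisTorque(Pid* pid,
                  double desired_position, double measured_position,
                  double desired_rate, double measured_rate,
                  double dt) {
  return pid->Update(desired_position - measured_position,
                     desired_rate - measured_rate, dt);
}

Here’s the video proof!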

New Mech Warfare turret

Another of the tasks I’ve set for myself with regards to future Mech Warfare competitions is redesigning the turret.  The previous turret I built had some novel technical features, such as active inertial gimbal stabilization and automatic optical target tracking, however it had some problems too.  The biggest one for my purposes now was that it still used the old RS485 based protocol and not the new CAN-FD based one.  Second, the turret had some dynamic stability and rigidity issues.  The magazine consisted of an aluminum tube sticking out of the top which made the entire thing very top heavy.  The 3d printed fork is the same one I had made at Shapeways 5 years ago.  It is amazingly flexible in the lateral direction, which results in a lot of undesired oscillation if the base platform isn’t perfectly stable.  I’ve learned a lot about 3d printing and mechanical design in the meantime (but of course still have a seemingly infinite amount more to learn!) and think I can do better.  Finally, cable management between the top and bottom was always challenging.  You want to have a large range of motion, but keeping power and data flowing between the two rotating sections was never easy.

dsc_0529
The legacy turret

My concept with this redesign is twofold, first make the turret be basically an entirely separate robot with no wires connecting it to the main robot and second, try to use as many of the components from the quad A1 as I could to demonstrate their, well, flexibility.  Thus, this turret will have a separate battery, power distribution board, raspberry pi, pi3 hat, and a moteus controller for each axis of motion.  These are certainly overkill, but hey, the quad A1 can carry a lot of weight.

The unique bits will be a standalone FPV camera, another camera attached to the raspberry pi for target tracking, a targeting laser, and the AEG mechanism, including a new board to manage the firing and loading functions.

20200423-turret-angle-view
A static rendering

And here’s a quick spin around video:

More to come…

Attitude estimation for pi3 hat

Now that the IMU is functioning, my next step is to use that to produce an attitude estimate.  Here, I dusted off my unscented Kalman filter based estimator from long ago, and adapted it slightly to run on an STM32.  As before, I used a UKF instead of the more traditional EKF not because of its superior filtering performance, but because of the flexibility it allows with the process and measurement functions.  Unlike the EKF, the UKF is purely numerical, so no derivation of Jacobians is necessary.  It turns out that even an STM32 has plenty of processing power to do this for things like a 7 state attitude filter.
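To illustrate why that flexibility matters, here is a rough sketch (not the actual mjmech filter) of the kind of purely numerical process and measurement functions a UKF consumes, for a hypothetical 7 state filter holding an attitude quaternion plus gyro bias:

#include <Eigen/Dense>

using State = Eigen::Matrix<float, 7, 1>;        // quaternion (w,x,y,z) + gyro bias
using Measurement = Eigen::Matrix<float, 3, 1>;  // accelerometer direction

// Process model: integrate the gyro, minus the estimated bias, over dt.
State ProcessModel(const State& x, const Eigen::Vector3f& gyro_rps, float dt) {
  Eigen::Quaternionf q(x(0), x(1), x(2), x(3));
  const Eigen::Vector3f rate = gyro_rps - x.tail<3>();
  // First order quaternion integration.
  Eigen::Quaternionf dq(1.0f,
                        0.5f * rate.x() * dt,
                        0.5f * rate.y() * dt,
                        0.5f * rate.z() * dt);
  q = (q * dq).normalized();
  State result = x;
  result.head<4>() << q.w(), q.x(), q.y(), q.z();
  return result;
}

// Measurement model: the gravity direction the accelerometer should see,
// assuming +Z gravity in the world frame (a convention assumption).
Measurement MeasureAccel(const State& x) {
  Eigen::Quaternionf q(x(0), x(1), x(2), x(3));
  return q.conjugate() * Eigen::Vector3f(0.0f, 0.0f, 1.0f);
}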

One problem I encountered was that, by default, I have been building everything for the STM32 with the “-Os” optimization level.  Unfortunately, with Eigen linear algebra routines, that is roughly 4x slower than “-O3”.  Doubly unfortunately, just using copts at the rule level or --copts on the command line didn’t work.  bazel doesn’t let you control the order of command line arguments very well, and the -Os always ended up *after* any of the additional arguments I tried to use to override it.  To get it to work, I had to navigate some bazel toolchain mysteries in rules_mbed in order to allow build rules to specify if they optionally want the higher optimization instead of optimizing for size.  I’m pretty sure this is not exactly what the with_features mechanism in the toolchain’s feature rule is for, but it let me create a feature called speedopt which turns on -O3 and turns off -Os.  The final result is at rules_mbed/530fae6d8

To date, I’ve only done some very zeroth order performance optimization.  I spent 15 minutes parameter tuning, making sure that the covariances updated to approximately the correct levels, and I added a simple filter to reject accelerometer updates during dynamic motion.  I did just enough runtime optimization to get an update down to around 300us, which is just fine for a filter intended to run at 1kHz.  More will remain as future work.
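The accelerometer rejection filter is conceptually just a gate on how close the measured specific force is to 1g; a minimal sketch, with a made-up threshold, looks like:

#include <cmath>

#include <Eigen/Core>

// Only let the accelerometer correct the attitude when its magnitude is
// close to 1g, i.e. when it is mostly measuring gravity.  The tolerance is
// illustrative, not the tuned value.
bool AccelUpdateAllowed(const Eigen::Vector3f& accel_mps2) {
  constexpr float kGravity = 9.81f;
  constexpr float kTolerance = 0.15f * kGravity;
  return std::abs(accel_mps2.norm() - kGravity) < kTolerance;
}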

Here’s a plot from a quick sanity check, where I manually rolled the device in alternating directions, then pitched it in alternating directions.  (When pitching, it was on a somewhat springy surface, thus the ringing).

20200305-attitude-validation

The pitch and roll are plenty smooth, although they look to perhaps not return exactly to their original position.  At some point, I will do a more detailed qualification to dial in the performance.


Bringing up the IMU on the pi3 hat

The next peripheral to get working on the quad’s raspberry pi interface board is the IMU. When operating, the IMU will primarily be used to determine attitude and angular pitch and roll rates.  Secondarily, it will determine yaw rate, although there is no provision within the IMU to determine absolute yaw.

To accomplish this, the board has a BMI088 6 axis accelerometer and gyroscope attached via SPI to the auxiliary STM32G4 along with discrete connections for interrupts.  This chip has 16 bit resolution for both sensors, decent claimed noise characteristics, and supposedly the ability to better reject high frequency vibrations as seen in robotic applications.  I am currently running the gyroscope at 1kHz, and the accelerometer at 800Hz.  The IMU is driven off the gyroscope, with the accelerometer sampled whenever the gyroscope has new data available.
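In code, that sampling scheme is roughly this shape (the types and names here are hypothetical, not the actual pi3hat firmware):

struct Vec3 { float x = 0; float y = 0; float z = 0; };
struct ImuSample { Vec3 gyro_dps; Vec3 accel_g; };

class ImuSampler {
 public:
  // Called from the gyro data-ready interrupt at 1kHz.  The accelerometer
  // runs at 800Hz, so it does not always have fresh data; when it does not,
  // the most recent reading is reused.
  ImuSample OnGyroDataReady(const Vec3& gyro_dps,
                            bool accel_ready, const Vec3& accel_g) {
    if (accel_ready) { last_accel_ = accel_g; }
    return ImuSample{gyro_dps, last_accel_};
  }

 private:
  Vec3 last_accel_;
};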

My first step was just to read out the 6 axis values at full rate to measure the static performance characteristics.  After doing that overnight, I got the following Allan Variance plot.

20200304-bmi088-allan-variance

That gives the angular random walk at around 0.016 dps / sqrt(Hz) with a bias stability of around 6.5 deg/hr.  The angular random walk is about what is specified in the datasheet, and the bias is not specified at all, but this seems really good for a MEMS sensor.  In fact, it is good enough that I could probably just barely gyrocompass, measuring the earth’s rotation, with a little patience.  The accelerometer values are shown there too, and seem fine, but aren’t all that critical.

Next up is turning this data into an attitude and rate estimate.

Functional gimbal stabilized Mech Warfare turret

Well, that took longer than I expected! I last showed some progress on a gimbal stabilized turret for Mech Warfare competitions more than six months ago. Due to some unexpected technical difficulties, it took much longer to complete than I had hoped, but the (close to) end result is here!

20160206-turret-overview
Complete gimbal mounted turret

Here’s a quick feature list:

  • 2 axis control: Yaw and pitch are independently actuated.
  • Brushless: Each axis is driven by a brushless gimbal motor for high bandwidth no-backlash stabilization.
  • Absolute encoders: Each axis has an absolute magnetic encoder so that accurate force control can be applied to each gimbal, even at zero speed.
  • Fire control: High current outputs for driving an AEG motor and an agitator motor are present, along with a low current output for a targeting laser.
  • 7V-12V input: Supports 2S-3S lipo power sources.
  • 12V boost: When running from 2S lipo, it can boost the gimbal drive up to 12V for additional stabilization authority.
  • HerkuleX protocol: The primary control interface uses the native Dongbu HerkuleX protocol; support for other UART-based protocols that work at 3.3V CMOS levels should be easy to add.
  • USB debugging support: A USB port is present to return high rate debugging information and allow configuration and diagnostics to be performed.
  • Open source: All design and firmware files are Apache 2.0 licensed on github: https://github.com/mjbots/mjmech/tree/master/hw/gimbal.

You can see the turret’s basic operations in a quick video here:


Design

The design is driven by the bill of materials selection. The primary components of the gimbal are as follows:

  • Turnigy HD 3508 Gimbal Motor: Both axes use this gimbal motor from HobbyKing, which has sufficient power to stabilize a 600g turret.
  • Frame: The mechanical frame is a shapeways strong-and-flexible printed part.
  • STM32F411: A fast 32 bit microcontroller with support for all the peripherals that are necessary.
  • TPS62172: The primary 3.3V regulator which powers the microcontroller and all the other 3.3V parts.
  • TPS55330: The 12V boost regulator, which when enabled, powers the gimbal motors.
  • MC33926: A 2 channel motor driver used for fire control, it powers both the AEG and agitator motor outputs.
  • DRV8313: Two of these integrated BLDC drivers are used, one to power each gimbal motor.
  • AS5048A/B: These absolute magnetic encoders are used to measure the actual position of the pitch and yaw gimbals.
  • BMI160: This IMU is used as the primary source of inertial compensation data. The board hardware supports a second IMU, to be placed on the main robot, but the firmware does not yet support that configuration.

Boards

This gimbal design contains three custom boards: a breakout board for the BMI160 IMU, a breakout board for the AS5048B magnetic encoder sensor, and the primary board, which contains the rest of the logic.

BMI 160 Breakout

20160206-bmi160-breakout.jpg
Completed BMI160 breakout board, Assembled by MacroFab

The first board is simple; it is basically just a breakout board for the BMI160 inertial sensor. It provides the BMI160 itself, some decoupling capacitors, and a 0.1 inch 4 pin connector for the I2C bus.

I had these prototypes made at MacroFab which I highly recommend as a great provider of low-cost turnkey PCB assembly.

AS5048B Breakout

20160206-as5048b-breakout-small
AS5048B breakout board


This, like the BMI160 breakout board, just has decoupling capacitors, the chip itself, and connectors. It additionally has mounting holes designed to fit onto the 3508 gimbal motor. This was printed at OSH Park and hand-assembled.

Gimbal control board

20160206-gimbal-control.jpg
Completed primary gimbal control board (r2), Assembled by MacroFab

The primary gimbal control board contains most of the system functionality. It is designed to mechanically mount directly above the yaw gimbal motor, as the yaw absolute magnetic encoder is in the center on the underside of the board.

This prototype was also built at MacroFab, who did an excellent job with this much more complex assembly.

The connectors and features are as follows:

  • Power and Data: A 4 pin JST-XH connector in the upper right brings in power and data from the main robot.
  • Debug USB: A debugging protocol is available on this micro-USB port.
  • Camera USB: Two 4 pin JST-PH connectors provide a convenience path for the camera USB. The turret’s camera connects to the top connector, and the main robot connects to the side facing connector.
  • I2C peripherals: Three 4 pin JST-ZH connectors with identical pinouts connect to external I2C peripherals. These are used for the primary IMU, the pitch absolute magnetic encoder, and the optional secondary IMU.
  • Arming switch: This switch is connected directly to the enable pin on the MC33926, and is also connected to an input on the STM32F411.
  • Programming connector: The 6 pin JST-PH connector has the same pinout as Benjamin Vedder’s VESC board, and can program and debug the STM32F411.
  • Weapon connector: A 2×4 0.1 inch pin header has power lines for the AEG drive, the agitator drive and the laser. It has an extra row of pins so that a blank can be used for indexing.
  • Gimbal connectors: Two 3 pin 0.1 inch connectors power the yaw and pitch gimbal brushless motors.

Firmware

struct Config {
  uint8_t address = 0xd0;
  uint16_t rate_hz = 800;
  uint16_t gyro_max_dps = 1000;
  uint8_t accel_max_g = 4;

  Euler offset_deg;

  template <typename Archive>
  void Serialize(Archive* a) {
    a->Visit(MJ_NVP(address));
    a->Visit(MJ_NVP(rate_hz));
    a->Visit(MJ_NVP(gyro_max_dps));
    a->Visit(MJ_NVP(accel_max_g));
    a->Visit(MJ_NVP(offset_deg));
  }

  Config() {
    offset_deg.yaw = 90.0f;
  }
};

Sample configuration structure

The firmware was an experiment in writing modern C++11 code for the bare-metal STM32 platform. Each module interacts with others through std::function-like callbacks, and the entire system is compiled both for the target and for the host, so that unit tests can be run. Dynamic memory allocation is this close to being disabled, but it was necessary for newlib’s floating point number formatting routines, which just allocate a chunk of memory the first time you use them. Otherwise, there is no dynamic memory used at all.

It relies on a CubeMX project template for this board. Most of the libraries CubeMX provides have too little flexibility to be used for this application, so much of the bit twiddling is re-implemented in the gimbal firmware. CubeMX is great, however, for configuring the clock tree and pin alternate functions, especially in a complex project like this.

Both configuration and telemetry rely on a templated C++ visitor pattern to perform compile time reflection over mostly arbitrary C++ structures. Any module can register a structure to be used for persistent configuration. Those structures can be changed through the debugging protocol, and can be written to and read from flash at runtime. Each module can also register as many telemetry structures as necessary. These can be emitted over the debugging protocol either at fixed intervals, or whenever they update.
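To give a flavor of how that works (this is a toy, not the actual mjmech serialization code): any “archive” class that provides a templated Visit() can walk a structure’s Serialize() method, so the same structure definition can back configuration, telemetry, and persistence.

#include <cstdint>
#include <iostream>

template <typename T>
struct NameValuePair {
  const char* name;
  T* value;
};

// Toy stand-in for MJ_NVP.
#define TOY_NVP(x) (NameValuePair<decltype(x)>{#x, &x})

// An archive that just prints each field by name.
struct PrintArchive {
  template <typename T>
  void Visit(const NameValuePair<T>& nvp) {
    std::cout << nvp.name << " = " << *nvp.value << "\n";
  }
};

struct ToyConfig {
  uint16_t rate_hz = 800;
  float gain = 1.5f;

  template <typename Archive>
  void Serialize(Archive* a) {
    a->Visit(TOY_NVP(rate_hz));
    a->Visit(TOY_NVP(gain));
  }
};

int main() {
  ToyConfig config;
  PrintArchive archive;
  config.Serialize(&archive);
  return 0;
}

In the real firmware, archives along these lines read and write flash, answer the debugging protocol, and emit telemetry, rather than printing.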

IMU stabilization

The IMU data is converted into an attitude estimate through a simple complementary filter, in the same spirit as some of Seb Madgwick’s algorithms. That attitude is then fed into a control loop for each axis’s gimbal.
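A single-axis complementary filter is only a few lines; here is a hedged sketch of the idea, with an illustrative blend constant and assumed axis conventions rather than the firmware’s actual values:

#include <cmath>

double UpdatePitch(double pitch_rad,      // previous estimate
                   double gyro_rate_rps,  // pitch rate from the gyro
                   double accel_x_g,      // accelerometer components
                   double accel_z_g,
                   double dt,
                   double alpha) {        // e.g. 0.98: how much to trust the gyro
  // High frequency part: integrate the gyro.
  const double gyro_pitch = pitch_rad + gyro_rate_rps * dt;
  // Low frequency part: the direction of gravity from the accelerometer.
  const double accel_pitch = std::atan2(-accel_x_g, accel_z_g);
  return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch;
}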

There are three possible modes, the first of which is what I call “open-loop”, and is based on the same principles as the BruGi brushless gimbal, where no absolute motor feedback is available. In that mode, a PID controller operates with the axis error as the input, and the output is the actual phase position of the BLDC controller. In this mode, the integral term does most of the work in stabilization, so the overall performance isn’t great.

The second mode still uses a PID controller, but now the output is an offset to the BLDC phase necessary to hold the current position as measured by the absolute encoders. This effectively makes the output a direct mapping to force applied to the motor, although of course a non-linear mapping. This mode results in much better overall performance and is easier to tune.

Finally, there is a third debugging mode that lets you just hard command specific BLDC phases. This is useful for calibrating the mapping between BLDC phase and absolute encoder phase.

tview

The debugging protocol is partially human readable, but telemetry data is encoded in the same binary format as used elsewhere in the mjmech codebase. tview is the debugging application we use to read that data, as well as configure and control the overall system.

20160206-tview-window.png
tview window

The bottom pane just has a serial console, where you can send arbitrary things over the virtual serial port. tview directly supports relatively few commands from the debugging protocol, and for instance has no UI to operate the stabilizer or fire control, so for now these are done by hand in that window.

The left pane has two tabs, one with a configuration tree and the other with a telemetry tree. The configuration tree shows all structures which were registered as configurable, and allows you to change them in the live system. The telemetry tree shows all structures registered as telemetry structures, and reports their values live as the system is operating.

The right pane has a live plot window where any of the values in the telemetry tree can be plotted versus time. It is just an embedded matplotlib plot, so all the normal plot interaction tools are available, plus some from mjmech’s tplot, like the ability to pan and zoom the left and right axes independently.

System video

And last but not least, here is a short video demonstrating the turret stabilizing a camera and firing some blanks at a target as our mech walks around.