I’ve been developing a new bidirectional spread spectrum radio to command and control the mjbots quad robot. Here I’ll describe my first integration of the protocol into the robot.
To complete that integration, I took the library I had designed for the nrfusb, and ported it to run on the auxiliary controller of the pi3 hat. This controller also controls the IMU and an auxiliary CAN-FD bus. It is connected to one of the SPI buses on the raspberry pi. Here, it was just a matter of exposing an appropriate SPI protocol that would allow the raspberry pi to receive and transmit packets.
Slightly unfortunately, this version of the pi3hat does not have interrupt lines for any of the STM32s. Thus, I created a multiplexed status register that the rpi can use to check which of the CAN, IMU, or RF has data pending. Then I slapped together a few registers which allow configuring the ID and reading and writing slots and their priorities.
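To make the status register idea concrete, here is a minimal sketch of what decoding such a register might look like. The bit assignments and names are invented for illustration, not the actual pi3hat register map:

```python
# Hypothetical bit assignments for a multiplexed "data pending" status
# register like the one described above (not the real pi3hat layout).
CAN1_RX_PENDING = 1 << 0
CAN2_RX_PENDING = 1 << 1
IMU_PENDING     = 1 << 2
RF_RX_PENDING   = 1 << 3

def pending_sources(status_byte):
    """Decode one status register read into the list of sources that
    have data waiting to be serviced over SPI."""
    names = {
        CAN1_RX_PENDING: "can1",
        CAN2_RX_PENDING: "can2",
        IMU_PENDING: "imu",
        RF_RX_PENDING: "rf",
    }
    return [name for bit, name in names.items() if status_byte & bit]
```

With a register like this, one SPI read tells the host everything that needs servicing, which is what makes polling without interrupt lines tolerable.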
Then I refactored things on the raspberry pi side so that one core keeps busy-polling for any of those sources to become available. So far, for the things which access SPI, I’ve been pinning them to an isolcpu-isolated CPU to get improved SPI timing. Eventually, once I have interrupt lines, I might consolidate all of these down to a single core. That, plus defining an initial mapping between the controls and slots, resulted in:
Finally, I created a very simple GL GUI application which connects to an nrfusb and a joystick. It uses Dear ImGui to render a few widgets and GLFW for windowing and joystick input.
While I was at it, I finally updated my joystick UI to make gait selection a bit faster, and got the robot to do a better job of switching out of the walk gait. Thus the following video showing all of that hooked together.
To recap, what I needed was a reliable means of commanding the robot and receiving telemetry, even in congested radio environments. At competitions or events like Maker Faire, Robogames and such, the wireless environment is often totally trashed. Hundreds of devices are operating in close proximity, across all spectrum bands, including plenty of things that probably aren’t licensed to be transmitting in the first place. When we first built Super Mega Microbot, we used a custom protocol with a 5 GHz wifi transmitter as the physical layer and selected USB based dongles which allowed control over the physical layer. USB proved problematic, and with national RF regulations, it is extremely challenging to find wifi devices which provide that level of control at the RF layer. Also, even with full physical layer control, wifi is difficult to make work in a reliable manner as there is so much congestion in both the 2.4GHz and 5GHz bands and the channels are so wide.
What does usually work at these events, despite the extreme congestion, are standard hobby RC transmitters. DSMX, from Spektrum, is one of the more popular varieties. It uses an off-the-shelf 2.4GHz RF IC, then hops between frequencies on every transmission based on a pseudorandom key shared between transmitter and receiver. This lets many transmitters share the same RF environment and renders them extremely resistant to interference.
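The shared-key hopping idea can be illustrated with a few lines of Python. This is only a sketch of the concept: the actual DSMX channel selection algorithm is different, and the channel count here is an arbitrary placeholder:

```python
# Illustrative sketch of shared-key frequency hopping in the spirit of
# DSMX. Both ends seed the same PRNG with the shared key, so they
# derive identical channel sequences without any coordination on air.
import random

NUM_CHANNELS = 80  # placeholder channel count for a 2.4GHz RF IC

def hop_sequence(shared_key, length):
    """Generate the channel to use for each transmission slot."""
    rng = random.Random(shared_key)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

tx = hop_sequence(0xC0FFEE, 8)
rx = hop_sequence(0xC0FFEE, 8)
assert tx == rx  # transmitter and receiver agree on every hop
```

Two links with different keys follow uncorrelated sequences, so they only rarely collide on the same channel in the same slot, which is why many such systems coexist at a crowded event.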
The biggest problem with DSMX for this application, and with basically every other RC protocol, is that they are unidirectional. At best, bidirectional solutions involve bolting together two independent transmitter and receiver pairs, one for each direction. This is despite the fact that most of the low-level RF ICs actually support bidirectional communication natively. Even then, the only supported telemetry forms are things specific to RC models. Joint position feedback would need to be encoded as propeller RPM, for instance!
Proposed design goals
The target features I’m looking to achieve with this protocol are:
Resistant to heavy RF interference
Bidirectional communication of arbitrary data
50Hz update rate or higher
Multiple transmitters and receivers can operate in the same area without interfering or explicitly coordinating
Support data that is transmitted at different rates (e.g. voltage telemetry can be low rate, while movement commands are high rate)
Control over source to add new features over time
Things I’m not necessarily trying to accomplish (yet):
A “binding” mode that uses RF to share the common pseudorandom key
A “stable” over the air protocol
Next up, a design which hopefully achieves these goals!
Thankfully, I’m now at the point where I’m fixing actual dynamics problems on the robot. Doubly thankfully I have a robot which is pretty robust and keeps working! That said, it is still, shall we say, “non-ideal”, to be testing code for the first time ever on a real robot.
Back with my HerkuleX based Super Mega MicroBot, I had a working DART based simulation which was decently accurate. However, the actuators for that machine were so limited that it didn’t really make sense to do any work in simulation. The only way to be effective with that machine was to tweak and tweak on the real platform and rely on exactly the right amount of bouncing and wiggling that would get it moving smoothly.
Now that I can accurately control force at 400Hz and beyond, that isn’t a problem anymore, so I’m working to resurrect the bitrotted simulator. In the end though, it turned out to be a complete re-write as basically nothing of the original made sense to use.
Here’s a video of the very first time it moved around in sim (which means there are still many problems left!)
Thus I spun a new revision r3, basically just to fix all the blue wires so that I could have some spares without having to worry about the robustness of my hot glue. While I was at it, I updated the logo:
As seems to be the way of things, a few days after I sent this board off to be manufactured, I realized that the CAN port needed to actually be isolated, since when the switches are off, the ground is disconnected from the rest of the system. Sigh. Guess that will wait for r4.
In order to bring up the final piece of the raspberry pi 3 hat, the nrf24l01+, I wanted a desktop development platform that would allow for system bringup and also be useful as a PC side transmitter. Thus, the nrfusb:
Similar to the fdcanusb, it is just an STM32G474 on the USB bus, although this has a pin header for a common nrf24l01+ form factor daughterboard.
The next steps here are to get this working at all, then implement a spread spectrum bidirectional protocol for control and telemetry.
This gait is basically the same thing as I ran on the quad A0 in principle. The opposing feet are picked up according to a rigid schedule, and moved to a point opposite their “idle” position based on the current movement speed. Any feet that are completely placed on the ground just move with the inverse of the robot’s velocity.
What differs now is that the leg positions and forces are controlled in 3D at a high rate, 400Hz for now. At each time step, the position and velocity of all 12 joints are measured. The gait algorithm calculates a desired 3D position, velocity, and force. Feedforward force is currently only used to control the weight-supporting legs. Then, those 3D parameters are transformed into a joint position, velocity, and force based on the current joint position, and the command is sent out.
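The 3D-to-joint transform at each time step can be sketched for a planar two-link leg (the real robot does this for three joints per leg in full 3D). The link lengths and the analytic Jacobian here are illustrative, not the quad A1's actual geometry:

```python
# Sketch of the per-timestep force transform: desired Cartesian foot
# force maps to feedforward joint torques via the Jacobian transpose,
# tau = J^T F. Geometry here is a made-up planar two-link leg.
import math

L1, L2 = 0.15, 0.15  # hypothetical link lengths in meters

def jacobian(q1, q2):
    """2x2 Jacobian of foot position w.r.t. the two joint angles."""
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    return [[j11, j12], [j21, j22]]

def joint_torques(q1, q2, fx, fy):
    """Feedforward joint torques for a desired foot force: tau = J^T F."""
    J = jacobian(q1, q2)
    return [J[0][0] * fx + J[1][0] * fy,
            J[0][1] * fx + J[1][1] * fy]
```

Because the Jacobian depends on the current joint positions, the same desired foot force produces different joint torques as the leg moves, which is why the transform has to run at the full 400Hz rate.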
While not conceptually too different, just controlling the system in 3D at a high rate gives significantly improved results for a range of walking parameters. There is still a lot left to do, but it is a good start!
After getting all the legs swapped out, I ran my existing software to validate that all the pieces worked together. Here’s a quick video showing basically what I’ve shown before, but with all new hardware:
After I initially assembled the new legs onto the chassis, I realized I had the geometry slightly off and there was some interference through part of the shoulder rotation. I made up new printed parts and replaced everything in front of the camera. Thus, watch some high speed robot surgery:
The quad A1’s first job is to validate the new moteus controller in the quadrupedal configuration, after which I’ll use it as the primary development platform to get all my gait work done.
Now that the IMU is functioning, my next step is to use that to produce an attitude estimate. Here, I dusted off my unscented Kalman filter based estimator from long ago, and adapted it slightly to run on an STM32. As before, I used a UKF instead of the more traditional EKF not because of its superior filtering performance, but because of the flexibility it allows with the process and measurement functions. Unlike the EKF, the UKF is purely numerical, so no derivation of Jacobians is necessary. It turns out that even an STM32 has plenty of processing power to do this for things like a 7 state attitude filter.
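The "purely numerical" point above is the whole appeal: sigma points are pushed through the process function directly, so no derivative of it is ever needed. Here is a toy one-state predict step in that style; the real attitude filter has 7 states and correspondingly more sigma points, and this is only a conceptual sketch, not the filter's actual code:

```python
# Toy 1-D UKF predict step illustrating why no Jacobians are required:
# the (possibly nonlinear) process function f is simply evaluated at
# each sigma point. kappa is a standard UKF spread parameter.
import math

def ukf_predict_1d(x, P, f, Q, kappa=2.0):
    """One scalar UKF predict step with process function f, state
    variance P, and additive process noise Q."""
    n = 1
    spread = math.sqrt((n + kappa) * P)
    sigma = [x, x + spread, x - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    y = [f(s) for s in sigma]  # no derivative of f required
    x_new = sum(w * yi for w, yi in zip(weights, y))
    P_new = sum(w * (yi - x_new) ** 2
                for w, yi in zip(weights, y)) + Q
    return x_new, P_new
```

For a linear process this reproduces the Kalman prediction exactly, and for a nonlinear one it captures the mean and variance to higher order than a first-order EKF linearization would.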
One problem I encountered was that, by default, I have been building everything for the STM32 with the “-Os” optimization level. Unfortunately, with Eigen linear algebra routines, that is roughly 4x slower than “-O3”. Doubly unfortunately, just using copts at the rule level or --copt on the command line didn’t work: bazel doesn’t let you control the order of command line arguments very well, and the -Os always ended up *after* any of the additional arguments I tried to use to override it. To get it to work, I had to navigate some bazel toolchain mysteries in rules_mbed in order to allow build rules to specify if they optionally want the higher optimization instead of optimizing for size. I’m pretty sure this is not exactly what the with_features mechanism in the toolchain’s feature rule is for, but it let me create a feature called speedopt which turns on -O3 and turns off -Os. The final result is at rules_mbed/530fae6d8
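For readers unfamiliar with that corner of bazel, the shape of such a feature looks roughly like the following. This is a hypothetical sketch of the mechanism, not the actual rules_mbed definitions:

```starlark
# Hypothetical sketch of an opt-in optimization feature in a
# cc_toolchain_config; the real rules_mbed code differs in detail.
feature(
    name = "speedopt",
    flag_sets = [flag_set(
        actions = ["c++-compile"],
        flag_groups = [flag_group(flags = ["-O3"])],
    )],
)
# The default -Os flag_set is then guarded with with_features so it is
# dropped whenever speedopt is enabled, e.g.:
#   with_features = [with_feature_set(not_features = ["speedopt"])]
```

An individual build rule can then request it with `features = ["speedopt"]`, leaving everything else size-optimized.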
To date, I’ve only done some very zeroth order performance optimization. I spent 15 minutes parameter tuning, making sure that the covariances updated to approximately the correct levels and I added a simple filter to reject accelerometer updates during dynamic motion. I did just enough runtime performance to get an update down to around 300us, which is just fine for a filter intended to run at 1kHz. More will remain as future work.
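The accelerometer rejection mentioned above is a common trick: during dynamic motion the measured specific force differs from 1 g, so the attitude correction is simply skipped. A minimal sketch of such a gate, with an invented threshold:

```python
# Sketch of a simple accelerometer gate: accept the attitude update
# only when the measured magnitude is close to gravity, i.e. the body
# is not accelerating much. The threshold is an invented value.
import math

GRAVITY = 9.81                      # m/s^2
REJECT_THRESHOLD = 0.2 * GRAVITY    # hypothetical tolerance

def accel_usable(ax, ay, az):
    """Return True when the accelerometer reading is trustworthy as a
    gravity-direction measurement."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY) < REJECT_THRESHOLD
```

During rejected intervals the filter coasts on the gyros alone, which is fine for short bursts of motion since gyro drift accumulates slowly.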
Here’s a plot from a quick sanity check, where I manually rolled the device in alternating directions, then pitched it in alternating directions. (When pitching, it was on a somewhat springy surface, thus the ringing).
The pitch and roll are plenty smooth, although they appear to not quite return to their original positions. At some point, I will do a more detailed qualification to dial in the performance.
After getting the power to work, the next step in bringing up the new quad’s raspberry pi interface board is getting the FDCAN ports to work. As described in my last roadmap, this board has multiple independent FDCAN buses. There are 2 STM32G4s, each with 2 FDCAN buses, so that every leg gets a separate bus. There is a 5th auxiliary bus for any other peripherals, driven from a third STM32G4. All 3 of the STM32G4s communicate with the raspberry pi as SPI slaves.
Making this work was straightforward, if tedious. I designed a simple SPI-based protocol that would allow transmission and receipt of FDCAN frames at a high rate in a relatively efficient manner, then implemented that on the STM32s. On the raspberry pi side I initially used the linux kernel driver, but found that it didn’t give sufficient control over hold times during the transmission. Since the SPI slave is implemented in software, I needed to leave sufficient time after asserting the chip select and after transmitting the address bytes. The kernel driver gives no control over this at all, so I resorted to directly manipulating the BCM2837’s peripheral registers and busy-loop waiting in a real time thread.
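The timing constraint can be sketched as follows. The hold durations, the `spi` interface, and its method names are all invented for illustration; the real implementation pokes the BCM2837 registers directly from C++:

```python
# Sketch of the framing constraint described above: a software SPI
# slave needs dead time after chip select assertion and again after
# the address byte, so its interrupt handler can stage the payload.
import time

CS_HOLD_NS = 3000    # hypothetical settle time after chip select
ADDR_HOLD_NS = 6000  # hypothetical time for the slave to stage data

def busy_wait_ns(duration_ns):
    """Busy loop instead of sleeping: on an isolated real-time core
    this keeps timing tight where the kernel driver could not."""
    start = time.perf_counter_ns()
    while time.perf_counter_ns() - start < duration_ns:
        pass

def transfer(spi, address, payload):
    """One framed transaction against a hypothetical spi interface."""
    spi.assert_cs()
    busy_wait_ns(CS_HOLD_NS)
    spi.write_bytes(bytes([address]))
    busy_wait_ns(ADDR_HOLD_NS)
    reply = spi.exchange(payload)
    spi.release_cs()
    return reply
```

A kernel driver batches the whole transfer into one transaction with no way to pause mid-frame, which is why direct register access was needed here.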
After a decent supply of bugs were squashed, I got to a point where the host could send off 12 queries to all the servos with the four buses all being used simultaneously, then collating the responses back. I haven’t spent much time optimizing the cycle time, but the initial go around is at around 1.0ms for a query of all 12 devices which is about 1/3 of the 3.5ms I had in the previous single-bus RS485 version.
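The parallel structure that makes the 1.0ms cycle possible can be sketched like this. The bus layout and the query function are stand-ins, not the real transport:

```python
# Illustration of the query pattern above: queries go out on all buses
# simultaneously, then the responses are collated per servo id.
from concurrent.futures import ThreadPoolExecutor

def query_all(buses, query_one):
    """buses: {bus_name: [servo_ids]}. Query each bus in parallel and
    merge the replies into one {servo_id: reply} dict."""
    def run_bus(ids):
        return {sid: query_one(sid) for sid in ids}
    merged = {}
    with ThreadPoolExecutor(max_workers=len(buses)) as pool:
        for result in pool.map(run_bus, buses.values()):
            merged.update(result)
    return merged
```

With three servos per bus and four buses running concurrently, the cycle time is bounded by the slowest single bus rather than by the sum of all twelve queries, which is roughly where the 3.5x improvement over the single shared RS485 bus comes from.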
Here’s a scope trace of a full query cycle with 2 of the 4 CAN buses on the top, and the two chip selects on the bottom. Woohoo!