I’ve been developing a new bi-directional spread spectrum radio to command and control the mjbots quad robot. Here I’ll describe my first integration of the protocol into the robot.
To complete that integration, I took the library I had designed for the nrfusb, and ported it to run on the auxiliary controller of the pi3 hat. This controller also controls the IMU and an auxiliary CAN-FD bus. It is connected to one of the SPI buses on the raspberry pi. Here, it was just a matter of exposing an appropriate SPI protocol that would allow the raspberry pi to receive and transmit packets.
Slightly unfortunately, this version of the pi3hat does not have interrupt lines for any of the stm32s. Thus, I created a multiplexed status register that the rpi can use to check which of the CAN, IMU, or RF has data pending. Then I slapped together a few registers which allowed configuring the ID and reading and writing slots and their priorities.
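As a sketch of what that poll can look like from the raspberry pi side — the bit assignments and function names here are illustrative, not the actual pi3 hat register map:

```cpp
#include <cstdint>

// Illustrative bit assignments for the multiplexed status register;
// the real pi3 hat layout may differ.
enum StatusBits : uint8_t {
  kCanDataPending = 1 << 0,
  kImuDataPending = 1 << 1,
  kRfDataPending  = 1 << 2,
};

// Stand-in for a one-byte SPI register read; a real implementation
// would run an SPI transaction against the auxiliary STM32.
inline uint8_t ReadStatusRegister() {
  return kImuDataPending | kRfDataPending;  // simulated response
}

// The raspberry pi checks this single register first, and only then
// issues the more expensive per-peripheral SPI reads.
inline bool CanPending(uint8_t s) { return (s & kCanDataPending) != 0; }
inline bool ImuPending(uint8_t s) { return (s & kImuDataPending) != 0; }
inline bool RfPending(uint8_t s)  { return (s & kRfDataPending) != 0; }
```

One cheap read per poll cycle replaces three, which matters when every SPI transaction has fixed overhead.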
Then I refactored things on the raspberry pi side so that one core stays busy polling for any of those sources to become available. So far, I’ve been pinning the threads that access SPI to an isolcpus-reserved CPU to get more consistent SPI timing. Eventually, once I have interrupt lines, I might consolidate all of these down to a single core. That, plus defining an initial mapping between the controls and slots, resulted in:
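The pinning itself is ordinary Linux affinity control, nothing pi3 hat specific. A minimal sketch, assuming a core has been reserved with the `isolcpus=` kernel boot parameter:

```cpp
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single CPU, e.g. one reserved with the
// isolcpus= kernel boot parameter so the scheduler leaves it alone.
// Which CPU to use is configuration-dependent.
inline bool PinToCpu(int cpu) {
  cpu_set_t cpuset;
  CPU_ZERO(&cpuset);
  CPU_SET(cpu, &cpuset);
  return pthread_setaffinity_np(pthread_self(),
                                sizeof(cpuset), &cpuset) == 0;
}
```

With, say, `isolcpus=3` on the kernel command line, calling `PinToCpu(3)` from the SPI polling thread keeps ordinary processes from being scheduled onto that core and disturbing the timing.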
Finally, I created a very simple GL GUI application which connects to an nrfusb and a joystick. It uses Dear ImGui to render a few widgets and glfw for windowing and joystick input.
While I was at it, I finally updated my joystick UI to make gait selection a bit faster, and got the robot to do a better job of switching out of the walk gait. Thus, the following video shows all of that hooked together.
Thankfully, I’m now at the point where I’m fixing actual dynamics problems on the robot. Doubly thankfully, I have a robot which is pretty robust and keeps working! That said, it is still, shall we say, “non-ideal” to be testing code for the first time ever on a real robot.
Back with my HerkuleX-based Super Mega MicroBot, I had a working DART-based simulation which was decently accurate. However, the actuators for that machine were so limited that it didn’t really make sense to do any work in simulation. The only way to be effective with that machine was to tweak and tweak on the real platform and rely on exactly the right amount of bouncing and wiggling to get it moving smoothly.
Now that I can accurately control force at 400Hz and beyond, that isn’t a problem anymore, so I’m working to resurrect the bitrotted simulator. In the end, though, it turned out to be a complete rewrite, as basically nothing of the original made sense to use.
Here’s a video of the very first time it moved around in sim (which means there are still many problems left!):
While I was able to make the r2 power distribution board work, it did require quite a bit more than my usual number of blue wires and careful trace cutting.
Thus I spun a new revision r3, basically just to fix all the blue wires so that I could have some spares without having to worry about the robustness of my hot glue. While I was at it, I updated the logo:
As seems to be the way of things, a few days after I sent this board off to be manufactured, I realized that the CAN port needed to actually be isolated, since when the switches are off, the ground is disconnected from the rest of the system. Sigh. Guess that will wait for r4.
Here is r3 all wired up into the chassis:
In order to bring up the final piece of the raspberry pi 3 hat, the nrf24l01+, I wanted a desktop development platform that would allow for system bringup and also be useful as a PC side transmitter. Thus, the nrfusb:
Similar to the fdcanusb, it is just an STM32G474 on the USB bus, although this has a pin header for a common nrf24l01+ form factor daughterboard.
The next steps here are to get this working at all, then implement a spread spectrum bidirectional protocol for control and telemetry.
Although I restructured my control laws to take advantage of high-rate force feedback for the pronking experiments, I haven’t actually managed to port the walking gait to them yet. Now that I have a brand new robot, it seemed like a good time!
This gait is, in principle, basically the same thing as I ran on the quad A0. The opposing feet are picked up according to a rigid schedule and moved to a point opposite their “idle” position based on the current movement speed. Any feet that are planted on the ground just move with the inverse of the robot’s velocity.
What differs now is that the leg positions and forces are controlled in 3D at a high rate, 400Hz for now. At each time step, the position and velocity of all 12 joints are measured. The gait algorithm calculates a desired 3D position, velocity, and force for each leg. Feedforward force is currently only used for the weight-supporting legs. Then those 3D parameters are transformed into a joint position, velocity, and force based on the current joint position, and the command is sent out.
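The force half of that per-timestep transformation can be sketched with a simplified 2-link planar leg (not the robot’s actual geometry): the Jacobian maps joint velocities to Cartesian foot velocities, and its transpose maps a desired Cartesian foot force back to joint feedforward torques.

```cpp
#include <array>
#include <cmath>

// Illustrative 2-link planar leg: link lengths l1, l2, joint angles
// q1, q2.  The real quad has 3 joints per leg in 3D.
struct Leg { double l1, l2; };

// Jacobian of the foot position with respect to the joint angles.
inline std::array<std::array<double, 2>, 2> Jacobian(
    const Leg& leg, double q1, double q2) {
  const double s1 = std::sin(q1), c1 = std::cos(q1);
  const double s12 = std::sin(q1 + q2), c12 = std::cos(q1 + q2);
  return {{{-leg.l1 * s1 - leg.l2 * s12, -leg.l2 * s12},
           { leg.l1 * c1 + leg.l2 * c12,  leg.l2 * c12}}};
}

// Map a desired Cartesian foot force (fx, fy) to joint feedforward
// torques: tau = J^T * F.  The inverse of the same Jacobian maps
// Cartesian velocities to joint velocities.
inline std::array<double, 2> JointTorque(
    const Leg& leg, double q1, double q2, double fx, double fy) {
  const auto J = Jacobian(leg, q1, q2);
  return {J[0][0] * fx + J[1][0] * fy,
          J[0][1] * fx + J[1][1] * fy};
}
```

Running this once per control cycle, per leg, at 400Hz is what turns a Cartesian gait plan into the joint position/velocity/force commands the servos actually accept.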
While not conceptually too different, just controlling the system in 3D at a high rate gives significantly improved results for a range of walking parameters. There is still a lot left to do, but it is a good start!
(also, a limited number of the qdd100 servos are now open for sale to beta testers at shop.mjbots.com, https://shop.mjbots.com/product/qdd100-beta-developer-kit/)
After getting all the legs swapped out, I ran my existing software to validate that all the pieces worked together. Here’s a quick video showing basically what I’ve shown before, but with all new hardware:
Now that the IMU is functioning, my next step is to use it to produce an attitude estimate. Here, I dusted off my unscented Kalman filter (UKF) based estimator from long ago and adapted it slightly to run on an STM32. As before, I used a UKF instead of the more traditional extended Kalman filter (EKF) not because of its superior filtering performance, but because of the flexibility it allows with the process and measurement functions. Unlike the EKF, the UKF is purely numerical, so no derivation of Jacobians is necessary. It turns out that even an STM32 has plenty of processing power to do this for something like a 7-state attitude filter.
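The “no Jacobians” property is easiest to see in a toy 1-state unscented transform: the process function is only ever *evaluated* at sigma points, so it can be arbitrary code. This is just the scalar skeleton of the idea, not the 7-state filter, and the weighting scheme here is one simple choice among several.

```cpp
#include <cmath>
#include <vector>

// Toy 1-state unscented transform: propagate sigma points through a
// (possibly nonlinear) function f and recover the transformed mean
// and covariance purely numerically.
struct Moments { double mean; double cov; };

inline Moments UnscentedTransform(double mean, double cov,
                                  double (*f)(double),
                                  double kappa = 2.0) {
  const int n = 1;                       // state dimension
  const double lambda = kappa;           // simple scaling choice
  const double spread = std::sqrt((n + lambda) * cov);

  // 2n+1 sigma points and their weights.
  const std::vector<double> sigma = {mean, mean + spread, mean - spread};
  const std::vector<double> w = {lambda / (n + lambda),
                                 0.5 / (n + lambda), 0.5 / (n + lambda)};

  double new_mean = 0.0;
  for (size_t i = 0; i < sigma.size(); i++) {
    new_mean += w[i] * f(sigma[i]);
  }
  double new_cov = 0.0;
  for (size_t i = 0; i < sigma.size(); i++) {
    const double d = f(sigma[i]) - new_mean;
    new_cov += w[i] * d * d;
  }
  return {new_mean, new_cov};
}
```

In the full filter the same pattern runs over a 7-dimensional state with a Cholesky factor in place of the scalar square root, but `f` remains a plain function call — no derivatives anywhere.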
One problem I encountered was that by default I have been building everything for the STM32 with the `-Os` optimization level. Unfortunately, with Eigen linear algebra routines, that is roughly 4x slower than `-O3`. Doubly unfortunately, just using `copts` at the rule level or `--copt` on the command line didn’t work: bazel doesn’t let you control the order of command line arguments very well, and the `-Os` always ended up *after* any of the additional arguments I tried to use to override it. To get it to work, I had to navigate some bazel toolchain mysteries in rules_mbed in order to allow build rules to specify whether they optionally want the higher optimization instead of optimizing for size. I’m pretty sure this is not exactly what the `with_features` mechanism in the toolchain’s `feature` rule is for, but it let me create a feature called `speedopt` which turns on `-O3` and turns off `-Os`. The final result is at rules_mbed/530fae6d8
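For reference, a sketch of what such a pair of features can look like using bazel’s `cc_toolchain_config_lib` primitives; the names and exact structure of the rules_mbed version are assumptions here, not a copy of it:

```python
load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
load(
    "@rules_cc//cc:cc_toolchain_config_lib.bzl",
    "feature",
    "flag_group",
    "flag_set",
    "with_feature_set",
)

# Opt-in feature that a build rule can request with
# features = ["speedopt"].
speedopt = feature(
    name = "speedopt",
    flag_sets = [
        flag_set(
            actions = [ACTION_NAMES.c_compile, ACTION_NAMES.cpp_compile],
            flag_groups = [flag_group(flags = ["-O3"])],
        ),
    ],
)

# Default size optimization, suppressed whenever speedopt is enabled.
default_opt = feature(
    name = "default_opt",
    enabled = True,
    flag_sets = [
        flag_set(
            actions = [ACTION_NAMES.c_compile, ACTION_NAMES.cpp_compile],
            flag_groups = [flag_group(flags = ["-Os"])],
            with_features = [with_feature_set(not_features = ["speedopt"])],
        ),
    ],
)
```

The `with_feature_set(not_features = ...)` guard is what keeps `-Os` out of the command line entirely when `speedopt` is on, which sidesteps the argument-ordering problem.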
To date, I’ve only done some very zeroth-order performance optimization. I spent 15 minutes on parameter tuning, making sure that the covariances updated to approximately the correct levels, and I added a simple filter to reject accelerometer updates during dynamic motion. I did just enough runtime performance work to get an update down to around 300us, which is just fine for a filter intended to run at 1kHz. More will remain as future work.
Here’s a plot from a quick sanity check, where I manually rolled the device in alternating directions, then pitched it in alternating directions. (When pitching, it was on a somewhat springy surface, thus the ringing).
The pitch and roll are plenty smooth, although they appear to not return exactly to their original positions. At some point, I will do a more detailed qualification to dial in the performance.