I saw a recent Skyentific video and decided to have a try at it myself; check out the result:
Last time I covered the new software library that I wrote to help use all the features of the pi3hat in an efficient manner. This time, I'll cover how I measured the performance of the result, and talk about how it can be integrated into a robotic control system.
To check out the timing, I wired up a pi3hat into the quad A1 and used an oscilloscope to probe one of the SPI clocks and CAN buses 1 and 3.
Then I could use pi3hat_tool incantations to experiment with different bus utilization strategies and find the one with the best performance. The sequence that I settled on was:
- Write all outgoing CAN messages, using a round-robin strategy between CAN buses. The SPI bus rate of 10 MHz is faster than the 5 Mbps maximum CAN-FD rate, so this gets each bus transmitting its first packet as soon as possible, then queues up the remainder.
- Read the IMU. During this phase, any replies over CAN are being enqueued on the individual STM32 processors.
- Optionally read CAN replies. If any outgoing packets were marked as expecting a reply, that bus is expected to receive the appropriate number of responses. Additionally, a bus can be requested to “get anything in the queue”.
With this approach, a full command and query of the comprehensive state of 12 qdd100 servos, plus reading the IMU, takes around 740us. If you perform that on one thread while running robot control on others, it allows you to achieve a 1kHz update rate.
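To make that concrete, here's a minimal sketch of that thread split, with made-up function names rather than code from the actual quad A1 application: one thread owns the pi3hat bus cycle at a fixed 1kHz period, while the control threads consume whatever state it last published.

```c++
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> done{false};

// Placeholder for the ~740us omnibus operation described above:
// write all outgoing CAN frames round-robin, read the IMU while the
// replies queue up on the STM32s, then read the replies back.
void RunBusCycle() {
  // ... the libpi3hat cycle and publishing of results would go here ...
}

void BusThread() {
  using clock = std::chrono::steady_clock;
  auto next = clock::now();
  while (!done.load()) {
    RunBusCycle();
    next += std::chrono::milliseconds(1);  // 1kHz update rate
    std::this_thread::sleep_until(next);
  }
}

int main() {
  std::thread bus(BusThread);
  // ... robot control runs on other threads in the meantime ...
  std::this_thread::sleep_for(std::chrono::seconds(1));
  done.store(true);
  bus.join();
}
```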
These results were with the Raspberry Pi 3b+. On a Raspberry Pi 4, they seem to be about 5% better, mostly because the Pi 4’s faster CPU is able to execute the register twiddling a little faster, which reduces dead time on the SPI bus.
The pi3hat r4.2, now in the mjbots store, has only minor hardware changes from the r4 and r4.1 versions. What has changed in a bigger way is the firmware, and the software that is available to interface with it. The interface software for the previous versions was tightly coupled to the quad A1's overall codebase, which made it basically impossible to use without significant rework. So, that rework is what I've done with the new libpi3hat library:
It consists of a single C++11 header and source file with no dependencies aside from the standard C++ library and bcm_host.h from the Raspberry Pi firmware. You can build it using the bazel build files, or just copy the source file into your own project and build it with whatever system you are using.
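For example, skipping bazel, a direct compile might look something like the line below. The include and library paths and the pi3hat.cc file name are my guesses for a stock Raspberry Pi OS install with the VideoCore userland in /opt/vc, so adjust them to match your system:

```
# g++ -std=c++11 -I/opt/vc/include -o my_app my_app.cc pi3hat.cc \
      -L/opt/vc/lib -lbcm_host
```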
Using all of the pi3hat's features in a runtime-performant way can be challenging, but libpi3hat makes it not so bad by providing an omnibus call which sequences accesses to all the CAN buses and peripherals in a way that maximizes pipelining and overlap between the different operations while keeping the SPI bus as busy as possible. The downside is that it does not use the Linux kernel drivers for SPI and thus requires root access to run. For most robotic applications, that isn't a problem, as the controlling computer is doing nothing but control anyway.
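In application code, a single omnibus cycle ends up looking roughly like the sketch below. I'm writing the struct and field names from memory of the library's header, so treat them as approximate rather than as a reference:

```c++
#include "pi3hat.h"

using namespace mjbots::pi3hat;

void OneCycle(Pi3Hat* pi3hat) {
  CanFrame tx[12] = {};   // one command frame per servo
  CanFrame rx[24] = {};   // room for the replies
  Attitude attitude;

  // ... fill in tx[i].bus / .id / .data / .size for each servo,
  //     marking the frames that should generate a reply ...

  Pi3Hat::Input input;
  input.tx_can = Span<CanFrame>(tx, 12);
  input.rx_can = Span<CanFrame>(rx, 24);
  input.request_attitude = true;
  input.attitude = &attitude;

  // One call writes all the buses, reads the IMU, then drains replies.
  const auto output = pi3hat->Cycle(input);

  // The returned structure reports how many frames landed in rx and
  // whether the attitude was filled in.
  (void)output;
}
```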
This design makes it feasible to operate at least 12 servos and read the IMU at rates over 1kHz on a Raspberry Pi.
There is a command line tool, pi3hat_tool, which provides a demonstration of how to use all the features of the library, as well as being a useful diagnostic tool on its own. For instance, it can be used to read the IMU state:
```
# ./pi3hat_tool --read-att
ATT w=0.999 x=0.013 y=-0.006 z=-0.029  dps=( 0.1, -0.1, -0.1)  a=( 0.0, 0.0, 0.0)
```
And it can be used to write to and read from the various CAN buses.
```
# ./pi3hat_tool --write-can 1,8001,1300,r \
                --write-can 2,8004,1300,r \
                --write-can 3,8007,1300,r
CAN 1,100,2300000400
CAN 2,400,2300000400
CAN 3,700,230000fc00
```
You can also do those at the same time in a single bus cycle:
```
# ./pi3hat_tool --read-att --write-can 1,8001,1300,r
CAN 1,100,2300000400
ATT w=0.183 x=0.692 y=0.181 z=-0.674  dps=( 0.1, -0.0,  0.1)  a=(-0.0, 0.0,-0.0)
```
Next up I’ll demonstrate my performance testing setup, and what kind of performance you can expect in a typical system.
I've now got the last custom board from the quad A1 up for sale in the mjbots store: the mjbots pi3 hat, for $129.
This board breaks out 4x 5Mbps CAN-FD ports, 1 low speed CAN port, a 1kHz IMU and a port for an nrf24l01. Despite its name, it works just fine with the Raspberry Pi 4 in addition to the 3B+ I have mostly tested with to date. I also have a new user-space library for interfacing with it that I will document in some upcoming posts. That library makes it pretty easy to use in a variety of applications.
Finally, as is customary with these boards, I made a video “getting started” guide:
Only 1 full year after it was released, I managed to get a Raspberry Pi 4 and test it out in the quad A1. I had been delaying doing so because of reports of thermal issues. The Pi 3B+ already ran a little hot and I didn’t want to have to add active cooling into the robot chassis to get it stable.
It looks like the Raspberry Pi engineers have been hard at work because the newer firmware releases have significantly reduced the overall power consumption and thus the thermal load. In my testing so far it only seems “a little” hotter than the 3b+.
The now somewhat misnamed "pi3hat" worked just fine with the Pi 4, with some minor changes to the software to support the new peripheral base address of the BCM2711 SoC in the Pi 4.
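One way to cope with that kind of change without hard-coding addresses is to ask the firmware, since bcm_host.h exposes the peripheral base directly. I'm not claiming this is exactly how libpi3hat handles it internally, but it shows the idea:

```c++
#include <bcm_host.h>
#include <cstdio>

int main() {
  // 0x20000000 on the original Pi, 0x3f000000 on the Pi 2/3,
  // and 0xfe000000 on the BCM2711-based Pi 4.
  const unsigned base = bcm_host_get_peripheral_address();
  std::printf("peripheral base: 0x%08x\n", base);
  return 0;
}
```

Mapping the SPI registers as offsets from that value, rather than from a fixed constant, is the kind of change that lets the same code run on both generations of Pi.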
This is part of a continuing series on updated diagnostic tools for the mjbots quad A1 robot. Previous editions are in 1, 2, 3, 4, 5, 6, and 7. Here I’ll be looking at one of the last pieces of the puzzle, synchronizing the video with the rest of the telemetry.
As mentioned previously, recording video of a robot running is an easy, cheap, and fast way to provide ground truth information on all of the sensors and actuators. However, it is only truly useful if it can be accurately synchronized in time to the other telemetry streams for the robot.
This was part of the puzzle that I spent a long time thinking about before I got started, as there were several possible options that seemed like they could work:
The concept here would be to put an LED beacon on the robot that is visible from all angles. It could strobe a synchronizing pattern, like the output from an LFSR which could be identified in the subsequent video frames.
Pros: This should be able to give frame accurate synchronization, and works even for my 1000 fps camera which can’t record audio.
Cons: It is hard to find a good place to mount a light which could be observed from all angles. The top is the best bet, but I have plans to attach further things there, which would then render synchronization infeasible.
In this concept, I put a microphone on the robot and have it record audio of the environment during its run. Then standard audio synchronization algorithms can be used to align the two streams. I actually included a microphone on the most recent version of the pi3 hat to potentially use this approach.
Pros: This has no visibility requirements, and should be able to give synchronization accuracy well under a single frame of video.
Cons: Getting the microphone data off the pi3 hat was looking to be moderately annoying, as the STM32 it is connected to is already streaming IMU and RF data back to the robot over its single SPI bus. When I brought up the board, I verified I could get 1kHz audio off of it, but that isn't enough to be useful.
This was the idea I had last, and what I am using now. Here, I slap the side of the robot in a semi-random pattern during the video. That results in an audio signature in the video, as well as lateral accelerometer readings.
Pros: No additional hardware or software is required anywhere on the robot.
Cons: This has worse accuracy than pure audio, as the IMU is only sampled at 400Hz and doesn’t perfectly correspond to the audio found in the video.
I took a stab at the IMU version, since it looked to be the easiest and still gave decent performance. I made up a simple Python tool which reads in the robot telemetry data and the audio stream of a video file, and lets the user select rough ranges for the audio and video streams to work from.
It then uses scipy.signal.correlate to do its best job of finding an alignment that best matches both data streams, producing a plot of the alignment.
As you can see, the audio rings out for some time after the IMU stops its high frequency response, largely due to the mechanical damping of the robot. However, it is enough for the correlation to work with and give frame accurate results.
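The tool itself is Python built around scipy.signal.correlate, but the core idea is small enough to show standalone. Here's a brute-force C++ sketch of the same cross-correlation search, assuming both signals have already been resampled to a common rate and roughly normalized:

```c++
#include <limits>
#include <vector>

// Returns the lag (in samples) of 'b' relative to 'a' that maximizes
// the raw cross-correlation.  A positive result means 'b' starts
// later than 'a'.
int BestLag(const std::vector<double>& a, const std::vector<double>& b) {
  int best_lag = 0;
  double best_score = std::numeric_limits<double>::lowest();
  const int na = static_cast<int>(a.size());
  const int nb = static_cast<int>(b.size());
  for (int lag = -(nb - 1); lag < na; lag++) {
    double score = 0.0;
    for (int i = 0; i < nb; i++) {
      const int j = lag + i;
      if (j < 0 || j >= na) { continue; }
      score += a[j] * b[i];
    }
    if (score > best_score) {
      best_score = score;
      best_lag = lag;
    }
  }
  return best_lag;
}
```

Dividing the resulting lag by the common sample rate gives the time offset to apply to the telemetry log.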
This post will be short, because it is just re-implementing the functionality I had in my turret versions 1 and 2, but this time using the Raspberry Pi as the master controller and two moteus controllers, one on each gimbal axis.
I have the Raspberry Pi running the primary control loop at 400Hz. At each time step it reads the IMU from the pi3 hat, and reads the current state of each servo (although it doesn't actually use the servo state at the moment). It then runs a simple PID control loop on each axis, aiming to achieve a desired position and rate, which results in a torque command that is sent to each servo. Here's the video proof!
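The per-axis math is about as simple as PID control gets. This sketch uses made-up gains and structure purely to show the shape of the 400Hz loop, not the turret's actual code:

```c++
struct AxisPid {
  double kp = 0.0;
  double kd = 0.0;
  double ki = 0.0;
  double integral = 0.0;

  // Position and rate errors in, torque command out.
  double Update(double pos_error, double rate_error, double dt) {
    integral += pos_error * dt;
    return kp * pos_error + kd * rate_error + ki * integral;
  }
};

// Every 2.5ms (400Hz), for each of the pitch and yaw axes:
//   1. read the IMU attitude and rates from the pi3 hat
//   2. form desired-minus-actual position and rate errors
//   3. send AxisPid::Update(...) as the torque command to that
//      axis's moteus controller
```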
Another of the tasks I've set for myself with regards to future Mech Warfare competitions is redesigning the turret. The previous turret I built had some novel technical features, such as active inertial gimbal stabilization and automatic optical target tracking; however, it had some problems too. The biggest one for my purposes now was that it still used the old RS485 based protocol and not the new CAN-FD based one. Second, the turret had some dynamic stability and rigidity issues. The magazine consisted of an aluminum tube sticking out of the top, which made the entire thing very top heavy. The 3d printed fork is the same one I had made at Shapeways 5 years ago. It is amazingly flexible in the lateral direction, which results in a lot of undesired oscillation if the base platform isn't perfectly stable. I've learned a lot about 3d printing and mechanical design in the meantime (but of course still have a seemingly infinite amount more to learn!) and think I can do better. Finally, cable management between the top and bottom was always challenging. You want to have a large range of motion, but keeping power and data flowing between the two rotating sections was never easy.
My concept with this redesign is twofold: first, make the turret basically an entirely separate robot with no wires connecting it to the main robot, and second, try to use as many of the components from the quad A1 as I could to demonstrate their, well, flexibility. Thus, this turret will have a separate battery, power distribution board, Raspberry Pi, pi3 hat, and a moteus controller for each axis of motion. These are certainly overkill, but hey, the quad A1 can carry a lot of weight.
The unique bits will be a standalone FPV camera, another camera attached to the Raspberry Pi for target tracking, a targeting laser, and the AEG mechanism, including a new board to manage the firing and loading functions.
And here’s a quick spin around video:
More to come…