Tag Archives: pi3hat

New cross-platform moteus tools!

After receiving many requests via youtube, discord, and email, I’ve finally gone ahead, bitten the bullet, and updated all of the moteus tools to be pure python and cross-platform. Now, the only thing you need to do to install pre-compiled versions of tview and moteus_tool on most* platforms is:

pip3 install moteus_gui
python3 -m moteus_gui.tview    # (or maybe just tview)
python3 -m moteus.moteus_tool  # (or maybe just moteus_tool)

I’ve personally tested these on Linux, Windows, and Raspberry Pi, and others have at least verified basic operation on Macs. Python 3.7 or greater is required.


But wait, there’s more!

Now moteus_tool, tview, and the python bindings more generally can all use python-can as a transport. That means tview can now be used with socketcan, pcan, and a bunch of other options. To one-up that, most users won’t have to specify any command line options at all, as tview and moteus_tool will automatically select an fdcanusb or python-can interface depending upon what is available.
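If you want to pick the python-can transport explicitly from python code, the general shape looks something like this. Treat it as a sketch: I’m assuming the wrapper is exposed as moteus.PythonCan and that it forwards keyword arguments like interface and channel straight through to python-can, so check the moteus README for the exact names.

import asyncio
import math
import moteus

async def main():
    # Assumed API: explicitly select a socketcan adapter instead of
    # letting the library auto-detect an fdcanusb.
    transport = moteus.PythonCan(interface='socketcan', channel='can0')
    c = moteus.Controller(id=1, transport=transport)
    print(await c.set_position(position=math.nan, query=True))

asyncio.run(main())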

I’ll be updating the devkit introduction video soon, although the commands in there will largely continue working for the time being.

Errata

  • Neither pypi nor piwheels has pyside2 for the Raspberry Pi, but it is packaged in Raspberry Pi OS. You can follow the instructions in git to find a recipe that works.
  • To use the pi3hat, you also need to run pip3 install moteus_pi3hat

moteus and socketcan

Various users have been trying lower-cost Raspberry Pi CAN-FD adapters with the moteus controller for some time (like this one from Seeed), but have had problems getting communication to work. I buckled down and went to debug the problem, and discovered that the root of the issue is that the Linux kernel socketcan subsystem calculates very sub-optimal CAN timings for the 5Mbps bitrate that moteus uses. As a result, the adapters cannot receive frames sent at the actual 5Mbps rate, only ones sent slightly slower.

The solution is to manually specify the bus timings when configuring the socketcan link. This makes the MCP2518FD boards work, as well as PEAK CAN-FD USB adapters (and probably every other socketcan CAN-FD adapter). You can find the timings in the moteus reference documentation: https://github.com/mjbots/moteus/blob/main/docs/reference.md#bit-timings
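For reference, the general shape of that configuration looks like the following. The numbers here are only my recollection of the documented values (a 25ns time quantum giving 1Mbps nominal and 5Mbps data rates); treat the linked reference documentation as authoritative.

sudo ip link set can0 up type can \
    tq 25 prop-seg 13 phase-seg1 12 phase-seg2 14 sjw 5 \
    dtq 25 dprop-seg 3 dphase-seg1 1 dphase-seg2 3 dsjw 3 \
    restart-ms 1000 fd on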

General socketcan improvements

As a result of all this debugging, I made some general improvements to socketcan support in all the client side moteus tools.

  1. There is now a documented commandline for invoking moteus_tool from socketcan: https://github.com/mjbots/moteus/blob/main/docs/reference.md#moteus_tool-configuration
  2. I released moteus and moteus_pi3hat 0.2.0 to pypi. These provide socketcan interfaces for python, transparently using them if no fdcanusb or pi3hat peripherals are found.

Thanks to everyone on discord for their patience as we worked through these compatibility issues!

Automated wire stripper and cutter

Over the Thanksgiving holiday, I knew I had a bunch of harnesses to build. Rather than being a good corporate steward and actually building them, I instead built a machine to automate the first of the 3 time-consuming parts of harness construction: wire cutting and stripping.

This was just thrown together from two cosmetically damaged moteus devkits, a Raspberry Pi 3, an old development version of a pi3hat, a hand wire stripper, two synthetic rubber bands, an off-the-shelf 24V supply, and a bunch of 3d printed parts.

Why?

Simple automated wire management at the DIY level is not new. It’s been done many, many, many times before. YouTube has decided that every day I need to see someone else’s take on the problem. Look down in the resources at the bottom for my collection of alternate solutions.

What differentiates this version is that (1) I built it mostly from junk parts I had around, and (2) since it uses brushless motors it can be both very fast and very precise. Here’s a clip of it executing a few cycles where it strips 3mm from the front end, pre-cuts 3mm from the other end, then cuts the wire to a total length of 5cm. The overall cycle time for all operations is around 1s per wire for the 30cm wires I needed right now.

By replacing the guides and doing some tuning, it should be capable of managing wire between 30 AWG and 18 AWG, although to date I’ve only tested it on 26 AWG.

It did take a bit longer than the weekend — I printed a second revision of everything early the following week, then waited for a panel mount switch to make the power supply look more professional.

Video

Here’s the overview video, with some more shots of it in operation.

Resources

The BOM, .3mf files, and source code are on github at https://github.com/jpieper/bstrip. There is a hackaday page here for discussion: https://hackaday.io/project/176211-bstrip-wire-cutstrip

Maybe someone else will find it useful?

Other DIY-style solutions

pip3 install moteus

I’m excited to announce new python bindings for communicating with moteus controllers! A simple example from the README:

import asyncio
import math
import moteus

async def main():
  c = moteus.Controller()
  print(await c.set_position(position=math.nan, query=True))
  await asyncio.sleep(1.0)

asyncio.run(main())

This code will try to locate an fdcanusb on your host and use it to communicate with the controller with ID 1. All of those details can be customized through code depending upon how you construct things. The library is pure python, although it doesn’t currently work on Windows because it relies on an asyncio-aware pyserial wrapper that doesn’t work there.
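For example, selecting a different controller ID or a specific fdcanusb device is just a matter of constructor arguments. The keyword names below are from memory, so double-check them against the README:

import asyncio
import math
import moteus

async def main():
    # Assumed arguments: a device path for Fdcanusb, and 'id' /
    # 'transport' for Controller.
    transport = moteus.Fdcanusb('/dev/fdcanusb')
    c = moteus.Controller(id=2, transport=transport)
    print(await c.set_position(position=math.nan, query=True))

asyncio.run(main())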

At the same time, there is a parallel python library “moteus-pi3hat” which only has an armv7l package. This provides an identical API for working with the pi3hat on a Raspberry Pi. It lets you configure which controllers are attached to which bus (by default it assumes everything is on bus #1). After setting that up you can use an identical API to command and monitor the controllers.
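A rough sketch of that configuration is below. I’m assuming the transport class is moteus_pi3hat.Pi3HatRouter and that the mapping argument is called servo_bus_map; consult that package’s README for the exact API.

import asyncio
import math
import moteus
import moteus_pi3hat

async def main():
    # Assumed API: a transport that knows which controller IDs live on
    # which of the pi3hat's CAN buses.
    transport = moteus_pi3hat.Pi3HatRouter(
        servo_bus_map={
            1: [11, 12],  # two controllers on CAN bus 1
            2: [13],      # one controller on CAN bus 2
        })
    c = moteus.Controller(id=11, transport=transport)
    print(await c.set_position(position=math.nan, query=True))

asyncio.run(main())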

Thanks to everyone in discord who helped test!

Measuring the pi3hat r4.2 performance

Last time I covered the new software library that I wrote to help use all the features of the pi3hat in an efficient manner. This time, I’ll cover how I measured the performance of the result, and talk about how it can be integrated into a robotic control system.

pi3hat r4.2 available at mjbots.com

Test Setup

To check out the timing, I wired a pi3hat into the quad A1 and used an oscilloscope to probe one of the SPI clocks and CAN buses 1 and 3.

Then, I could use pi3hat_tool incantations to experiment with different bus utilization strategies and get to one with the best performance. The sequence that I settled on was:

  1. Write all outgoing CAN messages, using a round-robin strategy between CAN buses. The SPI bus rate of 10MHz is faster than the 5Mbps maximum CAN-FD rate, so this gets each bus transmitting its first packet as soon as possible, then queues up the remainder.
  2. Read the IMU. During this phase, any replies over CAN are being enqueued on the individual STM32 processors.
  3. Optionally read CAN replies. If any outgoing packets were marked as expecting a reply, that bus is expected to receive the appropriate number of responses. Additionally, a bus can be requested to “get anything in the queue”.

With this approach, a full command and query of the comprehensive state of 12 qdd100 servos, plus reading the IMU, takes around 740us. If you perform that on one thread while performing robot control on others, it allows you to achieve a 1kHz update rate.

CAN1 SPI clock on bottom, CAN1 and CAN3 bus on top

These results were with the Raspberry Pi 3b+. On a Raspberry Pi 4, they seem to be about 5% better, mostly because the Pi 4’s faster CPU is able to execute the register twiddling a little faster, which reduces dead time on the SPI bus.

Bringing up the pi3hat r4.2

The pi3hat r4.2, now in the mjbots store, has only minor hardware changes from the r4 and r4.1 versions. What has changed in a bigger way is the firmware, and the software that is available to interface with it. The interface software for the previous versions was tightly coupled to the quad A1’s overall codebase, which made it basically impossible to use without significant rework. So, that rework is what I’ve done with the new libpi3hat library:

It consists of a single C++11 header and source file with no dependencies aside from the standard C++ library and bcm_host.h from the Raspberry Pi firmware. You can build it using the bazel build files, or just copy the source file into your own project and build with whatever system you are using.

Performance

Using all of the pi3hat’s features in a runtime-performant way can be challenging, but libpi3hat makes it not so bad by providing an omnibus call which sequences accesses to all the CAN buses and peripherals in a way that maximizes pipelining and overlap between the different operations, while simultaneously maximizing the usage of the SPI bus. The downside is that it does not use the Linux kernel drivers for SPI and thus requires root access to run. For most robotic applications, that isn’t a problem, as the controlling computer is doing nothing but control anyway.

This design makes it feasible to operate at least 12 servos and read the IMU at rates over 1kHz on a Raspberry Pi.

pi3hat_tool

There is a command line tool, pi3hat_tool, which provides a demonstration of how to use all the features of the library, as well as being a useful diagnostic tool on its own. For instance, it can be used to read the IMU state:

# ./pi3hat_tool --read-att
ATT w=0.999 x=0.013 y=-0.006 z=-0.029  dps=(  0.1, -0.1, -0.1) a=( 0.0, 0.0, 0.0)

And it can be used to write and read from the various CAN buses.

# ./pi3hat_tool --write-can 1,8001,1300,r \
                --write-can 2,8004,1300,r \
                --write-can 3,8007,1300,r
CAN 1,100,2300000400
CAN 2,400,2300000400
CAN 3,700,230000fc00

You can also do those at the same time in a single bus cycle:

# ./pi3hat_tool --read-att --write-can 1,8001,1300,r
CAN 1,100,2300000400
ATT w=0.183 x=0.692 y=0.181 z=-0.674  dps=(  0.1, -0.0,  0.1) a=(-0.0, 0.0,-0.0)

Next steps

Next up I’ll demonstrate my performance testing setup, and what kind of performance you can expect in a typical system.

New product Monday: pi3hat

I’ve now got the last custom board from the quad A1 up in the mjbots store for sale, the mjbots pi3 hat for $129.

This board breaks out 4x 5Mbps CAN-FD ports, 1 low speed CAN port, a 1kHz IMU, and a port for an nrf24l01. Despite its name, it works just fine with the Raspberry Pi 4 in addition to the 3b+ I have mostly tested with to date. I also have a new user-space library for interfacing with it that I will document in some upcoming posts. That library makes it pretty easy to use in a variety of applications.

Finally, as is customary with these boards, I made a video “getting started” guide:

Raspberry Pi 4

Only 1 full year after it was released, I managed to get a Raspberry Pi 4 and test it out in the quad A1. I had been delaying doing so because of reports of thermal issues. The Pi 3B+ already ran a little hot and I didn’t want to have to add active cooling into the robot chassis to get it stable.

It looks like the Raspberry Pi engineers have been hard at work because the newer firmware releases have significantly reduced the overall power consumption and thus the thermal load. In my testing so far it only seems “a little” hotter than the 3b+.

The now somewhat misnamed “pi3hat” worked just fine with the Pi 4, with some minor changes to the software to support the new peripheral base address of its BCM2711 SoC.

Yes, you can see the USB3 ports there

Video and telemetry synchronization (diagnostics part 8)

This is part of a continuing series on updated diagnostic tools for the mjbots quad A1 robot.  Previous editions are in 1, 2, 3, 4, 5, 6, and 7.  Here I’ll be looking at one of the last pieces of the puzzle, synchronizing the video with the rest of the telemetry.

As mentioned previously, recording video of a robot running is an easy, cheap, and fast way to provide ground truth information on all of the sensors and actuators.  However, it is only truly useful if it can be accurately synchronized in time to the other telemetry streams for the robot.

Options

This was part of the puzzle that I spent a long time thinking about before I got started, as there are several possible options that seemed like they could maybe work:

Visual

The concept here would be to put an LED beacon on the robot that is visible from all angles.  It could strobe a synchronizing pattern, like the output from an LFSR which could be identified in the subsequent video frames.

Pros: This should be able to give frame accurate synchronization, and works even for my 1000 fps camera which can’t record audio.

Cons: It is hard to find a good place to mount a light which could be observed from all angles.  The top is the best bet, but I have plans to attach further things there, which would then render synchronization infeasible.

Audio

In this concept, I put a microphone on the robot and have it record audio of the environment during its run.  Then standard audio synchronization algorithms can be used to align the two streams.  I actually included a microphone on the most recent version of the pi3 hat to potentially use this approach.

Pros: This has no visibility requirements, and should be able to give synchronization accuracy well under a single frame of video.

Cons: Getting the microphone data off the pi3 hat was looking to be moderately annoying, as the STM32 it is connected to is already streaming IMU and RF data back to the robot over its single SPI bus.  When I brought up the board, I verified I could get 1kHz audio off of it, but that isn’t enough to be useful.

IMU

This was the idea I had last, and what I am using now.  Here, I slap the side of the robot in a semi-random pattern during the video.  That results in an audio signature in the video, as well as lateral accelerometer readings.

Pros: No additional hardware or software is required anywhere on the robot.

Cons: This has worse accuracy than pure audio, as the IMU is only sampled at 400Hz and doesn’t perfectly correspond to the audio found in the video.

Implementation

I took a stab at the IMU version, since it looked to be the easiest and still gave decent performance.  I made up a simple python tool which reads in the robot telemetry data and the audio stream of a video file, and lets the user select rough ranges of each to work from.

It then uses scipy.signal.correlate to do its best job of finding an alignment that best matches both data streams, producing a plot of the alignment.
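A minimal sketch of that idea is below; the file names and the filtering choices are mine for illustration and not necessarily what the actual tool does.

import numpy as np
import scipy.io.wavfile
import scipy.signal

RATE = 400.0  # compare at the IMU sample rate, Hz

def envelope(x, fs):
    # Rectify and low-pass to get a rough energy envelope.
    b, a = scipy.signal.butter(2, 50.0 / (0.5 * fs))
    return scipy.signal.filtfilt(b, a, np.abs(x))

# Audio extracted from the video (hypothetical file name).
audio_fs, audio = scipy.io.wavfile.read('video_audio.wav')
audio = audio.astype(float)
if audio.ndim > 1:
    audio = audio.mean(axis=1)
audio_env = scipy.signal.resample(
    envelope(audio, audio_fs), int(len(audio) * RATE / audio_fs))

# Lateral accelerometer channel exported from the telemetry log,
# assumed uniformly sampled at 400Hz (hypothetical file name).
imu_accel = np.loadtxt('imu_lateral_accel.csv')
imu_env = envelope(imu_accel - np.mean(imu_accel), RATE)

# The peak of the cross-correlation gives the offset of the IMU
# segment within the audio.
corr = scipy.signal.correlate(audio_env, imu_env, mode='full')
lag = np.argmax(corr) - (len(imu_env) - 1)
print('telemetry starts %.3fs into the video' % (lag / RATE))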

(Plot: alignment of the video audio and IMU data produced by the tool)

As you can see, the audio rings out for some time after the IMU stops its high frequency response, largely due to the mechanical damping of the robot.  However, it is enough for the correlation to work with and give frame accurate results.