Tag Archives: rpi

Improving the moteus update rate, part 2

Back in part 1, I looked at the driving factors that limited the update rate of the full quadruped.  Now in part 2, I’ll cover the first half of the solution.

Background

To begin with, there were two major paths I could take based on the network topology.  In one, I would remove the active bridging capability from the junction board and rely on the Raspberry Pi to drive all the servos directly; in the other, the active bridge would stick around.  Each approach had key disadvantages:

Passive bridge: In this model, the Raspberry Pi has no choice but to rapidly turn around 12 separate queries and responses.  There is no hope of parallelization.

Active bridge: Here, the junction board’s STM32 can offload the multiple queries.  However, there are two big downsides.  The first is that the data must flow across 2 separate RS485 busses.  The second, and possibly more problematic, is that it only works out better if the junction board can stream a large amount of data consecutively to the Raspberry Pi.  In my previous experiments, I had run into what I believe was a kernel bug that, upon receive overruns, killed the serial port until a power cycle.  Debugging that could easily be a large project.  Implementing the active bridge would also be a lot of work, as I don’t currently have a protocol client that runs on the STM32.

My initial back of the envelope calculations, surprisingly, indicated that both approaches could potentially get up to 250 or 300Hz with sufficient margin to still do other things.  I had expected that parallelization would win the day, but it turned out that duplicating the data across both busses had the potential to completely negate that advantage.

Solution steps (first half)

1. RT_PREEMPT

The first thing I did was switch to an RT_PREEMPT enabled kernel with the CPU governors set to performance.  This by itself reduced the Raspberry Pi’s reply-to-query turnaround from 200us to 100us.  I also wanted to upgrade to the 4.19 kernel at this point, as I hoped it would have some fixes for high serial rates.  However, it doesn’t look like anyone has an RT enabled 4.19 kernel that supports USB and ethernet, both of which are moderately useful on a board with not many other interfaces.

2. “Passive” bridging

In lieu of spinning a new board right away, I instead modified the firmware of the junction board to remove almost all functionality and just busy-loop, shoving individual bytes between the interfaces (a sketch of that loop is below).  The STM32 is just barely fast enough to achieve this at 3Mbit with interrupts off.  The downside is that the IMU won’t be usable.  To fix that, I’ll have to spin a new board that just passively wires all the busses together and dangles the STM32 off the bus like any other node, and hope that the star topology won’t matter over such short distances.
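Here is a minimal sketch of that loop (illustrative, not the actual junction board firmware), assuming an STM32F4-style USART with SR/DR registers and the CMSIS headers; RS485 direction-pin management and error handling are omitted:

#include "stm32f4xx.h"

void BridgeForever() {
  // With interrupts off, nothing can preempt the loop and drop a
  // byte at 3Mbit.
  __disable_irq();
  for (;;) {
    // Reading DR clears RXNE.  Both sides run at the same baud rate,
    // so the transmitter drains at least as fast as bytes arrive and
    // no TXE check is needed.
    if (USART1->SR & USART_SR_RXNE) { USART2->DR = USART1->DR; }
    if (USART2->SR & USART_SR_RXNE) { USART1->DR = USART2->DR; }
  }
}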

Doing this removed the double transmission penalty, as well as the additional 90us latency in the junction board on all transfers.  This, combined with RT_PREEMPT, got the overall cycle time down to about 6727us, or ~150Hz.

3. Controller turnaround time

Next, I started in on the moteus firmware, to improve the turnaround time between when a query is sent and when the corresponding reply is generated.  This resulted in many optimizations throughout the code.  The first set were made to the primary control ISR, which currently runs at 30kHz; tiny improvements here make everything else run much faster.

  • Constructors: The ARM gcc toolsuite, regardless of optimization level, seems to implement the constructor of an all-zero structure as a call to memset, even when the structure is only a small number of bytes.  There were several places in the firmware where a structure was zeroed out by calling its default constructor.  Switching that to a “clear” method which manually assigned all the fields drastically reduced the time spent there (see the sketch after this list).
  • Appropriate types: When calculating a smoothed velocity, the filter was using an int64_t as an intermediate variable, which is not very efficient on an ARM.  Nothing more than an int32_t was actually needed.
  • fmod: I was wrapping between zero and two pi by using fmod.  The cases where wrapping occurs were never more than one or two phases off, so just dividing and truncating was much faster with no appreciable loss of precision (also shown in the sketch below).
  • Pre-computing some variables: Some of the calculations done in the ISR were repeated unnecessarily, so now they are only re-computed upon a configuration change.
  • Make debug output optional: The moteus board has a high rate RS422 debug output, which is only rarely used.  I added a configuration knob to turn it off when not in use.
  • SPI overhead: I was using the ST HAL API to access the position sensor over SPI.  I still do that for setup, but now just twiddle the raw registers to do the actual transfer, which saves a little bit of overhead in the HAL.
  • SPI frequency: The AS5047P has a nominal SPI frequency limit of 10MHz.  I asked the HAL for 10MHz, but the closest it could actually do was 6MHz.  I decided to brave some flakiness and upped it to 12MHz to shave another microsecond off.
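To make the first and third items concrete, here is a minimal sketch in the spirit of those changes (illustrative, not the actual moteus source):

#include <cstdint>

struct Status {
  float position = 0.0f;
  float velocity = 0.0f;
  uint32_t fault = 0;

  // "status = Status();" can compile into a memset() call even for a
  // structure this small; assigning each field avoids the library
  // call inside the ISR.
  void clear() {
    position = 0.0f;
    velocity = 0.0f;
    fault = 0;
  }
};

// Wrap an angle into [0, 2*pi).  The input is never more than a phase
// or two out of range, so a truncating divide beats fmodf() with no
// appreciable loss of precision.
inline float WrapZeroToTwoPi(float value) {
  constexpr float kTwoPi = 6.28318530718f;
  const auto n = static_cast<int32_t>(value / kTwoPi);
  float result = value - n * kTwoPi;
  if (result < 0.0f) { result += kTwoPi; }
  return result;
}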

After these changes, the ISR uses around 7-8us when stopped, and around 13us per iteration when in position control mode, or around 40% of the CPU budget.

Next I made more optimizations to the software which runs in the main loop:

  • StaticFunction: I was using a home-grown solution for a bounded-size, type-erased function callback that wasn’t particularly fast.  I switched it out for the SG14 inplace_function from github, modified to allow shrinking to a smaller type.  That sped up everything in the main loop by a fair amount (an example follows this list).
  • Protocol parsing: A number of steps in the protocol parsing and emitting eventually delegated to a call to “memcpy” into a particular type.  I broke some abstractions so that all that parsing and emitting eventually compiles down to just loads and stores.
  • Reply encoding: The RS485 register protocol allows clients to query multiple consecutive registers.  As a shortcut, the server had been responding to those with a series of single-register replies.  I fixed that to send one combined multi-register reply, shaving off a few bytes and some processing.
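As an example of the first item, usage of the SG14 inplace_function looks roughly like this (the 16 byte capacity and the async read API are illustrative; the header path is as shipped in the WG21-SG14 github repository):

#include <cstdio>
#include "sg14/inplace_function.h"

// A type-erased callback with a fixed-size in-place capture buffer:
// no heap allocation, and invocation is just an indirect call.
using ReadCallback = stdext::inplace_function<void(int), 16>;

// Stand-in for an async serial API that completes immediately.
void StartRead(ReadCallback callback) { callback(0); }

int main() {
  int completed = 0;
  StartRead([&completed](int error) {
    if (error == 0) { ++completed; }
  });
  std::printf("completed=%d\n", completed);
}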

I experimented with using the STM32F446’s built-in CRC unit; however, it only implements a fixed CRC-32 variant, so switching to it would have required changing the protocol checksum and reflashing all my bootloaders.  Taking advantage of a hardware unit is probably best deferred to an STM32F7 or STM32G4, which have more configurable CRC units.  I also tried hand-assembling an optimized version of the CCITT16 checksum I was using, but was only able to achieve the same 8 cycles per byte that boost already had.
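For reference, the boost version of a CCITT16-style checksum is only a few lines (a sketch; the exact polynomial and initial value used by the moteus protocol may differ):

#include <cstdint>
#include <cstdio>
#include <boost/crc.hpp>

int main() {
  const uint8_t frame[] = {0x01, 0x02, 0x03, 0x04, 0x05};
  // crc_ccitt_type is boost's 16-bit CCITT variant, computed in
  // software a byte at a time.
  boost::crc_ccitt_type crc;
  crc.process_bytes(frame, sizeof(frame));
  std::printf("checksum=0x%04x\n", static_cast<unsigned>(crc.checksum()));
}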

With all of these changes, the average turnaround time for a single servo was down from 140us to the 70-80us range for a full control update.

Next steps

Coming up, in part 3, I’ll cover the remainder of the steps I took to improve the overall update rate.

 

Improving the moteus update rate, part 1

The moteus brushless controller I’ve developed for the force controlled quadruped uses an RS485 based command-response communication protocol.  To complete a full control cycle, the controlling computer needs to send new commands to each servo and read the current state back from each of them.  While I designed the system to be capable of high rate all-system updates, my initial implementation took a lot of shortcuts.  The result is that for all my testing so far, the outgoing update rate has been 100Hz, but state read back from the servos has been more like 10Hz.  Here I’ll cover my work to make that rate both symmetric and higher.

In this first post, I’ll cover the existing design and how that drives the update rate limitations.

Individual contributors

There are many pieces that chain together to determine the overall cycle time.  Here is my best estimate of each.

RS485 bitrate

The RS485 protocol that I’m using right now runs at 3,000,000 baud half duplex.  That means it can push about 300k bytes per second in one direction or the other.  While the STM32 in the moteus has UARTs capable of going faster than that, control computers that can manage much faster than 3Mbit are rare, so without switching to another transport like ethernet, this is about as good as it will get.

This means that, at a minimum, every transmission incurs a latency proportional to the amount of data: roughly bytes * 10 / 3000000 seconds (10 bit times per byte, counting the start and stop bits).

Servo turnaround

The RS485 protocol moteus uses allows for unidirectional or bidirectional commands.  In past experiments, all the control commands were sent in a group as unidirectional commands, then the state was queried in a series of separate command-reply sequences.  The firmware of the moteus servo currently takes around 140us from when a command finishes transmission to when the corresponding reply starts.  The ideal turnaround for a bare servo is then (txbytes * 10 / 3000000) + 140us + (rxbytes * 10 / 3000000).
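For example, a 33 byte command (the full set of control parameters plus framing, as computed in the data encoding section below) with a similarly sized reply works out to roughly 110us + 140us + 110us ≈ 360us.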

IMU junction board

The current quadruped has a network topology that looks like:

imu-junction-block-diagram

The junction board is an STM32F4 processor that performs active bridging across the RS485 networks and also contains an IMU.  This topology was chosen so that the junction board could query both halves of the quadruped simultaneously, then send a single result back to the host computer.  However, that has not been implemented yet, so all the junction board does today is further increase the latency of a single command.  As implemented now, it adds about 90us of latency, plus the time required to transmit the command and reply packets a second time.  That makes the latency for a single command and reply: 2 * (txbytes * 10 / 3000000) + 140us + 90us + 2 * (rxbytes * 10 / 3000000)

Raspberry pi command transmission

As mentioned, the current system first sends new commands to the servos, then updates their state.  When sending the new commands, the existing implementation makes a separate system call to initiate each servo’s output packet.  Sometimes the linux kernel groups those together into a single outgoing frame on the wire, but more often than not the commands end up separated by 120us of idle bus time.  That adds 12 * 120us = 1440us of additional latency to an overall update frame.

Raspberry pi reply to query turnaround

During the phase when all 12 of the servos are being queried, after each query, the raspberry pi needs to receive the response then formulate and send another query.  This currently takes around 200us from when the reply finishes transmission until when the next query hits the wire.  This is some combination of hardware latency, kernel driver latency, and application latency.  It sums up to 200us * 12 = 2400us

Packet framing

The RS485 protocol used for moteus has some header and framing bytes that are an overhead on every single command or response.  This is currently:

  • Leadin Framing: 2 bytes
  • Source ID: 1 byte
  • Destination ID: 1 byte
  • Payload Size: 1 byte for small things
  • Checksum: 2 bytes

That works out to a 7 byte overhead, which in the current formulation applies 12x for the command phase and 48x for the query phase: 12x for the raspberry pi sending, 12x for the junction board re-sending, and 24x for the combined receive side.  That makes a total of (12 + 48) * 7 = 420 bytes * 10 / 3000000 = 1400us

Data encoding

In the current control mode of the servo, a number of different parameters are typically updated every control cycle:

  • Target angle
  • Target velocity
  • Maximum torque
  • Feedforward torque
  • Proportional control constant
  • Not to exceed angle (only used during open loop startup)

The servo protocol allows each of these values to be encoded on the wire as either a 4 byte floating point value, or as a fixed point signed integer of either 4, 2, or 1 bytes.  The current implementation sends all 6 of these values every time as 4 byte floats.  Additionally two bytes are required to denote which parameters are being sent.  That works out to: ((6 parameters * 4 byte float + 2) * 12 servos * 2 for junction board * 10) / 3000000 = 2080us

The receive side returns the following:

  • Current angle
  • Current velocity
  • Current torque
  • Voltage
  • Temperature
  • Fault code

And in the current implementation all of those are either sent as a 4 byte float, or a 4 byte integer.  That makes ((6 parameters * 4 bytes + 2) * 12 servos * 2 for junction board * 10) / 3000000 = 2080us

Overall result

I put together a spreadsheet that let me tweak each of the individual parameters and see how that affected the overall update rate of the system.
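Here is a sketch of the kind of arithmetic the spreadsheet performs, using the rough per-contributor figures from this post (illustrative, not the actual spreadsheet; it lands near the ~80Hz result below):

#include <cstdio>

int main() {
  const double kByteTime = 10.0 / 3000000.0;  // one 8N1 byte at 3Mbaud
  const int kServos = 12;

  // Command phase: 6 float parameters + 2 subframe bytes + 7 framing
  // bytes, transmitted twice (rpi -> junction, junction -> servos),
  // plus the inter-command gaps from the per-servo system calls.
  const double command =
      (6 * 4 + 2 + 7) * kByteTime * kServos * 2 + kServos * 120e-6;

  // Query phase, per servo: a short query and a full reply, each
  // crossing both busses, plus servo turnaround (140us), junction
  // board latency (90us), and rpi turnaround (200us).
  const double query_bytes = 7;            // approximately framing only
  const double reply_bytes = 6 * 4 + 2 + 7;
  const double query =
      kServos * (2 * query_bytes * kByteTime +
                 2 * reply_bytes * kByteTime +
                 140e-6 + 90e-6 + 200e-6);

  const double total = command + query;
  std::printf("cycle: %.0f us -> %.1f Hz\n", total * 1e6, 1.0 / total);
}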

I made a dedicated test program and used the oscilloscope to monitor a cycle and roughly verified these results:

overall-latency

Thus, with a full command and query cycle, an update rate of about 80Hz can be achieved with the current system.

Next up, working to make this much better.

 

Pocket NC Raspberry Pi Wifi Bridge

The primary UI the Pocket NC presents is a web interface accessible over a virtual USB based ethernet port.  I wanted to be able to run mine not immediately near an ethernet jack, but also didn’t want to have to tote a laptop over every time to check on it.  I had plenty of raspberry pis lying around, so I rigged one up as a wifi bridge.

First, I found a random case to print from thingiverse, the TurboPi:

turbopi

Then I gave it a fixed IP address on my wifi network and set up IP forwarding with NAT.

iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

Then, I saved that config:

iptables-save > /etc/iptables.ipv4.nat

And restored it in /etc/rc.local:

iptables-restore < /etc/iptables.ipv4.nat

Next, I enabled forwarding in /etc/sysctl.conf:

net.ipv4.ip_forward=1

Finally, I added a route on my desktop with the raspberry pi’s address as the gateway.  Now I can transparently access the PocketNC from anywhere!
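(The route is a one-liner along the lines of “ip route add <pocket nc subnet> via <pi wifi address>”, with the specifics depending on your network.)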

dsc_0154
Raspberry Pi forwarding network traffic to the Pocket NC

rpi3 interface board

Now that I have a chassis that can walk a little bit, I need to get the onboard computer working.  To do that, I needed to update the raspberry pi 3 daughterboard I built for the previous turret for the new bus voltage and communication format.

The rpi3’s UART is incapable of controlling the direction line on the RS485 transceiver, so I added a small STM32 micro in line to control the transmit / receive direction.  It adds a little bit of latency, but testing the firmware I was able to get it down to only a byte time or so (a sketch of the approach is below).
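The core of the idea looks something like this (a sketch assuming an STM32F4-family part and the ST HAL; the DE pin assignment is illustrative, not the actual board’s):

#include "stm32f4xx_hal.h"

extern UART_HandleTypeDef huart1;

void Rs485Send(uint8_t* data, uint16_t size) {
  // Assert driver-enable before transmitting.
  HAL_GPIO_WritePin(GPIOA, GPIO_PIN_8, GPIO_PIN_SET);
  HAL_UART_Transmit_IT(&huart1, data, size);
}

// The HAL invokes this after the final stop bit has shifted out,
// which is the earliest moment the transceiver can be returned to
// receive mode.
extern "C" void HAL_UART_TxCpltCallback(UART_HandleTypeDef* huart) {
  if (huart == &huart1) {
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_8, GPIO_PIN_RESET);
  }
}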

Here’s a quick render:

rpi3_hat_r2

And a shot of the actual board:

dsc_1703

Unfortunately, I managed to omit a necessary pullup on the enable line of the primary buck converter.  To get things moving along, I blue-wired it under the microscope:

dsc_1710
dsc_1707

And here’s a shot of it installed on the rpi3 b+:

dsc_1716

Sources:

bazel for gstreamer – plan

After OpenCV, the other major mjmech dependency necessary to complete the raspberry pi 3b+ bazel build setup was gstreamer. Unlike the previous dependencies, this one is a doozy — gstreamer has an enormous transitive dependency set. Additionally, we needed to use its ffmpeg wrappers, which bring in yet more dependencies.

In this post, I’ll just try to map out the dependencies that we ended up actually needing, so that they can be tackled one by one.

gstreamer core

The core gstreamer base actually has a minimal dependency set: only really glib. glib in turn has a small dependency set, just libffi, pcre, and zlib.

gst-plugins-base

Most of the functionality within the gstreamer ecosystem is contained within “plugins”, i.e. modules of software which implement a constrained interface and can be installed into media pipelines. All plugins use the common routines defined within gst-plugins-base, and a few low level plugins are defined here. This begins the dependency explosion, as some of these base plugins call into non-trivial pieces of the free desktop graphical stack. Direct dependencies here include libx11 and libxv from x.org, and pango.

In turn those packages have a large fanout. libx11 and libxv depend upon videoproto, libxext, xorgproto, libxcb, libxau, libxdmcp, xtrans, and xcb-proto. pango depends upon freetype, harfbuzz, fribidi, cairo, bzip2, pixman, fontconfig, util-linux, and expat.

gst-plugins-good and gst-plugins-bad

mjmech doesn’t rely on any of the plugins in gst-plugins-good, so nothing was necessary there! We do use some from gst-plugins-bad, but those have no additional dependencies.

gst-plugins-ugly

Here, mjmech uses the x264 plugin, which in turn depends on the x264 external library.

gst-libav

These are the plugins which wrap ffmpeg (née libav, née ffmpeg). ffmpeg in turn additionally depends on nasm to assemble for x86.

Next steps

Now that we’ve enumerated all the dependent packages, we can start to work on packaging them. First up are those necessary for gstreamer core, so libffi, pcre, and zlib.

bazel for opencv

The next level of difficulty in bazel-ifying packages for mjmech was opencv.

First, for the impatient, Apache 2.0 licensed sources are available on github: https://github.com/mjbots/bazel_deps/tree/master/tools/workspace/opencv

OpenCV’s native build system consists of nearly 200 cmake files with over 20,000 total lines of code, plus assorted helper scripts and prototype files into which values are substituted.  Fortunately, I didn’t need to support the full complexity of the opencv build system.  Things I didn’t bother to touch:

  • Autodetecting the location of any packages:  Since this is embedded within a bazel project, all the dependencies I was going to use were already bazel-ified.
  • GPU support:  I didn’t bother with CUDA, OpenCL, or anything else GPU, as the GPU on the raspberry pi3 has only a minimally supported OpenCL stack, at best.
  • Examples: I had no real need to build any of the sample or demonstration applications.
  • Modules: I only needed, for now, a small fraction of the total opencv module set.

Intended result

OpenCV is broken up into numerous “modules”, each of which is largely independent and implements a relatively self-contained piece of functionality.  Each module can depend upon other opencv modules and other external packages.  I wanted to reach a point with the bazel configuration where each module could be described at a relatively high level, which didn’t seem too implausible since the module structure is relatively constant across modules.

Think something like:

cvmodule(
  name = "core",
)

cvmodule(
  name = "imgproc",
  deps = [":core"],
)

# ... etc

Bazel implementation

To achieve this in bazel requires three phases: first, a repository rule to actually download the opencv tarball; second, something written in Starlark which could implement a cvmodule-like rule; and finally a BUILD file to call it and enumerate the opencv modules I cared about.

The repository rule was relatively straightforward.  It just uses the standard bazel mechanisms to download a tarball and template expand a known BUILD file into place.

The Starlark file is where all the real work lies:

First, we just define a common set of preprocessor defines which will be applied to all opencv translation units:

_OPENCV_COPTS = [
    "-D_USE_MATH_DEFINES",
    "-D__OPENCV_BUILD=1",
    "-D__STDC_CONSTANT_MACROS",
    "-D__STDC_FORMAT_MACROS",
    "-D__STDC_LIMIT_MACROS",
    "-I$(GENDIR)/external/opencv/private/",
]

The only real interesting piece there is the $(GENDIR) fragment, which causes bazel to substitute in the path to the generated output directory. Since a number of the headers we will use later are generated as part of the bazel build process, we need to ensure that the compiler can actually find them.  The alternate formulation, which doesn’t require knowledge of GENDIR, is to list the directory in the includes attribute.  That, however, exposes the headers to all downstream modules, which we don’t want.

Next, we enumerate the set of processor specific options that we will support:

_KNOWN_OPTS = [
    ("neon", "armeabihf"),
    ("vfpv3", "armeabihf"),
    ("avx", "x86_64"),
    ("avx2", "x86_64"),
    ("avx512_skx", "x86_64"),
    # ...

OpenCV has support for a comprehensive set of special instruction set extensions for a variety of different processors. They can either be compiled in and always used, or dispatched at runtime, although these bazel rules ignore runtime dispatching entirely. Some of the generated headers need to know the full possible set of options for a given architecture, thus this list here.

Module definition macros

Now we can start getting into the module definition Starlark macros.  To handle some generated headers that are common to every opencv module, I ended up creating an opencv_base macro that is module independent.  It needs to be called from the BUILD file, to generate a number of the private headers, but exposes no logically public labels.  It does require a passed in “configuration” dictionary (which will also be passed in to the primary module generation macro).

CONFIG = {
    "modules" : [
        "aruco",
        "calib3d",
        "core",
        "imgcodecs",
        "imgproc",
    ],
# ...

The big annoyance here is that you have to enumerate the set of modules twice: once in that config, and again in the set of invocations of the module creation macro.  The reason is the generated opencv_modules.hpp file, which provides preprocessor defines describing which modules are included in the build.

Next is the implementation of the module generation macro, which I ended up calling opencv_module.  It needs just a few arguments for the modules I ended up converting:

  • name – should be self-evident
  • config – the same configuration dictionary from above
  • excludes – source files to skip compiling from this module
  • dispatched_files – the list of files in this module which contain processor specific specializations
  • deps – a set of bazel labels describing the dependency set

This macro goes about generating the appropriate .simd_declarations.hpp and .simd.hpp files, a non-functional stub OpenCL implementation, and then goes on to define the actual bazel cc_library.

Resulting BUILD file

At this point, the BUILD file can glue all these things together. It ended up being not too far away from my initial ideal:

opencv_base(config=CONFIG)

opencv_module(
    name = "core",
    config = CONFIG,
    dispatched_files = {
        "stat" : ["sse4_2", "avx"],
        "mathfuncs_core" : ["sse2", "avx", "avx2"],
    },
    deps = ["@eigen"],
)

opencv_module(
    name = "imgproc",
    config = CONFIG,
    dispatched_files = {
        "accum" : ["sse2", "avx", "neon"],
    },
    deps = [":core"],
)

# ...

Download

As mentioned at the start, all the sources are available on github: https://github.com/mjbots/bazel_deps/tree/master/tools/workspace/opencv

bazel-ifying simple autoconf packages

This is part N in a series describing how I created the bazel infrastructure to build all the third party packages for mjmech.  Previously we have “Building mjmech dependencies with bazel” and “First bazel-ified packages”.

We left off with the first, very simple packages configured to build with bazel.  In this installment we will tackle those that require at least minimal configuration, i.e. those that have some files which are normally generated as part of the build process.

snappy / template_file

snappy (https://github.com/google/snappy), a compression library, is largely a pure C++ project, but it does have a single generated header file.  There are a few possible options within bazel to handle this.  The simplest would be to use a macro, although that has complications with respect to namespacing and label resolution.  Here, I created a custom rule, called “template_file”.

As seen in the source on github the interface is relatively simple:

template_file(
  src,
  is_executable,
  substitutions,
  substitution_list,
)

There are two possible forms for passing in substitutions: either a string_dict when using “substitutions”, or a list of KEY=VALUE strings in “substitution_list”.  The two forms are present because Starlark can make it awkward to deal with dictionaries, so working with a list is often much more convenient.

The snappy build rule uses this to generate snappy-stubs-public.h:

template_file(
    name = "snappy-stubs-public.h",
    src = "snappy-stubs-public.h.in",
    substitutions = {
        "${HAVE_STDINT_H_01}": "1",
        "${HAVE_STDDEF_H_01}": "1",
        "${HAVE_SYS_UIO_H_01}": "1",
        "${SNAPPY_MAJOR}": "1",
        "${SNAPPY_MINOR}": "1",
        "${SNAPPY_PATCHLEVEL}": "7",
    },
)

log4cpp / autoconf

Next up in the difficulty level is log4cpp, which uses autoconf as its native build system.  autoconf typically has a config.h.in file, defined in a semi-standardized format, mostly composed of lines like:

/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H

The pre-processing stage turns each of those lines into either an actual #define, or just a comment saying that it remains undefined.

That is substantive enough that the simple built-in bazel template rule won’t cut it.  For this, we split the work into two pieces: 1) a python helper script to do the actual formatting, and 2) a Starlark bazel rule which defaults in some common autoconf defines.  For the bazel_deps repository, the bazel autoconf rule has basically every autoconf define that is shared by more than one project, to reduce duplication and make it more likely that all the dependencies are configured consistently.

With the two of them together, the resulting BUILD file defines a cc_library as normal, but depends upon the new autoconf rule, as in the log4cpp instance:

autoconf_config(
    name = "include/log4cpp/config.h",
    src = "include/config.h.in",
    package = "log4cpp",
    version = "1.1.3",
    defines = autoconf_standard_defines + [
        "DISABLE_SMTP",
        "DISABLE_REMOTE_SYSLOG",
    ],
    prefix = "LOG4CPP_",
)

First bazel-ified packages

In “Building mjmech dependencies with bazel”, I described my rationale, as it were, for attempting to build all of the mjmech dependencies within bazel for cross compilation onto the raspberry pi.  mjmech has two big dependencies which were going to cause most of the transitive fallout:

  • gstreamer – We use gstreamer to interface with the webcam, format RTSP streams for FPV on the control station, and to render the control station and heads up display.  Granted, not all of gstreamer is used, but we do depend on features that require ffmpeg and X11.
  • opencv – The use of opencv had been minimal to non-existent previously, as we hadn’t actually done any computer vision on the robot itself.  However, one of the big motivations for switching to the raspberry pi in the first place was to at least be able to do active target tracking onboard.

And then there are a few other direct dependencies that are “easy”, if nothing else because they have such few transitive dependencies.

  • boost – The use of boost is almost exclusively the header only parts; beyond those, we use boost::date_time, boost::filesystem, boost::program_options, boost::test, and boost::python.  Of these, only boost::python has any transitive dependencies.
  • fmt – This text formatting library has no further dependencies whatsoever.
  • log4cpp – This is just used for writing textual debug output and has no transitive dependencies.
  • snappy – We use snappy to compress logged data, but it depends on nothing.

Simple packages

I started with the simple, no dependency packages from the second set.  The strategy here is to, for each package, create a tools/workspace/FOO/repository.bzl with a FOO_repository method to download the upstream tarball, and a corresponding tools/workspace/FOO/package.BUILD which contains the bazel BUILD file describing how to build that package.

The most straightforward package was “fmt”, from https://github.com/fmtlib/fmt.  Its repository.bzl looks like:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def fmt_repository(name):
    http_archive(
        name = name,
        urls = [
            "https://github.com/fmtlib/fmt/archive/5.0.0.tar.gz",
        ],
        sha256 = "fc33d64d5aa2739ad2ca1b128628a7fc1b7dca1ad077314f09affc57d59cf88a",
        strip_prefix = "fmt-5.0.0",
        build_file = Label("//tools/workspace/fmt:package.BUILD"),
    )

It basically just contains the URL of the tarball to download, its hash, the prefix to strip, and where to locate the BUILD file.

Next, the package.BUILD file is:

package(default_visibility = ["//visibility:public"])

cc_library(
    name = "fmt",
    hdrs = glob(["include/**"]),
    srcs = [
        "src/format.cc",
    ],
    includes = ["include"],
)

Since the fmt library is mostly header only, this has a single cc_library definition which compiles the one file and sets up the paths to correctly find the header files.

The other easy package, boost, was handled in a similar manner.

Next level

Moving up the difficulty ladder is snappy and log4cpp, both of which require at least minimally more complicated build rules.  That will be the topic for the next time.

 

Building mjmech dependencies with bazel

Previously, I set up bazel to cross compile for the raspberry pi using an extracted sysroot.  That sysroot was very minimal, basically just glibc and the kernel headers.  The software used for SMMB has many dependencies beyond that though, including some heavyweight ones such as gstreamer, and I needed some solution for building against them.

Options

There were two basic options:

  1. Install all the dependencies I cared about on an actual raspberry pi, and extract them into the sysroot.
  2. Build all the dependencies I cared about using bazel’s external projects mechanism.

The former would certainly be quicker in the short term, at the expense of needing to check in or otherwise version a very large sysroot.  It would also be annoying to update, as I would need to keep around a physical raspberry pi and continually reset it to zero in order to generate suitably pristine sysroots.

The second option had the benefit of requiring a small version control footprint — just the URLs and hashes of each project, along with suitable bazel build configuration.  It is also perfectly compatible with a fully hermetic build result.  However, it had the significant downside that I would need to write the bazel build configuration for all the transitive dependencies of the project.

I decided to take a stab at the second route, partly because of the benefits, but also to see just how hard it would be.

Structure

Since bazel does not yet have recursive WORKSPACE parsing, I went with a structure that is used in the Drake open source project.  The top level project has a tools/workspace directory that contains one sub-directory for each dependency or transitive dependency.  Within tools/workspace is a default.bzl that contains one exported function, add_default_repositories.  It is intended to be called from the top level WORKSPACE file, and it creates bazel rules for all necessary external dependencies (a sketch of the layout is below).
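Concretely, the layout looks like this, using fmt as a representative dependency (names as used in the bazel_deps repository described below):

WORKSPACE                           # calls add_default_repositories()
tools/workspace/default.bzl         # exports add_default_repositories()
tools/workspace/fmt/repository.bzl  # defines fmt_repository(name)
tools/workspace/fmt/package.BUILD   # BUILD file for the downloaded tarball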

The drake project doesn’t support cross compilation, so most of their BUILD rules are of the form, “grab the pre-compiled flags from pkg-config”.  However, the same structure will work just fine even with non-trivial compilation steps.

mjbots/bazel_deps

So that this could be easily used across multiple dependent projects, I put all the resultant rules into a new github project: https://github.com/mjbots/bazel_deps.  Like the drake repository, it has a default.bzl with a single export, add_default_repositories, which is used by dependent projects.

Next up, working through actually packaging the dependencies!

Building mjmech software on the rpi 3b+ with bazel

The first piece I tackled when switching to the Raspberry Pi 3B+ for Super Mega Microbot was building our existing control software.  The software we used for the 2016 Robogames is largely C++ and was built with SCons (https://github.com/mjbots/mjmech/).  For all our previous platforms, for both Savage Solder and SMMB, we had just built the software on the device itself, which while a little slow, was certainly convenient and required very little sophistication from the build system.  Raspbian is debian based, so this shouldn’t be hard, right?

“apt” was my friend and at least got the compiler and all the build requirements installed.  Then I went to build and discovered that some of our C++ translation units required more than 1G of RAM individually to compile.  Ouch.  That rendered building on the target with the current software infeasible.  Either I could rewrite the software to be less compilation time RAM intensive, or switch to a cross compilation model.


Building on the platform was always somewhat slow, so I decided to take the leap and get a proper cross compilation environment set up.  I’ve done that in SCons before, but it was never pretty, as SCons doesn’t have the best mechanisms for expressing things that need to be built on the host vs the target, and describing any given cross compilation toolchain is always challenging.  Plus, SCons itself is slow for any moderately sized project.  Instead, I opted to give bazel (https://bazel.build) a try.  I’ve used it professionally for a while now, and while it is a mixed bag, I am an eternal optimist when it comes to its feature set.  Right now, versus SCons, it can support:

  1. Specifying multiple C++ toolchains
  2. Switching between a single host toolchain and a single target toolchain
  3. Gracefully pulling in external projects controlled by hash
  4. It runs locally very fast, can parallelize across large numbers of cores, and finally has a functioning content addressable cache

The two big features which have seemingly been very close for a long time now, but aren’t quite there yet, are:

  1. Remote execution
  2. Arbitrary C++ toolchain configuration within a single build (i.e. build binaries for multiple target platforms as part of a single build)

Next up, I’ll describe how I configured bazel with the cross toolchain necessary to build binaries on the raspberry pi 3 (b+).