
First bazel-ified packages

In “Building mjmech dependencies with bazel”, I described my rationale for attempting to build all of the mjmech dependencies within bazel for cross compilation onto the Raspberry Pi.  mjmech has two big dependencies which were going to cause most of the transitive fallout:

  • gstreamer – We use gstreamer to interface with the webcam, to format RTSP streams for FPV on the control station, and to render the control station and heads-up display.  Granted, not all of gstreamer is used, but we do depend on features that require ffmpeg and X11.
  • opencv – The use of opencv had been minimal to non-existent previously, as we hadn’t actually done any computer vision on the robot itself.  However, one of the big motivations for switching to the Raspberry Pi in the first place was to at least be able to do active target tracking onboard.

And then there are a few other direct dependencies that are “easy”, if nothing else because they have so few transitive dependencies.

  • boost – The use of boost is almost exclusively the header-only parts, along with boost::date_time, boost::filesystem, boost::program_options, boost::test, and boost::python.  Of these, only boost::python has any transitive dependencies.
  • fmt – This text formatting library has no further dependencies whatsoever.
  • log4cpp – This is just used for writing textual debug output and has no transitive dependencies.
  • snappy – We use snappy to compress logged data, but it depends on nothing.

Simple packages

I started with the simple, no-dependency packages from the second set.  The strategy here is, for each package, to create a tools/workspace/FOO/repository.bzl with a FOO_repository function that downloads the upstream tarball, and a corresponding tools/workspace/FOO/package.BUILD containing the bazel BUILD file that describes how to build that package.

The most straightforward package was “fmt”, from https://github.com/fmtlib/fmt.  Its repository.bzl looks like:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def fmt_repository(name):
    http_archive(
        name = name,
        urls = [
            "https://github.com/fmtlib/fmt/archive/5.0.0.tar.gz",
        ],
        sha256 = "fc33d64d5aa2739ad2ca1b128628a7fc1b7dca1ad077314f09affc57d59cf88a",
        strip_prefix = "fmt-5.0.0",
        build_file = Label("//tools/workspace/fmt:package.BUILD"),
    )

It basically just contains the URL of the tarball to download, its hash, the prefix to strip, and where to locate the BUILD file.

Next, the package.BUILD file is:

package(default_visibility = ["//visibility:public"])

cc_library(
    name = "fmt",
    hdrs = glob(["include/**"]),
    srcs = [
        "src/format.cc",
    ],
    includes = ["include"],
)

Since the fmt library is mostly header-only, this has a single cc_library definition which compiles the one source file and sets up the include path so that the header files can be found.
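
With that in place, other targets in the main project can depend on it like any other bazel library.  A minimal sketch of such a use (the binary name and source file here are made up for illustration):

cc_binary(
    name = "example",
    srcs = ["example.cc"],
    deps = ["@fmt"],  # shorthand for @fmt//:fmt
)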

The next easiest package, boost, was handled in a similar manner.
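
For reference, a minimal sketch of what that looks like for the header-only portion of boost; the version, URL, and sha256 below are placeholders rather than the actual values used:

# tools/workspace/boost/repository.bzl (sketch)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def boost_repository(name):
    http_archive(
        name = name,
        urls = [
            "https://example.com/boost_1_66_0.tar.bz2",  # placeholder URL
        ],
        sha256 = "<sha256 of the tarball>",  # placeholder
        strip_prefix = "boost_1_66_0",
        build_file = Label("//tools/workspace/boost:package.BUILD"),
    )

# tools/workspace/boost/package.BUILD (sketch; header-only parts only)
package(default_visibility = ["//visibility:public"])

cc_library(
    name = "boost",
    hdrs = glob(["boost/**"]),
    includes = ["."],
)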

Next level

Moving up the difficulty ladder are snappy and log4cpp, both of which require at least slightly more complicated build rules.  That will be the topic for next time.

 

Building mjmech dependencies with bazel

Previously, I set up bazel to be able to cross compile for the Raspberry Pi using an extracted sysroot.  That sysroot was very minimal, basically just glibc and the kernel headers.  The software used for SMMB has many dependencies beyond that though, including some heavyweight ones such as gstreamer, and I needed some solution for building against them.

Options

There were two basic options:

  1. Install all the dependencies I cared about on an actual raspberry pi, and extract them into the sysroot.
  2. Build all the dependencies I cared about using bazel’s external projects mechanism.

The former would certainly be quicker in the short term, at the expense of needing to check in or otherwise version a very large sysroot.  It would also be annoying to update, as I would need to keep around a physical Raspberry Pi and continually reset it to a clean state in order to generate suitably pristine sysroots.

The second option had the benefit of requiring a small version control footprint — just the URLs and hashes of each project, along with suitable bazel build configuration.  It is also perfectly compatible with a fully hermetic build result.  However, it had the significant downside that I would need to write the bazel build configuration for all the transitive dependencies of the project.

I decided to take a stab at the second route, partly because of the benefits, but also to see just how hard it would be.

Structure

Since bazel does not yet have recursive WORKSPACE parsing, I went with a structure that is used in the Drake open source project.  The top level project has a tools/workspace directory that contains one sub-directory for each dependency or transitive dependency.  Also within tools/workspace is a default.bzl that contains one exported function, add_default_repositories.  It is intended to be called from the top level WORKSPACE file, and it creates bazel rules for all necessary external dependencies.
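
In rough terms, the pieces fit together like the following sketch (the repository names here are illustrative rather than the actual file contents):

# tools/workspace/default.bzl (sketch)
load("//tools/workspace/fmt:repository.bzl", "fmt_repository")
load("//tools/workspace/snappy:repository.bzl", "snappy_repository")

def add_default_repositories():
    # One call per direct or transitive dependency.
    fmt_repository(name = "fmt")
    snappy_repository(name = "snappy")

# WORKSPACE (sketch)
load("//tools/workspace:default.bzl", "add_default_repositories")
add_default_repositories()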

The drake project doesn’t support cross compilation, so most of their BUILD rules are of the form “grab the flags for a pre-compiled system library from pkg-config”.  However, the same structure will work just fine even with non-trivial compilation steps.

mjbots/bazel_deps

So that this could be easily used across multiple dependent projects, I put all the resultant rules into a new github project: https://github.com/mjbots/bazel_deps.  Like the drake repository, it has a default.bzl with a single export, add_default_repositories, that is used by dependent projects.
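
In a dependent project, the WORKSPACE then pulls in bazel_deps itself and calls that function, along these lines (the commit and sha256 are placeholders, and the load path assumes the same tools/workspace layout):

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "com_github_mjbots_bazel_deps",
    urls = [
        "https://github.com/mjbots/bazel_deps/archive/<commit>.tar.gz",
    ],
    sha256 = "<sha256 of the tarball>",
    strip_prefix = "bazel_deps-<commit>",
)

load("@com_github_mjbots_bazel_deps//tools/workspace:default.bzl", "add_default_repositories")
add_default_repositories()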

Next up, working through actually packaging the dependencies!

Building mjmech software on the rpi 3b+ with bazel

The first piece I tackled when switching to the Raspberry Pi 3B+ for Super Mega Microbot was building our existing control software.  The software we used for the 2016 Robogames is largely C++ and was built with SCons (https://github.com/mjbots/mjmech/).  For all our previous platforms, for both Savage Solder and SMMB, we had just built the software on the device itself, which, while a little slow, was certainly convenient and required very little sophistication from the build system.  Raspbian is debian based, so this shouldn’t be hard, right?

“apt” was my friend and at least got the compiler and all the build requirements installed.  Then I went to build and discovered that some of our C++ translation units individually required more than 1G of RAM to compile.  Ouch.  That rendered building on the target with the current software infeasible.  Either I could rewrite the software to be less RAM intensive to compile, or switch to a cross compilation model.


Building on the platform was always somewhat slow, so I decided to take the leap and set up a proper cross compilation environment.  I’ve done that in SCons before, but it was never pretty, as SCons doesn’t have the best mechanisms for expressing things that need to be built on the host vs the target, and describing any given cross compilation toolchain is always challenging.  Plus, SCons itself is slow for any moderately sized project.  Instead, I opted to give bazel (https://bazel.build) a try.  I’ve used it professionally for a while now, and while it is a mixed bag, I am an eternal optimist when it comes to its feature set.  Right now, versus SCons, it can support:

  1. Specifying multiple C++ toolchains
  2. Switching between a single host toolchain and a single target toolchain (a rough sketch of this follows the list)
  3. Gracefully pulling in external projects controlled by hash
  4. Fast local execution, parallelization across large numbers of cores, and (finally) a functioning content-addressable cache
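
For the second item, that switch ends up being a command line (or .bazelrc) concern.  A sketch of what that can look like, where the toolchain label, config name, and build target are made up for illustration:

# .bazelrc (sketch)
build:pi --crosstool_top=//tools/cc_toolchain:toolchain
build:pi --cpu=armeabihf

# Cross compile for the target with:
#   bazel build --config=pi //mech:some_target
# while a plain "bazel build" continues to use the host toolchain.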

The two big features which have seemingly been very close for a long time now, but aren’t quite there yet, are:

  1. Remote execution
  2. Arbitrary C++ toolchain configuration within a single build (i.e. building binaries for multiple target platforms as part of a single build)

Next up, I’ll describe how I configured bazel with the cross toolchain necessary to build binaries for the Raspberry Pi 3 B+.

 

Raspberry Pi 3 B+ for SMMB

Super Mega Microbot, that beloved and neglected creation, is due for a facelift. The biggest challenge we had at the last competition was the instability of the USB bus on the odroid-U2 when we had both a USB camera and USB 5GHz wifi adapter attached. Cue 2.5 years of waiting, and one aborted attempt, and it looks like the problem is solved!

The aborted attempt

The challenge is that almost no single board computers in the odroid-ish form factor have both:

  1. a non-USB camera option that works
  2. integrated 5GHz wifi, or any kind of high speed interface that would allow for a non-USB based 5GHz wifi

There are many contenders which have one or the other, or which have a nominal camera interface that the board support package released to amateurs doesn’t actually support.  Not only that, almost no boards have any high speed interfaces except USB, which means there aren’t even options for doing anything better.

For a moment in 2017, though, I thought I had the problem solved with the introduction of the Intel Joule.  On paper it ticked all the boxes: dual camera ports with software that worked, integrated 5GHz wifi, a supported GPU, and enough of a community that support would not be an issue.  The only downside was that, as a system on module, it required a fair amount of a carrier board to be able to actually use it in an end application.  That said, I did try, and built up a carrier board to be able to mount it in the turret of SMMB.

However, I wasn’t actually able to get the Joule to boot on this carrier board, despite it matching the reference board schematic in every way I could check.  To double down on the failure, Intel discontinued the Joule shortly after I had the prototype carrier board in hand, which, unsurprisingly, reduced my incentive to try and get it working.

More promising


Lo and behold, with sufficient time comes the announcement of the Raspberry Pi 3 Model B+.  On paper it solves nearly every problem as well as the Joule did, while needing much less support from a carrier board to be functional.

Pluses:

  1. Onboard 5GHz wifi
  2. Camera port, with off the shelf camera modules and functional software
  3. Onboard ethernet (although through USB, sigh)
  4. Onboard serial which can run at high data rates (>= 1Mbps)
  5. Stock debian based linux
  6. A production guarantee until 2023!

Downsides:

  1. Not quite as fast as the Joule or Odroid
  2. The GPU doesn’t support any form of GPGPU very easily
  3. Only 1G of RAM

I ordered some and got to work, with results that are definitely more promising, although not without their share of stumbles and pitfalls; it will take more than one post to describe.  So… more for next time.