Tag Archives: rpi

rpi3 interface board

Now that I have a chassis that can walk a little bit, I need to get the onboard computer working.  To do that, I needed to update the raspberry pi 3 daughterboard I built for the previous turret to support the new bus voltage and communication format.

The rpi3’s UART is incapable of controlling the direction line on the RS485 transceiver, so I added a small STM32 micro in line to control the transmit / receive direction.  It adds a little bit of latency, but while testing the firmware I was able to get that down to only a byte time or so.

Here’s a quick render:

rpi3_hat_r2

And a shot of the actual board:

dsc_1703

Unfortunately, I managed to omit a necessary pullup on the enable line of the primary buck converter.  To get things moving along, I blue-wired it under the microscope:

dsc_1710 dsc_1707

And here’s a shot of it installed on the rpi3 b+:

dsc_1716


bazel for gstreamer – plan

After OpenCV, the other major dependency of the mjmech software, and the one necessary to complete the raspberry pi 3b+ bazel build setup, was gstreamer. Unlike the previous dependencies, this one is a doozy: gstreamer has an enormous transitive dependency set. Additionally, we needed to use its ffmpeg wrappers, which bring in even more dependencies.

In this post, I’ll just try to map out the dependencies that we ended up actually needing, so that they can be tackled one by one.

gstreamer core

The core gstreamer base actually has a minimal dependency set: really only glib. glib in turn has a small dependency set of its own, just libffi, pcre, and zlib.

gst-plugins-base

Most of the functionality within the gstreamer ecosystem is contained within “plugins”, i.e. modules of software which implement a constrained interface and can be installed into media pipelines. All plugins use the common routines defined within gst-plugins-base, and a few low level plugins are defined here. This begins the dependency explosion, as some of these base plugins start to call into non-trivial pieces of the free desktop graphical stack. Direct dependencies here include libx11 and libxv from x.org, and pango.

In turn those packages have a large fanout. libx11 and libxv depend upon videoproto, libxext, xorgproto, libxcb, libxau, libxdmcp, xtrans, and xcb-proto. pango depends upon freetype, harfbuzz, fribidi, cairo, bzip2, pixman, fontconfig, util-linux, and expat.

gst-plugins-good and gst-plugins-bad

mjmech doesn’t rely on any of the plugins in gst-plugins-good, so nothing was necessary there! We do use some from gst-plugins-bad, but those have no additional dependencies.

gst-plugins-ugly

Here, mjmech uses the x264 plugin, which in turn depends on the x264 external library.

gst-libav

These are the plugins which wrap ffmpeg (née libav, née ffmpeg). ffmpeg in turn additionally depends upon nasm to assemble for x86.

Next steps

Now that we’ve enumerated all the dependent packages, we can start to work on packaging them. First up are those necessary for gstreamer core: libffi, pcre, and zlib.

bazel for opencv

The next level of difficulty in bazel-ifying packages for mjmech was opencv.

First, for the impatient, Apache 2.0 licensed sources are available on github: https://github.com/mjbots/bazel_deps/tree/master/tools/workspace/opencv

OpenCV’s native build system consists of nearly 200 cmake files with over 20,000 total lines of code, plus assorted helper scripts and prototype files which have values substituted into them.  Fortunately, I didn’t need to support the full complexity of the OpenCV build system.  Things I didn’t bother to touch:

  • Autodetecting the location of any packages:  Since this is embedded within a bazel project, all the dependencies I was going to use were already bazel-ified.
  • GPU support:  I didn’t bother with CUDA, OpenCL, or anything else GPU-related, as the GPU on the raspberry pi 3 has, at best, a minimally supported OpenCL stack.
  • Examples: I had no real need to build any of the sample or demonstration applications.
  • Modules: I only needed, for now, a small fraction of the total opencv module set.

Intended result

OpenCV is broken up into numerous “modules”, each of which is largely independent and implements a relatively self-contained piece of functionality.  Each module can depend upon other opencv modules and other external packages.  I wanted to reach a point with the bazel configuration where each module could be described at a relatively high level, which seemed plausible since the module structure is fairly consistent across modules.

Think something like:

cvmodule(
  name = "core",
)

cvmodule(
  name = "imgproc",
  deps = [":core"],
)

# ... etc

Bazel implementation

Achieving this in bazel requires three pieces: first, a repository rule to actually download the opencv tarball; second, something written in Starlark which could implement a cvmodule-like rule; and finally a BUILD file to call it and enumerate the opencv modules I cared about.

The repository rule was relatively straightforward.  It just uses the standard bazel mechanisms to download a tarball and template expand a known BUILD file into place.
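In sketch form it is just an http_archive invocation, much like the ones used for the simpler packages; the version and hash below are placeholders, not a verified release:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def opencv_repository(name):
    # Sketch only: <version> and the sha256 are placeholders.
    http_archive(
        name = name,
        urls = [
            "https://github.com/opencv/opencv/archive/<version>.tar.gz",
        ],
        sha256 = "<sha256 of the tarball>",
        strip_prefix = "opencv-<version>",
        build_file = Label("//tools/workspace/opencv:package.BUILD"),
    )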

The Starlark file is where all the real work lies:

First, we just define a common set of preprocessor flags which will be applied to all opencv translation units:

_OPENCV_COPTS = [
    "-D_USE_MATH_DEFINES",
    "-D__OPENCV_BUILD=1",
    "-D__STDC_CONSTANT_MACROS",
    "-D__STDC_FORMAT_MACROS",
    "-D__STDC_LIMIT_MACROS",
    "-I$(GENDIR)/external/opencv/private/",
]

The only really interesting piece there is the $(GENDIR) fragment, which causes bazel to substitute in the path to the generated output directory. Since a number of the headers we will be using later on are generated as part of the bazel build process, we need to ensure that the compiler can actually find them.  The alternate formulation, which doesn’t require knowledge of GENDIR, is to list the directory in the includes attribute.  That, however, exposes the headers to all downstream modules, which we don’t want.
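Side by side, the two formulations look something like this (a schematic example rather than the actual opencv rules; the target and source names are made up):

# Formulation 1: a private include path via copts.  Only this
# library's own translation units can see the generated headers.
cc_library(
    name = "core_private_includes",
    srcs = ["src/example.cc"],
    copts = ["-I$(GENDIR)/external/opencv/private/"],
)

# Formulation 2: the includes attribute.  bazel propagates this path
# to every downstream dependent, exposing the generated headers.
cc_library(
    name = "core_public_includes",
    srcs = ["src/example.cc"],
    includes = ["private"],
)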

Next, we enumerate the set of processor specific options that we will support:

_KNOWN_OPTS = [
    ("neon", "armeabihf"),
    ("vfpv3", "armeabihf"),
    ("avx", "x86_64"),
    ("avx2", "x86_64"),
    ("avx512_skx", "x86_64"),
    # ...

OpenCV has support for a comprehensive set of special instruction set extensions for a variety of different processors. They can either be compiled in and always used, or dispatched at runtime, although these bazel rules ignore runtime dispatching entirely. Some of the generated headers need to know the full possible set of options for a given architecture, hence this list.
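Given that table, the options that apply to a particular build are just the entries matching the target architecture; a small Starlark helper along these lines can extract them (a sketch, not the exact bazel_deps code):

def _options_for_arch(arch):
    # Return just the option names from _KNOWN_OPTS that apply to the
    # given architecture, e.g. "x86_64" -> ["avx", "avx2", "avx512_skx"].
    return [opt for (opt, opt_arch) in _KNOWN_OPTS if opt_arch == arch]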

Module definition macros

Now we can start getting into the module definition Starlark macros.  To handle some generated headers that are common to every opencv module, I ended up creating an opencv_base macro that is module independent.  It needs to be called from the BUILD file to generate a number of the private headers, but exposes no logically public labels.  It does require a passed-in “configuration” dictionary (which will also be passed in to the primary module generation macro).

CONFIG = {
    "modules" : [
        "aruco",
        "calib3d",
        "core",
        "imgcodecs",
        "imgproc",
    ],
# ...

The big annoyance here is that you have to enumerate the set of modules twice: once in that config, and again in the set of invocations of the module creation macro.  The reason is the generated opencv_modules.hpp file, which provides preprocessor defines describing which modules are included in the build.
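Generating opencv_modules.hpp itself is then mostly string assembly over the configured module list; something like this sketch (not the exact bazel_deps code):

def _opencv_modules_hpp_contents(config):
    # One HAVE_OPENCV_* define per configured module.
    lines = ["// Generated by the bazel build; do not edit."]
    for module in config["modules"]:
        lines.append("#define HAVE_OPENCV_" + module.upper())
    return "\n".join(lines) + "\n"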

Next is the implementation of the module generation macro, which I ended up calling opencv_module.  It needs just a few arguments for the modules I ended up converting:

  • name – should be self-evident
  • config – the same configuration dictionary from above
  • excludes – source files to skip compiling from this module
  • dispatched_files – the list of files in this module which contain processor specific specializations
  • deps – a set of bazel labels describing the dependency set

This macro generates the appropriate .simd_declarations.hpp and .simd.hpp files and a non-functional stub OpenCL implementation, then goes on to define the actual bazel cc_library.
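In outline, the macro looks something like the sketch below, where _generate_dispatch_headers and _generate_opencl_stub are hypothetical stand-ins for the header and stub generation just described:

def opencv_module(name, config, excludes = [], dispatched_files = {}, deps = []):
    # Hypothetical helpers standing in for the generation of the
    # .simd_declarations.hpp / .simd.hpp files and the OpenCL stub.
    _generate_dispatch_headers(name, dispatched_files)
    _generate_opencl_stub(name)

    native.cc_library(
        name = name,
        srcs = native.glob(
            ["modules/{}/src/**/*.cpp".format(name)],
            exclude = excludes,
        ),
        hdrs = native.glob(["modules/{}/include/**/*.hpp".format(name)]),
        copts = _OPENCV_COPTS,
        deps = deps,
    )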

Resulting BUILD file

At this point, the BUILD file can glue all these things together. It ended up being not too far away from my initial ideal:

opencv_base(config=CONFIG)

opencv_module(
    name = "core",
    config = CONFIG,
    dispatched_files = {
        "stat" : ["sse4_2", "avx"],
        "mathfuncs_core" : ["sse2", "avx", "avx2"],
    },
    deps = ["@eigen"],
)

opencv_module(
    name = "imgproc",
    config = CONFIG,
    dispatched_files = {
        "accum" : ["sse2", "avx", "neon"],
    },
    deps = [":core"],
)

# ...

Download

As mentioned at the start, all the sources are available on github: https://github.com/mjbots/bazel_deps/tree/master/tools/workspace/opencv

bazel-ifying simple autoconf packages

This is part N in a series describing how I created the bazel infrastructure to build all the third party packages for mjmech.

We left off with the first, very simple packages configured to build with bazel.  In this installment we will tackle those that require at least minimal configuration, i.e. those that have some files which are normally generated as part of the build process.

snappy / template_file

snappy (https://github.com/google/snappy), a compression library, is largely a pure C++ project, but it does have a single generated header file.  There are a few possible options within bazel to handle this.  The simplest would be to use a macro, although that has complications with respect to namespacing and label resolution.  Here, I created a custom rule, called “template_file”.

As seen in the source on github, the interface is relatively simple:

template_file(
  src,
  is_executable,
  substitutions,
  substitution_list,
)

There are two possible forms for passing in substitutions: either a string_dict when using “substitutions”, or a list of KEY=VALUE strings in “substitution_list”.  The two forms exist because Starlark can make it awkward to deal with dictionaries, so working with a list is often much more convenient.
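The implementation is only a handful of lines of Starlark; here is a minimal sketch of the rule (the actual source on github handles a few more details):

def _template_file_impl(ctx):
    # Merge the two substitution forms into a single dictionary.
    substitutions = dict(ctx.attr.substitutions)
    for item in ctx.attr.substitution_list:
        key, _, value = item.partition("=")
        substitutions[key] = value

    ctx.actions.expand_template(
        template = ctx.file.src,
        output = ctx.outputs.out,
        substitutions = substitutions,
        is_executable = ctx.attr.is_executable,
    )

template_file = rule(
    implementation = _template_file_impl,
    attrs = {
        "src": attr.label(allow_single_file = True),
        "substitutions": attr.string_dict(),
        "substitution_list": attr.string_list(),
        "is_executable": attr.bool(default = False),
    },
    # The output file is named after the rule itself.
    outputs = {"out": "%{name}"},
)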

The snappy build rule uses this to generate snappy-stubs-public.h:

template_file(
    name = "snappy-stubs-public.h",
    src = "snappy-stubs-public.h.in",
    substitutions = {
        "${HAVE_STDINT_H_01}": "1",
        "${HAVE_STDDEF_H_01}": "1",
        "${HAVE_SYS_UIO_H_01}": "1",
        "${SNAPPY_MAJOR}": "1",
        "${SNAPPY_MINOR}": "1",
        "${SNAPPY_PATCHLEVEL}": "7",
    },
)

log4cpp / autoconf

Next up in difficulty is log4cpp, which uses autoconf as its native build system.  autoconf typically has a config.h.in file, defined in a semi-standardized format, mostly composed of lines like:

/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H

The pre-processing stage turns each of those lines into either an actual #define, or just a comment saying that it continues to be undefined.

That is substantive enough that the simple template rule won’t cut it though.  For this, we split the work into two pieces: 1) a python helper script to do the actual formatting, and 2) a Starlark bazel rule which fills in some common autoconf defines by default.  For the bazel_deps repository, the bazel autoconf rule has basically every autoconf define that is shared by more than one project in it, to reduce duplication and make it more likely that all the dependencies are configured consistently.
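As a sketch of the Starlark half, a genrule can shell out to the helper script; the script path and its flags here are hypothetical, and the real rule in bazel_deps differs in detail:

def autoconf_config(name, src, package, version, defines, prefix = ""):
    # Sketch only: autoconf_helper.py and its flags are illustrative.
    args = [
        "--package " + package,
        "--version " + version,
    ]
    if prefix:
        args.append("--prefix " + prefix)
    for d in defines:
        args.append("--define " + d)

    native.genrule(
        name = name + "_gen",
        srcs = [src],
        outs = [name],
        tools = ["//tools/workspace:autoconf_helper.py"],
        cmd = ("$(location //tools/workspace:autoconf_helper.py) " +
               "--template $< --output $@ " + " ".join(args)),
    )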

With the two of them together, the resulting BUILD file defines a cc_library as normal, but depends upon the new autoconf rule, as in the log4cpp instance:

autoconf_config(
    name = "include/log4cpp/config.h",
    src = "include/config.h.in",
    package = "log4cpp",
    version = "1.1.3",
    defines = autoconf_standard_defines + [
        "DISABLE_SMTP",
        "DISABLE_REMOTE_SYSLOG",
    ],
    prefix = "LOG4CPP_",
)

First bazel-ified packages

In “Building mjmech dependencies with bazel“, I described my rationale for attempting to build all of the mjmech dependencies within bazel for cross compilation onto the raspberry pi.  mjmech has two big dependencies which were going to cause most of the transitive fallout:

  • gstreamer – We use gstreamer to interface with the webcam, to format RTSP streams for FPV on the control station, and to render the control station and heads up display.  Granted, not all of gstreamer is used, but we do depend on features that require ffmpeg and X11.
  • opencv – The use of opencv had been minimal to non-existent previously, as we hadn’t actually done any computer vision on the robot itself.  However, one of the big motivations for switching to the raspberry pi in the first place was to be able to do at least active target tracking onboard.

And then there are a few other direct dependencies that are “easy”, if nothing else because they have so few transitive dependencies.

  • boost – The use of boost is almost exclusively the header-only parts, plus boost::date_time, boost::filesystem, boost::program_options, boost::test, and boost::python.  Of these, only boost::python has any transitive dependencies.
  • fmt – This text formatting library has no further dependencies whatsoever.
  • log4cpp – This is just used for writing textual debug output and has no transitive dependencies.
  • snappy – We use snappy to compress logged data, but it depends on nothing.

Simple packages

I started with the simple, no dependency packages from the second set.  The strategy here is to, for each package, create a tools/workspace/FOO/repository.bzl with a FOO_repository method to download the upstream tarball, and a corresponding tools/workspace/FOO/package.BUILD which contains the bazel BUILD file describing how to build that package.

The most straightforward package was “fmt”, from https://github.com/fmtlib/fmt.  Its repository.bzl looks like:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def fmt_repository(name):
    http_archive(
        name = name,
        urls = [
            "https://github.com/fmtlib/fmt/archive/5.0.0.tar.gz",
        ],
        sha256 = "fc33d64d5aa2739ad2ca1b128628a7fc1b7dca1ad077314f09affc57d59cf88a",
        strip_prefix = "fmt-5.0.0",
        build_file = Label("//tools/workspace/fmt:package.BUILD"),
    )

It basically just contains the URL of the tarball to download, its hash, the prefix to strip, and where to locate the BUILD file.

Next, the package.BUILD file is:

package(default_visibility = ["//visibility:public"])

cc_library(
    name = "fmt",
    hdrs = glob(["include/**"]),
    srcs = [
        "src/format.cc",
    ],
    includes = ["include"],
)

Since the fmt library is mostly header only, this has a single cc_library definition which compiles the one file and sets up the paths to correctly find the header files.

The other easiest package, boost, was handled in a similar manner.

Next level

Moving up the difficulty ladder are snappy and log4cpp, both of which require at least minimally more complicated build rules.  That will be the topic for next time.

 

Building mjmech dependencies with bazel

Previously, I set up bazel to be able to cross compile for the raspberry pi using an extracted sysroot.  That sysroot was very minimal, basically just glibc and the kernel headers.  The software used for SMMB has many dependencies beyond that though, including some heavyweight ones such as gstreamer, and I needed some solution for building against them.

Options

There were two basic options:

  1. Install all the dependencies I cared about on an actual raspberry pi, and extract them into the sysroot.
  2. Build all the dependencies I cared about using bazel’s external projects mechanism.

The former would certainly be quicker in the short term, at the expense of needing to check in or otherwise version a very large sysroot.  It would also be annoying to update, as I would need to keep around a physical raspberry pi and continually reset it to zero in order to generate suitably pristine sysroots.

The second option had the benefit of requiring a small version control footprint — just the URLs and hashes of each project, along with suitable bazel build configuration.  It is also perfectly compatible with a fully hermetic build result.  However, it had the significant downside that I would need to write the bazel build configuration for all the transitive dependencies of the project.

I decided to take a stab at the second route, partly because of the benefits, but also to see just how hard it would be.

Structure

Since bazel does not yet have recursive WORKSPACE parsing, I went with a structure that is used in the Drake open source project.  The top level project has a tools/workspace directory that contains one sub-directory for each dependency or transitive dependency.  Alongside those, tools/workspace/default.bzl contains one exported function, add_default_repositories.  It is intended to be called from the top level WORKSPACE file, and it creates bazel rules for all necessary external dependencies.
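Concretely, the shape is something like this sketch, using two representative packages:

# tools/workspace/default.bzl
load("//tools/workspace/fmt:repository.bzl", "fmt_repository")
load("//tools/workspace/snappy:repository.bzl", "snappy_repository")

def add_default_repositories():
    fmt_repository(name = "fmt")
    snappy_repository(name = "snappy")

# WORKSPACE
load("//tools/workspace:default.bzl", "add_default_repositories")
add_default_repositories()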

The drake project doesn’t support cross compilation, so most of their BUILD rules are of the form, “grab the pre-compiled flags from pkg-config”.  However, the same structure will work just fine even with non-trivial compilation steps.

mjbots/bazel_deps

So that this could be easily used across multiple dependent projects, I put all the resultant rules into a new github project: https://github.com/mjbots/bazel_deps.  Like the drake repository, it has a default.bzl with a single export, add_default_repositories, that is used by dependent projects.

Next up, working through actually packaging the dependencies!

Building mjmech software on the rpi 3b+ with bazel

The first piece I tackled when switching to the Raspberry Pi 3B+ for Super Mega Microbot was building our existing control software.  The software we used for the 2016 Robogames is largely C++ and was built with SCons (https://github.com/mjbots/mjmech/).  For all our previous platforms, for both Savage Solder and SMMB, we had just built the software on the device itself, which while a little slow, was certainly convenient and required very little sophistication from the build system.  Raspbian is debian based, so this shouldn’t be hard, right?

“apt” was my friend and at least got the compiler and all the build requirements installed.  Then I went to build and discovered that some of our C++ translation units required more than 1G of RAM individually to compile.  Ouch.  That rendered building on the target with the current software infeasible.  Either I could rewrite the software to be less compilation time RAM intensive, or switch to a cross compilation model.


Building on the platform was always somewhat slow, so I decided to try and take the leap and get a proper cross compilation environment set up.  I’ve done that in SCons before, but it was never pretty, as SCons doesn’t have the best mechanisms for expressing things that need to be built on the host vs the target, and describing any given cross compilation tool chain is always challenging.  Plus SCons itself is slow for any moderately sized project.  Instead, I opted to give bazel (https://bazel.build) a try.  I’ve used it professionally for a while now, and while it is a mixed bag, I am an eternal optimist when it comes to its feature set.  Right now, versus SCons, it can support:

  1. Specifying multiple C++ toolchains
  2. Switching between a single host toolchain and a single target toolchain
  3. Gracefully pulling in external projects controlled by hash
  4. It runs locally very fast, can parallelize across large numbers of cores, and finally has a functioning content addressable cache

The two big features which have seemingly been very close for a long time now, but aren’t quite there yet, are:

  1. Remote execution
  2. Arbitrary C++ toolchain configuration within a single build (i.e. build binaries for multiple target platforms as part of a single build)

Next up, I’ll describe how I configured bazel with the cross toolchain necessary to build binaries on the raspberry pi 3 (b+).

 

Raspberry Pi 3 B+ for SMMB

Super Mega Microbot, that beloved and neglected creation, is due for a facelift. The biggest challenge we had at the last competition was the instability of the USB bus on the odroid-U2 when we had both a USB camera and USB 5GHz wifi adapter attached. Cue 2.5 years of waiting, and one aborted attempt, and it looks like the problem is solved!

The aborted attempt

The challenge in this problem is that almost no single board computers in the odroid-ish form factor have both:

  1. a non-USB camera option that works
  2. integrated 5GHz wifi, or any kind of high speed interface that would allow for a non-USB based 5GHz wifi

There are many contenders which have one or the other, or which have a nominal camera interface that the board support package released to amateurs doesn’t actually support.  Not only that, almost no boards have any high speed interfaces except USB, which means there aren’t even options for doing anything better.

For a moment in 2017, though, I thought I had the problem solved with the introduction of the Intel Joule.  On paper it ticked all the boxes: dual camera ports with software that worked, integrated 5GHz wifi, a supported GPU, and enough of a community that support would not be an issue.  The only downside was that, as a system on module, it required a fair amount of a carrier board to be able to actually use it in an end application.  That said, I did try, and built up a carrier board to be able to mount it in the turret of SMMB.

However, I wasn’t actually able to get the Joule to boot on this carrier board, despite it matching the reference board schematic in every way I could check.  To double down on the failure, Intel discontinued the Joule shortly after I had the prototype carrier board in hand, which, unsurprisingly, reduced my incentive to try and get it working.

More promising


Lo and behold, with sufficient time comes the announcement of the Raspberry Pi 3 Model B+.  On paper it solves nearly every problem as well as the Joule did, while needing much less support from a carrier board to be functional.

Pluses:

  1. Onboard 5GHz wifi
  2. Camera port, with off the shelf camera modules and functional software
  3. Onboard ethernet (although through USB, sigh)
  4. Onboard serial which can run at high data rates (>= 1Mbps)
  5. Stock debian based linux
  6. A production guarantee until 2023!

Downsides:

  1. Not quite as fast as the Joule or Odroid
  2. The GPU doesn’t support any form of GPGPU very easily
  3. Only 1G of RAM

I ordered some and got to work, with results that are definitely more promising, although not without their share of stumbles and pitfalls; it will take more than one post to describe.  So… more for next time.