
Video and telemetry synchronization (diagnostics part 8)

This is part of a continuing series on updated diagnostic tools for the mjbots quad A1 robot.  Previous editions are in 1, 2, 3, 4, 5, 6, and 7.  Here I’ll be looking at one of the last pieces of the puzzle, synchronizing the video with the rest of the telemetry.

As mentioned previously, recording video of a robot running is an easy, cheap, and fast way to provide ground truth information on all of the sensors and actuators.  However, it is only truly useful if it can be accurately synchronized in time to the other telemetry streams for the robot.

Options

This was the part of the puzzle that I spent the longest thinking about before I got started, as there were several options that seemed like they could plausibly work:

Visual

The concept here would be to put an LED beacon on the robot that is visible from all angles.  It could strobe a synchronizing pattern, like the output of an LFSR, which could then be identified in the subsequent video frames.

Pros: This should be able to give frame-accurate synchronization, and it works even for my 1000 fps camera, which can't record audio.

Cons: It is hard to find a good place to mount a light which could be observed from all angles.  The top is the best bet, but I have plans to attach further things there, which would then render synchronization infeasible.

Audio

In this concept, I put a microphone on the robot and have it record audio of the environment during its run.  Then standard audio synchronization algorithms can be used to align the two streams.  I actually included a microphone on the most recent version of the pi3 hat to potentially use this approach.

Pros: This has no visibility requirements, and should be able to give synchronization accuracy well under a single frame of video.

Cons: Getting the microphone data off the pi3 hat was looking to be moderately annoying, as the STM32 it is connected to is already streaming IMU and RF data back to the robot over its single SPI bus.  When I brought up the board, I verified I could stream audio off at 1kHz, but that isn't enough to be useful.

IMU

This was the idea I had last, and it is what I am using now.  Here, I slap the side of the robot in a semi-random pattern during the video.  That produces an audio signature in the video, as well as a matching signature in the lateral accelerometer readings.

Pros: No additional hardware or software is required anywhere on the robot.

Cons: This has worse accuracy than pure audio, as the IMU is only sampled at 400Hz and its signal doesn't perfectly correspond to the audio captured in the video.

Implementation

I took a stab at the IMU version, since it looked to be the easiest and still gave decent performance.  I made a simple python tool which reads in the robot telemetry data and the audio stream of a video file, then lets the user select rough ranges of each to work from.

It then uses scipy.signal.correlate to find the time offset that best aligns the two streams, and produces a plot of the alignment.
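
For the curious, the core of the alignment boils down to a single cross-correlation between the rectified envelopes of the two signals.  Here is a minimal sketch of the idea in Python; the envelope handling and resampling rate are illustrative assumptions, not the actual tool:

```python
import numpy as np
import scipy.signal


def find_offset(audio, audio_rate, accel, accel_rate):
    """Return the lag (in seconds) of the lateral accelerometer
    signature within the audio track of the video."""
    common_rate = 400.0  # illustrative; matches the IMU sample rate

    def envelope(x, rate):
        # Rectify and resample so the two signals are comparable:
        # both spike each time the chassis is slapped.
        x = np.abs(x - np.mean(x))
        return scipy.signal.resample(x, int(len(x) * common_rate / rate))

    audio_env = envelope(audio, audio_rate)
    accel_env = envelope(accel, accel_rate)

    # Full cross-correlation; the peak gives the best alignment.
    corr = scipy.signal.correlate(audio_env, accel_env, mode='full')
    lag_samples = np.argmax(corr) - (len(accel_env) - 1)
    return lag_samples / common_rate
```

The peak of the correlation gives a single constant offset between the two timelines, which is roughly how the tool lines the video up with the telemetry.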

20200515-video_aligner

As you can see, the audio rings out for some time after the IMU stops its high frequency response, largely due to the mechanical damping of the robot.  However, it is enough for the correlation to work with and give frame-accurate results.

3D rendering in tplot (diagnostics part 7)

In previous posts of this series, I covered some diagnostics improvements I've made to help work on more advanced gaits for the mjbots quad A1 (1, 2, 3, 4, 5, 6).  This post will cover the last major new piece of diagnostics I added to tplot2: 3D rendering of telemetry data.

3D rendering

While it should be obvious, I'll give a little exposition.  Prior to this, tplot2 could show a "tree view" of all logged data in numeric form, and a "plot view" which let you plot any single floating point scalar vs time.  More recently, it also gained the ability to render video associated with a given point in time in the log.  However, as anyone who has ever tried to debug a 3D software application, much less a 3D robot, can attest, debugging with scalar numbers and time plots is only productive for a very limited range of problems.

I've been wanting to extend my plotting tools with 3D rendering for some time, and have now gotten around to a minimal first pass.  The logic itself isn't terribly complicated.  A separate GL framebuffer object is created in order to render into a texture, then pretty standard GL vertex and fragment shaders are used to render some triangles and lines.  Initially, I'm just rendering the robot body, the commanded and actual feet positions, speeds, and forces, and an estimate of the ground underneath them.
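
tplot2 itself is C++, but the render-to-texture setup is short enough to sketch.  Roughly the following, shown here with the python glfw and PyOpenGL bindings purely for illustration; the sizes and names are placeholders, not the actual tplot2 code:

```python
import glfw
from OpenGL.GL import *

W, H = 1280, 720

# A hidden window just to obtain a GL context; tplot2 already has one
# from its Dear ImGui setup.
glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
window = glfw.create_window(W, H, "offscreen", None, None)
glfw.make_context_current(window)

# Color texture that the 3D scene will be rendered into.
scene_tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, scene_tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

# Depth attachment so the triangles z-test correctly.
depth_rb = glGenRenderbuffers(1)
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H)

# Framebuffer object tying the two together.
fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, scene_tex, 0)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb)
assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

# While the FBO is bound, ordinary vertex/fragment shader draws of the
# body, feet, and ground land in scene_tex; afterwards bind framebuffer 0
# again and hand scene_tex to the UI as a plain image.
glBindFramebuffer(GL_FRAMEBUFFER, 0)
```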

20200515-tplot2-3drender

20200515-tplot2-3drender-video

While there is a lot of room for improvement here, both in the visual quality of the existing renderings and in new features that could be rendered, this is already proving invaluable in diagnosing long-standing problems with the gait motion.

Updated serialization library (diagnostics part 1)

Now that the qdd100 servo is in beta, the IMU is working at full rate, and the quad A1 is moving around, I'm getting closer to actually working on improving the gaits the machine can execute.  To date, the gaits I have used completely ignore the IMU and rely only on feedback from the joints to maintain force in 3D.  With tuning, and on controlled surfaces, this can work well, but outside that happy regime the robot can undergo significant pitch and roll movements during the leg swing phase, which at best results in a janky walk, and at worst results in oscillation or outright instability.

There are also a number of as-yet-unidentified problems that seemingly cause the feet to not track the ground position properly, resulting in the feet slipping on the floor despite being nearly fully loaded.

Tackling all these new domains requires some improvements to my diagnostics infrastructure and tools.  I'll cover those improvements over a few posts, since the work has covered a fair amount of ground.  I'll start with something I mostly completed back in the summer of 2019; it has the least direct impact, but provides background for some of the other upcoming changes.

Telemetry format

Since its inception in 2014, Super Mega Microbot has used a self-describing serialization and telemetry format loosely based on work I had previously done professionally at Bluefin Robotics and then Jaybridge Robotics.  This format was then the basis for later work at Jaybridge and Toyota Research Institute.  The basic idea breaks down like this:

  • The schema which describes the data and the data are separate entities
  • The schema is recorded alongside the data whenever it is written to persistent storage
  • The schema contains sufficient information to reconstruct a CSV- or JSON-like representation of the data with no additional metadata
  • Tools can map a given on-disk schema to a possibly different in-memory one using a schema evolution algorithm
  • The data is serialized and stored in a manner which is very efficient to write at high rates from realtime processes
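
To make the schema/data split concrete, here is a toy illustration of the principle in Python.  The field names, framing, and type encoding are invented for this example; they are not the actual on-disk format:

```python
import json
import struct

# A toy single-channel log: the schema is written once up front, then the
# records are just packed bytes.  (Names, framing, and types are invented
# for illustration only.)
SCHEMA = {
    "name": "imu",
    "fields": [
        {"name": "timestamp", "type": "double"},
        {"name": "accel_x", "type": "float"},
        {"name": "accel_y", "type": "float"},
        {"name": "accel_z", "type": "float"},
    ],
}
TYPE_CODES = {"double": "d", "float": "f"}


def write_log(path, records):
    fmt = "<" + "".join(TYPE_CODES[f["type"]] for f in SCHEMA["fields"])
    with open(path, "wb") as f:
        schema_bytes = json.dumps(SCHEMA).encode()
        f.write(struct.pack("<I", len(schema_bytes)))   # schema, once
        f.write(schema_bytes)
        for record in records:                          # then raw data
            f.write(struct.pack(fmt, *record))


def read_log(path):
    with open(path, "rb") as f:
        (schema_len,) = struct.unpack("<I", f.read(4))
        schema = json.loads(f.read(schema_len))
        fmt = "<" + "".join(TYPE_CODES[x["type"]] for x in schema["fields"])
        names = [x["name"] for x in schema["fields"]]
        size = struct.calcsize(fmt)
        while True:
            blob = f.read(size)
            if len(blob) < size:
                return
            # The schema in the file is all that is needed to turn raw
            # bytes back into named values.
            yield dict(zip(names, struct.unpack(fmt, blob)))
```

The point is that the reader needs nothing but the bytes in the file: the schema travels with the data, and the data itself stays compact.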

Compared to other serialization mechanisms, this has different trade-offs.

  • Formats like JSON and XML either include the complete schema in each data instance, or include a large amount of self-describing information in each instance that is not strictly necessary to represent the data
  • Formats like protobuf, capnproto, flatbuffers, and SBE make a different tradeoff.  They are geared towards performance, but largely also assume a single canonical source of schema data that is shared through an independent side channel and has a single linear revision history.  This makes sense for server RPC, where client and server each deploy (possibly different) versions of the schema and want to communicate without having to exchange it.  They also include more metadata in the data stream than is strictly required, and many of them are more expensive to serialize or deserialize.
  • The closest to this work is Apache AVRO.  It uses the same principle of separate schema and data, and expects the schema to be stored alongside the data.  It also requires no code generation, which many of the above tools do require.

The unique pieces in this work over AVRO are that:

  • The data format is such that many common in-memory structures can simply be bit-copied as serialized data with no further effort.  Those that do require some manipulation still need no additional in-memory structures for serialization.  This combines a property of protobuf, in that the serialization objects can be used as mutable state, with one of capnproto, in that serialization can be zero cost.
  • No recursion or pointers are supported, which renders the necessary code very simple.  The entirety of the C++ serialization and deserialization library is only a few hundred lines of code and took less than a week overall to write, unit test, and debug over the 6 years I’ve been using it.  It also functions perfectly fine in microcontroller-based embedded environments like the moteus controller.
  • The on-disk format is designed for rapid random seek access in time, assuming that small-ish records are written regularly.

The downsides are that it isn't widely supported, it isn't optimized to handle single structures with very large serialized representations, and the only language bindings aside from C++ are read-only ones for python and TypeScript.

In future articles, I’ll describe a bit of the detail of the recently revised design, then go into the tools that use it.

 

Quad feet construction fixture

The quad A1 was the first robot I built with foam-cast feet.  When I did the first feet, I jury-rigged a fixture from some old toilet paper rolls to hold things in place while they were curing.  When I went to rebuild with my most recent leg geometry, I figured it was time to get at least a little more serious.  Thus, my new leg casting fixture:

dsc_0578

When an insert is cast into place, it is set on one of the trays, the tray is inserted into a slot, and then a weight can be placed on top and constrained by the fixture.

dsc_0579

This makes the casting process more repeatable and faster as I scale up production.  As a bonus, it can also be used as a fixture to epoxy the lower leg to the insert:

dsc_0591

quad A1 chassis updates

I finally got around to fixing a number of minor glitches in the quad A1’s chassis recently.

1. The raspberry pi is now far enough away from the left panel that you can connect the HDMI cable if you choose.

20200506-rpi_mounting

2. I no longer have vestigial studs for the pre-quad A0 junction board on the other side.

20200506-power_dist

3. The switch got moved down to between the legs.

dsc_0631

4. So that the entire top surface can be used for mounting things if necessary (note the additional inserts at 160mm diameter).

dsc_0632

5. And finally, I added shielding on the inside to cover up the guts on the left and right sides.

dsc_0633

dsc_0634

Nothing too significant, but I had a running list and it was getting long enough that I figured it made sense to finally knock them off.

 

Leg zeroing fixture

As part of provisioning a quad A1, or any time the mechanical configuration has changed, I need to go and record the zero position of all the joints.  The "0" position for the software is now with the shoulders perfectly horizontal and the upper and lower legs sticking straight down.

Up until now, every time I've done this it has just been by eyeballing it, with lots of foam and bubble wrap to shim things into place long enough to record the level.  Sometimes I had to go back and try a few times, as even determining when something is straight is not, well, straightforward.

So, I made two new fixtures to help with this process:

dsc_0608

The first rests on a flat surface, supports the shoulder so that it is exactly level, and forces the upper leg to sit at exactly a 90 degree angle.  This assumes the robot is flipped over on the same flat surface.  The second snaps between the upper and lower leg, forcing them to be exactly straight.

With these two fixtures, I was able to get repeatability of my calibrations down to less than half a degree, which should be good enough for now.

 

New Mech Warfare turret

Another of the tasks I've set for myself with regards to future Mech Warfare competitions is redesigning the turret.  The previous turret I built had some novel technical features, such as active inertial gimbal stabilization and automatic optical target tracking; however, it had some problems too.  The biggest one for my purposes now was that it still used the old RS485-based protocol and not the new CAN-FD based one.  Second, the turret had some dynamic stability and rigidity issues.  The magazine consisted of an aluminum tube sticking out of the top, which made the entire thing very top heavy.  The 3d printed fork is the same one I had made at Shapeways 5 years ago.  It is amazingly flexible in the lateral direction, which results in a lot of undesired oscillation if the base platform isn't perfectly stable.  I've learned a lot about 3d printing and mechanical design in the meantime (but of course still have a seemingly infinite amount more to learn!) and think I can do better.  Finally, cable management between the top and bottom was always challenging.  You want to have a large range of motion, but keeping power and data flowing between the two rotating sections was never easy.

dsc_0529
The legacy turret

My concept with this redesign is twofold: first, make the turret basically an entirely separate robot with no wires connecting it to the main robot; and second, try to use as many components from the quad A1 as I can to demonstrate their, well, flexibility.  Thus, this turret will have a separate battery, power distribution board, raspberry pi, pi3 hat, and a moteus controller for each axis of motion.  These are certainly overkill, but hey, the quad A1 can carry a lot of weight.

The unique bits will be a standalone FPV camera, another camera attached to the raspberry PI for target tracking, a targeting laser, and the AEG mechanism, including a new board to manage the firing and loading functions.

20200423-turret-angle-view
A static rendering

And here’s a quick spin around video:

More to come…

Updated quad pi3 hat

I made a number of tweaks to the quad A1’s raspberry pi hat to get it ready for production, resulting in r4.1 of the board:

dsc_0491

None of the changes were particularly big, but each has some value:

  • The correct switch mode regulator is installed.
  • The auxiliary CAN transceiver was switched to one that supports a larger common mode voltage.  This will allow it to be connected to the power distribution board without smoking.
  • Each of the STM32s now has some GPIO pins connected directly to GPIOs on the raspberry pi, primarily to be used for interrupts.
  • Pin headers expose a few gpio pins from each STM32 for interfacing with random external things.
  • The NRF radio module changed orientation and has improved power filtering.
  • I added a microphone to the auxiliary STM32.  The goal is to eventually use it to more easily synchronize external video with data collected onboard during operation.

I’ll bring this up in a future post!

quad A1 leg updates

When I first designed the full rotation leg, I didn't fully appreciate the importance of torque in the knee joint.  Despite the fact that my first force-based IK showed that, when the legs are directly under the body, the knee joint carries the entire load of the robot, I still managed to not add any reduction there.

The initial design used a 1:1 ratio, because that allowed me to use the same single-piece 3d printed gear design I had used before.  A 28-tooth gear with 5mm pitch is larger than the output plate on the qdd100 servo, so it could just be bolted directly on.  To work with a smaller number of teeth, I had to split the gear into two parts connected by pins, as the gear is now smaller than the qdd100 output plate.

20200415_old_knee_upper_pulley_28
The old knee pulley
20200415_new_knee_upper_pulley_18
And the new one

So that I could use the same belts, I extended the upper leg about 8mm, and while I was at it, extended the lower leg by 15mm to make the overall leg a bit more symmetric.

20200415_new_leg

We’ll see shortly how this works out when printed and assembled.

 

Overlaying video on telemetry data with ffmpeg and OpenGL

While it isn't their primary purpose, I still plan on entering my walking robots in Mech Warfare events when I can.  In that competition, pilots operate the robots remotely, using FPV video feeds.  I eventually aim to get my inertially stabilized turret working again, and when it is, I would like to be able to overlay telemetry and targeting information on top of the video.

In our previous incarnation of Super Mega Microbot, we had a simple UI which accomplished that purpose, although it had some limitations.  Being based on gstreamer, it was difficult to integrate with other software.  Rendering things on top in a performant manner was certainly possible, although it was challenging enough that in the end we did nothing but render text, as that didn't require quite the same extremes of hoop jumping.  Unfortunately, that meant things like the targeting reticle and other features were just ASCII art carefully positioned on the screen.

Further, we weren't rendering just any video, but video from our custom transport layer.  Unfortunately, it was challenging to get gstreamer to keep rendering frames when no video was coming in, which made it impossible to display the other data that was still arriving, like robot telemetry.

My new solution is to use ffmpeg to render video to an OpenGL texture, which can then be displayed in the background of the Dear ImGui control application I mentioned previously.  This turned out to be more annoying than I had anticipated, mostly because of my lack of familiarity with recent OpenGL and the obscurity of the ffmpeg APIs.  However, once working, it is a very pleasant solution.  ffmpeg provides a simple library interface with no inversion-of-control challenges, and it can render nearly anything (it is what gstreamer was using under the hood anyway).
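
The core loop is just: decode a frame, convert it to packed RGB, and upload it into a GL texture that the UI draws as its background.  Here is a rough sketch of that idea using the python PyAV bindings instead of the C++ ffmpeg API the real tool uses; the file name and helper names are placeholders:

```python
import av                  # PyAV: python bindings over the ffmpeg libraries
from OpenGL.GL import *


def decode_frames(source, **open_kwargs):
    """Yield decoded frames as packed RGB numpy arrays."""
    container = av.open(source, **open_kwargs)
    for frame in container.decode(video=0):
        # Convert whatever pixel format the decoder produced into packed
        # RGB suitable for uploading with glTexImage2D.
        yield frame.to_ndarray(format="rgb24")


def create_video_texture():
    """Allocate a GL texture suitable for streaming video into."""
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    return tex


def upload(texture, rgb):
    """Copy one decoded frame into the texture (GL context must be current)."""
    h, w, _ = rgb.shape
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
    glBindTexture(GL_TEXTURE_2D, texture)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb)

# Inside the UI loop, with the Dear ImGui GL context already current:
#   tex = create_video_texture()
#   for rgb in decode_frames("recording.mp4"):  # or a capture device,
#       upload(tex, rgb)                        # e.g. format="v4l2"
#       ...draw tex as a full-screen background quad...
```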

I ended up writing a set of simple wrappers around GL and ffmpeg to make them easier to manage.

What I'm planning on using, and what I've tested with, is just a USB FPV receiver and an off-the-shelf FPV transmitter.  They are semi-standard at Mech Warfare events, so at least I'll succeed or fail with everyone else.  The capture card just presents a 640×480 MJPEG stream at 30fps, which ffmpeg has no problem dealing with:

dsc_0426

2020-04-06-133646_1485x724_scrot