Tag Archives: video

moteus getting started video 2023 edition!

The peril of technical videos is that they can become out of date pretty quickly. The moteus getting started guide has held up amazingly well, but enough has changed since 2021 that it is time for a new one. Now we can use acceleration- and velocity-limited commands instead of the deprecated stop position, and there is nicer 4k b-roll for all the intermissions. Everything in the old video, and in this new one, applies to both the moteus-r4.11 and the moteus-n1.

Enjoy!

mjbots November 2020 Update

Here’s the approximately annual giant video update:

If you’re interested in any of the topics in more detail, I’ve collected links to individual posts for each of the referenced items below.

Thanks for all your support in the last year!

Moteus

Announcement of moteus r4.3: Production moteus controllers are here!

Automated programming and test setup: Programming and testing moteus controllers

Dynamometer: Measuring torque ripple, Initial dynamometer assembly

Continuous rotation: Unlimited rotations for moteus

The virtual wall control mode: New “stay within” control mode for moteus

Handling magnetic saturation: Dealing with stator magnetic saturation

moteus r4.5

qdd100

Discussion of the overall design, and details on individual sub-components:

And the pre-production mk2 servos: Pre-production mk2 servos

Accessories

fdcanusb: Introduction and bringing it up

power_dist: The failed r2, the closer to working r3, and the final r3.1

pi3hat: Initial announcement, bringing it up, and measuring its performance

Demonstrations

Ground truth torque testing: Ground truth torque testing for the qdd100

Skyentific’s telepresence clone: qdd100 telepresence demo

kp and kd tuning: Spring and damping constants

quad A1 – Hardware

Lower leg updates:

Chassis: The first introduction, and some minor tweaks

Cable conduit changes: New leg cable management

quad A1 – Software

Cartesian coordinate control: Cartesian leg PD controller

Pronking: Successful pronking!

tplot2 and its sub-pieces:

Simulation: Resurrected quadruped simulator

nrf24l01 transceiver and its sub-components

Smooth leg motion: Improved swing trajectory

Balancing

All four feet off the ground: Higher speed gait formulation, and Stable gait sequencing

Improved stand up sequence: quad A1 stand-up sequence part N

Speed records:

Video and telemetry synchronization (diagnostics part 8)

This is part of a continuing series on updated diagnostic tools for the mjbots quad A1 robot.  Previous editions are in 1, 2, 3, 4, 5, 6, and 7.  Here I’ll be looking at one of the last pieces of the puzzle, synchronizing the video with the rest of the telemetry.

As mentioned previously, recording video of a robot running is an easy, cheap, and fast way to provide ground truth information on all of the sensors and actuators.  However, it is only truly useful if it can be accurately synchronized in time to the other telemetry streams for the robot.

Options

This was part of the puzzle that I spent a long time thinking about before I got started, as there were several possible options that seemed like they could work:

Visual

The concept here would be to put an LED beacon on the robot that is visible from all angles.  It could strobe a synchronizing pattern, like the output from an LFSR, which could then be identified in the subsequent video frames.
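For illustration only (this isn't something I ever built), a maximal-length LFSR is one convenient way to generate a long, non-repeating strobe pattern, advancing one on/off decision per camera frame. A minimal Python sketch of the idea:

```python
def lfsr_bits(seed=0xACE1):
    """Yield one pseudo-random bit per video frame from a 16-bit Fibonacci LFSR.

    Taps at bits 16, 14, 13, and 11 give a maximal-length sequence, so the
    pattern only repeats every 65535 frames -- long enough to uniquely
    locate any frame of a typical clip.
    """
    state = seed
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state & 1


# The beacon would turn the LED on whenever the current bit is 1,
# advancing one bit per camera frame.
gen = lfsr_bits()
print([next(gen) for _ in range(30)])
```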

Pros: This should be able to give frame accurate synchronization, and works even for my 1000 fps camera which can’t record audio.

Cons: It is hard to find a good place to mount a light which could be observed from all angles.  The top is the best bet, but I have plans to attach further things there, which would then render synchronization infeasible.

Audio

In this concept, I put a microphone on the robot and have it record audio of the environment during its run.  Then standard audio synchronization algorithms can be used to align the two streams.  I actually included a microphone on the most recent version of the pi3 hat to potentially use this approach.

Pros: This has no visibility requirements, and should be able to give synchronization accuracy well under a single frame of video.

Cons: Getting the microphone data off the pi3 hat was looking to be moderately annoying, as the STM32 which it is connected to is already streaming IMU and RF data back to the robot over its single SPI bus.  When I brought up the board, I verified I could get 1kHz audio off, but that isn’t enough to be useful.

IMU

This was the idea I had last, and what I am using now.  Here, I slap the side of the robot in a semi-random pattern during the video.  That results in an audio signature in the video, as well as lateral accelerometer readings.

Pros: No additional hardware or software is required anywhere on the robot.

Cons: This has worse accuracy than pure audio, as the IMU is only sampled at 400Hz and doesn’t perfectly correspond to the audio found in the video.

Implementation

I took a stab at the IMU version, since it looked to be the easiest and still gave decent performance.  I put together a simple Python tool which reads in the robot telemetry data and the audio stream of a video file, then lets the user select rough ranges of the two streams to work from.

It then uses scipy.signal.correlate to find the offset that best aligns the two data streams, and produces a plot of the resulting alignment.
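The heart of that is just a cross-correlation once both signals are on a common sample rate. The sketch below is not the actual tool, just a minimal illustration of the approach with hypothetical inputs: `imu_times`/`imu_accel` taken from the telemetry log and a mono `audio` array decoded from the video:

```python
import numpy as np
import scipy.signal


def find_offset(imu_times, imu_accel, audio, audio_rate, common_rate=400.0):
    """Return the time offset (s) of the video audio relative to the IMU data.

    Both signals are rectified and resampled onto a common rate, then
    cross-correlated; the lag with the highest score gives the alignment.
    """
    # Collapse the audio down to one rectified sample per common-rate interval.
    step = int(audio_rate / common_rate)
    trimmed = audio[:len(audio) // step * step]
    audio_env = np.abs(trimmed).reshape(-1, step).max(axis=1)

    # Put the lateral accelerometer readings on the same uniform rate.
    t_uniform = np.arange(imu_times[0], imu_times[-1], 1.0 / common_rate)
    imu_env = np.abs(np.interp(t_uniform, imu_times, imu_accel) - np.mean(imu_accel))

    # Cross-correlate and convert the best lag back into seconds.
    corr = scipy.signal.correlate(audio_env, imu_env, mode="full")
    lag = np.argmax(corr) - (len(imu_env) - 1)
    return lag / common_rate
```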

[Figure: video_aligner alignment plot]

As you can see, the audio rings out for some time after the IMU stops its high frequency response, largely due to the mechanical damping of the robot.  However, it is enough for the correlation to work with and give frame accurate results.

3D rendering in tplot (diagnostics part 7)

In previous posts of this series, I covered some diagnostics improvements I’ve made to help work on more advanced gaits for the mjbots quad A1 (1, 2, 3, 4, 5, 6).  This post will cover the last major new piece of diagnostics I added to tplot2, 3d rendering of telemetry data.

3D rendering

While it should be obvious, I’ll give a little exposition.  tplot2 in its state prior to this could show a “tree view” of all data logged in numeric form.  It had a “plot view” which let you plot any single floating point scalar vs time.  More recently, it could also render video associated with a given point in time in the log.  However, as anyone who has ever tried to debug a three-dimensional software application, much less a three-dimensional robot, can attest, debugging with scalar numbers and time plots is only productive for a very limited range of problems.

I’ve been wanting to extend my plotting tools with 3d rendering for some time, and have now gotten around to a minimal first pass.  The logic itself isn’t terribly complicated.  A separate GL framebuffer object is created in order to render into a texture, then pretty standard GL vertex and fragment shaders are used to render some triangles and lines.  Initially, I’m just rendering the robot body, the commanded and actual foot positions, speeds, and forces, and an estimate of the ground underneath them.
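tplot2 itself is C++, but the render-to-texture setup is the stock GL recipe. Purely as an illustration (and assuming a GL context is already current), roughly the same sequence of calls looks like this with PyOpenGL:

```python
from OpenGL import GL


def create_render_target(width, height):
    """Create an FBO with a color texture and depth buffer to render into."""
    # Color texture that the 3D view is rendered into and later displayed.
    texture = GL.glGenTextures(1)
    GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
    GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, width, height, 0,
                    GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, None)
    GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
    GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)

    # Depth renderbuffer so the triangles and lines occlude correctly.
    depth = GL.glGenRenderbuffers(1)
    GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, depth)
    GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT24,
                             width, height)

    # Attach both to a framebuffer object; rendering while it is bound
    # writes into the texture instead of the screen.
    fbo = GL.glGenFramebuffers(1)
    GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo)
    GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
                              GL.GL_TEXTURE_2D, texture, 0)
    GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT,
                                 GL.GL_RENDERBUFFER, depth)
    assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) ==
            GL.GL_FRAMEBUFFER_COMPLETE)
    GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
    return fbo, texture
```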

[Figure: tplot2 3D render view]

[Figure: tplot2 3D render view alongside video]

While there is a lot of room for improvement here, both in terms of the visual quality of the existing renderings, and new features that could be rendered, this is already proving itself to be invaluable in diagnosing longstanding problems with the gait motion.

Video in tplot2 (diagnostics part 6)

This is part of a continuing series on diagnostics tooling for the mjbots quad series of robots.  The previous editions can be found at 1, 2, 3, 4, and 5.  Here, I’ll cover the first extension I developed for tplot2 to make it more useful to diagnose dynamic locomotion issues.

Background

Diagnosing problems on robots is hard.  The data rates are high, sensing is imperfect, and there are many state variables to keep track of.  Keeping track of problems related to erroneous perception is doubly challenging.  Without a recording of the ground truth of an event, it can be hard to even know if the sensing was off, or if some other aspect was broken.  Fortunately, for things the size and scope of small dynamic quadrupeds, video recording provides a great way to keep a record of the ground truth state of the machine.  Relatively inexpensive equipment can record high resolution images at hundreds of frames a second, documenting exactly where all the extremities of the robot were and what it was doing in time.

To take advantage of that, my task here is to get video playback integrated into tplot2, so that the current image from some video can be shown on the screen synchronized with the timeline scrubber.

Making it happen

Here I was able to use large amounts of the code that I developed for the Mech Warfare control application.  I already had the ability to render ffmpeg data to an OpenGL texture.  The missing pieces I needed were getting that texture into an imgui window and adding seek support.

The former was straightforward.  imgui has Image and ImageButton widgets, which let you draw an arbitrary OpenGL texture as an imgui widget.

The latter was a little more annoying, only because the ffmpeg API had a slightly unusual behavior.  Even after av_seek_frame was called, one frame from the old point would still be emitted.  This confused my seeking logic, possibly causing it to ignore frames.  However, after discarding that one stale frame, it seemed to work just fine.
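The fix amounts to decoding until a frame at or after the requested time shows up. tplot2 does this against the ffmpeg C API directly; the sketch below shows the same discard-until-target idea in Python using PyAV (which wraps ffmpeg), with a hypothetical file name:

```python
import av


def frame_at(container, stream, target_seconds):
    """Seek near target_seconds and return the first frame at or after it."""
    # Seek offsets are in the stream's time_base units; the seek lands on a
    # keyframe, and a stale frame from before the target may still come out.
    container.seek(int(target_seconds / stream.time_base), stream=stream)
    for frame in container.decode(stream):
        # Discard anything earlier than the requested time.
        if frame.time is not None and frame.time >= target_seconds:
            return frame
    return None


container = av.open("robot_run.mp4")   # hypothetical video file
video_stream = container.streams.video[0]
frame = frame_at(container, video_stream, 12.5)
```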

[Figure: tplot2 with synchronized video view]

Next I’ll cover the last major piece I added to tplot2 to help with issue diagnosis.