
3D rendering in tplot (diagnostics part 7)

In previous posts of this series, I covered some diagnostics improvements I’ve made to help work on more advanced gaits for the mjbots quad A1 (1, 2, 3, 4, 5, 6).  This post covers the last major new piece of diagnostics I added to tplot2: 3D rendering of telemetry data.

3D rendering

While it should be obvious, I’ll give a little exposition.  Prior to this, tplot2 could show a “tree view” of all data logged in numeric form.  It had a “plot view” which let you plot any single floating point scalar vs. time.  Recently, it also gained the ability to render video associated with a given point in time in the log.  However, as anyone who has ever tried to debug a three-dimensional software application, much less a three-dimensional robot, can attest, debugging with scalar numbers and time plots is only productive for a very limited range of problems.

I’ve been wanting to extend my plotting tools with 3D rendering for some time, and have now gotten around to a minimal first pass.  The logic itself isn’t terribly complicated.  A separate GL framebuffer object is created in order to render into a texture, and then pretty standard GL vertex and fragment shaders are used to render some triangles and lines.  Initially, I’m just rendering the robot body; the commanded and actual foot positions, speeds, and forces; and an estimate of the ground underneath them.
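For those curious about the mechanics, the framebuffer setup amounts to something like the following sketch (the names and sizes here are illustrative, not the actual tplot2 code):

    // Create a framebuffer object with a color texture and depth
    // renderbuffer attached, so the 3D scene can be rendered off-screen.
    GLuint fbo = 0;
    GLuint color_texture = 0;
    GLuint depth_rbo = 0;
    const int kWidth = 1024;
    const int kHeight = 768;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // The color attachment is the texture the UI will later display.
    glGenTextures(1, &color_texture);
    glBindTexture(GL_TEXTURE_2D, color_texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, kWidth, kHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_texture, 0);

    // A depth attachment so the triangles and lines sort correctly.
    glGenRenderbuffers(1, &depth_rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                          kWidth, kHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
      // report the error
    }

    // Each frame: bind the FBO, draw with the usual vertex/fragment
    // shaders, then unbind so the resulting texture can be displayed.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, kWidth, kHeight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... glUseProgram(...), glDrawArrays(...), etc. ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);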

[Image: 20200515-tplot2-3drender]

[Image: 20200515-tplot2-3drender-video]

While there is a lot of room for improvement here, both in the visual quality of the existing renderings and in new features that could be rendered, this is already proving invaluable in diagnosing longstanding problems with the gait motion.

Video in tplot2 (diagnostics part 6)

This is part of a continuing series on diagnostics tooling for the mjbots quad series of robots.  The previous editions can be found at 1, 2, 3, 4, and 5.  Here, I’ll cover the first extension I developed for tplot2 to make it more useful to diagnose dynamic locomotion issues.

Background

Diagnosing problems on robots is hard.  The data rates are high, sensing is imperfect, and there are many state variables to keep track of.  Keeping track of problems that are related to erroneous perception is doubly challenging.  Without a recording of the ground truth of an event, it can be hard to even know whether the sensing was off or some other aspect was broken.  Fortunately, for things the size and scope of small dynamic quadrupeds, video recording provides a great way to keep a record of the ground truth state of the machine.  Relatively inexpensive equipment can record high resolution images at hundreds of frames a second, documenting exactly where all of the robot’s extremities were and what it was doing over time.

To take advantage of that, my task here is to get video playback integrated into tplot2, so that the current image from some video can be shown on the screen synchronized with the timeline scrubber.

Making it happen

Here I was able to use large amounts of the code that I developed for the Mech Warfare control application.  I already had the ability to render ffmpeg data to an OpenGL texture.  The missing pieces I needed were getting that texture into an imgui window and adding seek support.

The former was straightforward.  imgui has Image and ImageButton widgets which allow you to draw an arbitrary OpenGL texture into an imgui window.
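With the OpenGL backend, the texture handle is just cast to imgui’s texture ID type, so showing a frame is roughly as follows (the window and variable names here are hypothetical):

    // 'video_texture', 'video_width', and 'video_height' are assumed to
    // be produced elsewhere by the decoding path.
    ImGui::Begin("Video");
    ImGui::Image((ImTextureID)(intptr_t)video_texture,
                 ImVec2(video_width, video_height));
    ImGui::End();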

The latter was a little more annoying, only because the ffmpeg API had a slightly unusual behavior.  Even after av_seek_frame was called, one frame from the old position would still be emitted.  This confused my seeking logic, possibly causing it to ignore frames.  However, after discarding that one stale frame, seeking worked seemingly just fine.
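In case it helps anyone else, the workaround amounts to something like this sketch, where 'fmt_ctx', 'codec_ctx', 'stream_index', and the target time are assumed to already exist and error handling is omitted:

    // Seek to the nearest keyframe at or before the requested time.
    const int64_t target_pts = av_rescale_q(
        target_time_us, AVRational{1, 1000000},
        fmt_ctx->streams[stream_index]->time_base);
    av_seek_frame(fmt_ctx, stream_index, target_pts, AVSEEK_FLAG_BACKWARD);
    avcodec_flush_buffers(codec_ctx);

    bool discarded_stale_frame = false;
    AVPacket* packet = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt_ctx, packet) >= 0) {
      if (packet->stream_index == stream_index) {
        avcodec_send_packet(codec_ctx, packet);
        while (avcodec_receive_frame(codec_ctx, frame) >= 0) {
          if (!discarded_stale_frame) {
            // The first frame emitted after the seek may still be from
            // the old position, so throw it away.
            discarded_stale_frame = true;
            continue;
          }
          // ... keep decoding until the frame at/after target_pts is
          // reached, then break out of both loops ...
        }
      }
      av_packet_unref(packet);
    }
    av_frame_free(&frame);
    av_packet_free(&packet);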

[Image: 20200515-tplot2-video]

Next I’ll cover the last major piece I added to tplot2 to help with issue diagnosis.

tplot2 (diagnostics part 5)

In previous posts (1, 2, 3, 4), I covered the updates I made to the underlying serialization and log file format used in mjlib and the quad A1.  This time I’ll talk about the graphical application that uses that data to investigate live operation.

History

You might note the “2” in the name and realize that yes, this is the second incarnation in the mjmech repository, tplot being the first.  The original tplot.py was largely a one-day hack job that glued together the python log bindings I had with matplotlib.  It provided a time scrubber, a tree view, and a plot window where any number of things could be plotted against one another.

[Image: 20200514-tplot]

It did have a number of problems:

  • Speed: The original tplot read the entirety of the log into memory before letting you view any of it.  Further, while reading it into memory, it converted everything into python structures.  This took some time for even relatively short logs.
  • Coding efficiency: This might seem paradoxical, but developing GUIs in PySide still takes a decent amount of time, even if you don’t care what they look like at all.  Either you have all the overhead of using Qt Designer, and thus have to manage either UI file loading or compiling, or you design the layouts in code and get mysterious layout issues because the exact construction requirements for valid layouts are very hard to determine without looking at the Qt source.  There are so many signals to connect to slots, so much state to manage, and anything non-trivial requires deriving custom widget classes with many virtual methods to override.
  • Integration with video: Yes, Qt has a video subsystem, but it is intended for live playback, not frame-accurate seeking, and it takes a lot of overhead to use effectively.
  • Build footprint: Except for tplot, I have moved the entirety of the code and its transitive dependencies for the quad A1 to be built from source under bazel.  This makes cross compiling easy, as well as making cross-platform and cross-distribution support relatively painless.  While I have converted some large things to bazel (gstreamer), Qt and PySide were a bridge too far.
  • Python support: PySide1 only supports Qt 4.  Qt 5 had no permissively licensed python bindings until very recently; while they are in Ubuntu 20.04, they didn’t make it into 18.04.  That isn’t a deal-breaker of course, just an annoyance.

tplot2

For tplot2, I decided to try my hand at using the Dear ImGui library that I used for the Mech Warfare control interface.  It is remarkably concise, very quick to develop for, looks at least “OK”, and has no dependencies other than OpenGL.  Once I had multiple axis support in implot, getting to tplot1-level functionality was remarkably quick, maybe a day of effort in total:

[Image: 20200514-tplot2]
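The plotting itself is just a handful of implot calls per frame; a multi-axis sketch looks something like the following (using the current ImPlot API, which has evolved since this was written, and with made-up signal names):

    // 'times', 'positions', and 'velocities' are parallel
    // std::vector<double> columns pulled from the log.
    if (ImPlot::BeginPlot("telemetry")) {
      ImPlot::SetupAxes("time (s)", "position");
      // A second y axis lets signals with different units share one plot.
      ImPlot::SetupAxis(ImAxis_Y2, "velocity", ImPlotAxisFlags_AuxDefault);

      ImPlot::PlotLine("foot_z", times.data(), positions.data(),
                       static_cast<int>(times.size()));

      ImPlot::SetAxes(ImAxis_X1, ImAxis_Y2);
      ImPlot::PlotLine("foot_z_velocity", times.data(), velocities.data(),
                       static_cast<int>(times.size()));

      ImPlot::EndPlot();
    }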

Next

Next up, I’ll cover the improvements that I made to tplot2 that made it worth all the effort.

Overlaying video on telemetry data with ffmpeg and OpenGL

While not its primary purpose, I still plan on entering my walking robots in Mech Warfare events when I can.  In that competition, pilots operate the robots remotely, using FPV video feeds.  I eventually aim to get my inertially stabilized turret working again, and when it is working I would like to be able to overlay the telemetry and targeting information on top of the video.

In our previous incarnation of Super Mega Microbot, we had a simple UI which accomplished that purpose, although it had some limitations.  Being based on gstreamer, it was difficult to integrate with other software.  Rendering things on top in a performant manner was certainly possible, although it was challenging enough that in the end we did nothing but render text, as that didn’t require quite the same extremes of hoop jumping.  Unfortunately, that meant things like the targeting reticule and other features were just ASCII art carefully positioned on the screen.

Further, we didn’t just render any video, but that from our custom transport layer.  Unfortunately, it was challenging to get gstreamer to keep rendering frames even when no video was coming in.  That made it impossible to display the other data that was arriving, like robot telemetry.

My new solution is to use ffmpeg to render video to an OpenGL texture, which can then be displayed in the background of the Dear ImGui control application I mentioned previously.  This turned out to be more annoying than I had anticipated, mostly because of my lack of familiarity with recent OpenGL and the obscurity of the ffmpeg APIs.  However, once working, it is a very pleasant solution.  ffmpeg provides a simple library interface with no inversion-of-control challenges, and it can render nearly anything (and is what gstreamer was using under the hood anyway).
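The core loop is just converting each decoded AVFrame to RGBA with libswscale and uploading it into a texture, roughly like the following simplified sketch ('frame' and 'texture' are assumed to exist, and the real code caches the SwsContext rather than recreating it every frame):

    // Convert the decoded frame (typically YUV) to RGBA.
    SwsContext* sws = sws_getContext(
        frame->width, frame->height,
        static_cast<AVPixelFormat>(frame->format),
        frame->width, frame->height, AV_PIX_FMT_RGBA,
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    std::vector<uint8_t> rgba(frame->width * frame->height * 4);
    uint8_t* dst_data[4] = {rgba.data(), nullptr, nullptr, nullptr};
    int dst_linesize[4] = {frame->width * 4, 0, 0, 0};
    sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
              dst_data, dst_linesize);
    sws_freeContext(sws);

    // Upload the pixels into the texture that is drawn as the background.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, frame->width, frame->height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());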

I ended up writing a bunch of simple wrappers around GL and ffmpeg to make it all easier to manage.
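To give a flavor of what those wrappers look like, they are mostly small RAII helpers along these lines (illustrative only; the actual classes in mjmech differ in name and scope):

    // Owns a GL texture name and releases it when destroyed.
    class GlTexture {
     public:
      GlTexture() { glGenTextures(1, &id_); }
      ~GlTexture() { glDeleteTextures(1, &id_); }

      GlTexture(const GlTexture&) = delete;
      GlTexture& operator=(const GlTexture&) = delete;

      GLuint id() const { return id_; }

     private:
      GLuint id_ = 0;
    };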

What I’m planning on using, and what I’ve tested with, is just a USB FPV receiver and an off-the-shelf FPV transmitter.  They are the semi-standard at Mech Warfare events, so at least I’ll succeed or fail with everyone else.  The capture card just presents a 640×480 MJPEG stream at 30fps, which ffmpeg has no problem dealing with:

[Image: dsc_0426]

[Image: 2020-04-06-133646_1485x724_scrot]
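For reference, opening a capture device like this through ffmpeg’s libavdevice looks roughly like the following (a hedged sketch; the device path and option values are assumptions for a typical Linux V4L2 setup):

    avdevice_register_all();

    // The v4l2 demuxer exposes USB capture devices as normal inputs.
    auto* input_format = av_find_input_format("v4l2");

    AVDictionary* options = nullptr;
    av_dict_set(&options, "input_format", "mjpeg", 0);
    av_dict_set(&options, "video_size", "640x480", 0);
    av_dict_set(&options, "framerate", "30", 0);

    AVFormatContext* fmt_ctx = nullptr;
    if (avformat_open_input(&fmt_ctx, "/dev/video0",
                            input_format, &options) < 0) {
      // report the error
    }
    av_dict_free(&options);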