While not its primary purpose, I still plan on entering my walking robots in Mech Warfare events when I can. In that competition, pilots operate the robots remotely, using FPV video feeds. I eventually aim to get my inertially stabilized turret working again, and once it is, I would like to overlay telemetry and targeting information on top of the video.
In our previous incarnation of Super Mega Microbot, we had a simple UI that accomplished that purpose, although it had some limitations. Because it was based on gstreamer, it was difficult to integrate with other software. Rendering overlays on top of the video in a performant way was certainly possible, but it was challenging enough that in the end we rendered nothing but text, since that didn't require quite the same extremes of hoop jumping. Unfortunately, that meant things like the targeting reticle and other features were just ASCII art carefully positioned on the screen.
Further, we weren't rendering just any video, but video received over our custom transport layer. Unfortunately, it was challenging to get gstreamer to keep rendering frames when no video was coming in, which made it impossible to display the other data that was still arriving, like robot telemetry.
My new solution is to use ffmpeg to render video into an OpenGL texture, which can then be displayed in the background of the Dear ImGui control application I mentioned previously. This turned out to be more annoying than I had anticipated, mostly because of my lack of familiarity with recent OpenGL and the obscurity of the ffmpeg APIs. However, once working, it is a very pleasant solution: ffmpeg provides a simple library interface with no inversion-of-control challenges, and it can handle nearly anything (it is what gstreamer was using under the hood anyway).
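To give a concrete picture of the approach, here is a minimal sketch of the texture upload and background draw steps. The function names are hypothetical, not my actual wrapper code, and it assumes the decoded frame has already been converted to packed RGB (for instance with sws_scale):

```cpp
// Minimal sketch, not the actual wrapper code.  Assumes `rgb_frame` is an
// AVFrame already converted to AV_PIX_FMT_RGB24 and `texture` was created
// with glGenTextures.
#include <cstdint>

#include <GL/gl.h>
#include "imgui.h"

extern "C" {
#include <libavutil/frame.h>
}

// Upload the latest decoded frame into the texture.
void UploadFrame(GLuint texture, const AVFrame* rgb_frame) {
  glBindTexture(GL_TEXTURE_2D, texture);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  // linesize is in bytes; RGB24 is 3 bytes per pixel.
  glPixelStorei(GL_UNPACK_ROW_LENGTH, rgb_frame->linesize[0] / 3);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
               rgb_frame->width, rgb_frame->height, 0,
               GL_RGB, GL_UNSIGNED_BYTE, rgb_frame->data[0]);
  glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}

// Draw that texture behind all Dear ImGui windows each UI frame.
void DrawVideoBackground(GLuint texture) {
  ImDrawList* draw_list = ImGui::GetBackgroundDrawList();
  draw_list->AddImage((ImTextureID)(intptr_t)texture,
                      ImVec2(0, 0), ImGui::GetIO().DisplaySize);
}
```

Drawing through the background draw list keeps the video underneath every ImGui window, so overlays like a reticle or telemetry readouts can be rendered with ordinary draw-list calls on top.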
I ended up writing a bunch of simple wrappers around GL and ffmpeg to make them easier to manage.
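As an illustration of the kind of thing such wrappers can do (these are hypothetical classes, not the project's actual headers): an RAII handle for a GL texture, and a helper that turns ffmpeg's negative error codes into exceptions.

```cpp
// Hypothetical examples of such wrappers, not the project's actual API.
#include <stdexcept>

#include <GL/gl.h>

extern "C" {
#include <libavutil/error.h>
}

// RAII handle for an OpenGL texture, so deletion can't be forgotten.
class GlTexture {
 public:
  GlTexture() { glGenTextures(1, &id_); }
  ~GlTexture() { glDeleteTextures(1, &id_); }

  GlTexture(const GlTexture&) = delete;
  GlTexture& operator=(const GlTexture&) = delete;

  GLuint id() const { return id_; }

 private:
  GLuint id_ = 0;
};

// Turn ffmpeg's negative error codes into exceptions with readable text.
inline int FfmpegCheck(int result) {
  if (result < 0) {
    char buf[AV_ERROR_MAX_STRING_SIZE] = {};
    av_strerror(result, buf, sizeof(buf));
    throw std::runtime_error(buf);
  }
  return result;
}
```

Every libav* call can then be wrapped at the call site, e.g. `FfmpegCheck(av_read_frame(...))`, instead of checking return codes by hand everywhere.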
What I'm planning on using, and what I've tested with, is just a USB FPV receiver and an off-the-shelf FPV transmitter. They are the semi-standard at Mech Warfare events, so at least I'll succeed or fail along with everyone else. The capture card just presents a 640×480 MJPEG stream at 30fps, which ffmpeg has no problem dealing with.
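For reference, opening a device like that through the library API looks roughly like the following sketch. The device path, option values, and minimal error handling are assumptions for illustration, not a copy of my wrapper code:

```cpp
// Sketch of opening a v4l2 MJPEG capture device with libavformat /
// libavdevice.  The device path and options here are assumptions.
extern "C" {
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}

AVFormatContext* OpenCapture(const char* device = "/dev/video0") {
  avdevice_register_all();  // makes the "v4l2" input format available

  auto* input_format = av_find_input_format("v4l2");

  // Ask v4l2 for the 640x480 @ 30fps MJPEG mode the receiver provides.
  AVDictionary* options = nullptr;
  av_dict_set(&options, "input_format", "mjpeg", 0);
  av_dict_set(&options, "video_size", "640x480", 0);
  av_dict_set(&options, "framerate", "30", 0);

  AVFormatContext* context = nullptr;
  const int err = avformat_open_input(&context, device, input_format, &options);
  av_dict_free(&options);
  if (err < 0) { return nullptr; }

  avformat_find_stream_info(context, nullptr);
  return context;
}
```

From there the decode path is the usual avcodec_send_packet / avcodec_receive_frame loop, with sws_scale converting each frame to RGB for the texture upload shown earlier.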
Hello, are you thinking about using OpenCV for visual recognition? Very clean work!
I have in the past: https://youtu.be/aeGUYlaC0Sg
I will probably use a different technique this time around, although likely still using some OpenCV primitives.