Continuing in my series of developments with the Mech Warfare turret, I’ve now managed to replicate the primitive target tracking functionality I had in the v2 version of the turret. This works using a pretty simple principle:
The target closest to the center is deemed the “active target”
The pitch and yaw rate are set based on a simple P controller to bring that target to a known point
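In code, the whole thing is only a few lines. Here’s a minimal sketch (illustrative only, not the actual turret source; the names and sign conventions are made up), assuming the vision pipeline hands over a list of detected target centroids in pixel coordinates:

```cpp
#include <cmath>
#include <vector>

// A detected target, in pixel coordinates.
struct Target { float x; float y; };

// Commanded gimbal rates.
struct RateCommand { float pitch; float yaw; };

// Pick the detection closest to the image center as the "active
// target", then emit rates proportional to its error from the
// desired aim point.
RateCommand TrackTargets(const std::vector<Target>& targets,
                         float center_x, float center_y,
                         float aim_x, float aim_y,
                         float kp) {
  if (targets.empty()) { return {0.0f, 0.0f}; }

  const Target* active = &targets.front();
  float best = std::hypot(active->x - center_x, active->y - center_y);
  for (const auto& t : targets) {
    const float dist = std::hypot(t.x - center_x, t.y - center_y);
    if (dist < best) { best = dist; active = &t; }
  }

  // Simple P controller: rate proportional to pixel error.
  return RateCommand{
      kp * (aim_y - active->y),  // pitch from vertical error
      kp * (aim_x - active->x),  // yaw from horizontal error
  };
}
```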
This works passably, as shown in the video below:
In this architecture, I have a lot of room to improve the performance, both in terms of range and tracking stability. We’ll see what I can come up with in future work.
The Mech Warfare turret concept I’m developing involves having basically two independent robots, the actual robot and the turret. To make that controllable in a sane way, the control station will command and receive telemetry from both simultaneously, allowing control actions to be given in the camera frame of reference. Otherwise, remote piloting is… very challenging.
This could be done just by having two separate transmitters. Since the nrfusb that I’m using is spread spectrum, many transmitters can easily co-exist. However, the protocol is designed such that a single transmitter can communicate with multiple slaves simultaneously, simply by switching channels more frequently.
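To make that concrete, here’s a toy illustration of the idea (this is not the nrfusb firmware, and the hop-channel function here is entirely made up): with two slaves, the master just alternates timeslots between them, advancing each slave’s hop sequence independently, so each slave is polled at half the rate while the radio hops twice as often.

```cpp
#include <cstdint>
#include <cstdio>

struct SlaveState {
  uint32_t id;       // address of this receiver
  uint8_t hop_index; // position in this slave's hop sequence
};

// A stand-in hop sequence; the real one is derived differently.
uint8_t HopChannel(uint32_t id, uint8_t index) {
  return static_cast<uint8_t>((id * 7 + index * 13) % 125);
}

int main() {
  SlaveState slaves[2] = {{0x3045, 0}, {0x3046, 0}};
  for (int slot = 0; slot < 8; slot++) {
    // Round-robin between the two slaves, one timeslot each.
    SlaveState& s = slaves[slot % 2];
    std::printf("slot %d: poll id=0x%04x on channel %d\n",
                slot, s.id, HopChannel(s.id, s.hop_index));
    s.hop_index++;  // hop after every slot for this slave
  }
}
```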
I’ve made enough progress with the turret that I got around to actually implementing this on the nrfusb side. The result is a few new serial commands, “slot tx2” and “slot pri2”, which let you select *which* remote you are talking to, and the ID configuration now supports multiple IDs. Unfortunately, this doesn’t lend itself to a good video demo, but the changeset is up:
This post will be short, because it is just re-implementing the functionality I had in my turrets, versions 1 and 2, but this time using the raspberry pi as the master controller and two moteus controllers, one on each gimbal axis.
I have the raspberry pi running the primary control loop at 400Hz. At each time step it reads the IMU from the pi3 hat, and reads the current state of each servo (although it doesn’t actually use the servo state at the moment). It then runs a simple PID control loop on each axis, aiming to achieve a desired position and rate, which results in a torque command that is sent to each servo.
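Structurally, the loop is something like this sketch (illustrative only; the real code talks to the pi3 hat IMU and the moteus controllers over their actual transports, which are stubbed out here):

```cpp
#include <chrono>
#include <thread>

struct Pid {
  double kp = 0.0, kd = 0.0, ki = 0.0;
  double integral = 0.0;
  // Position and rate errors in, torque command out.
  double Update(double pos_err, double rate_err, double dt) {
    integral += pos_err * dt;
    return kp * pos_err + kd * rate_err + ki * integral;
  }
};

struct ImuSample { double pitch = 0, pitch_rate = 0, yaw = 0, yaw_rate = 0; };

// Placeholders for the pi3 hat IMU and the moteus transport.
ImuSample ReadImu() { return {}; }
void SendTorque(int servo_id, double torque) { (void)servo_id; (void)torque; }

int main() {
  constexpr double kDt = 1.0 / 400.0;  // 400Hz primary loop
  Pid pitch_pid{1.0, 0.05, 0.0};
  Pid yaw_pid{1.0, 0.05, 0.0};
  double des_pitch = 0.0, des_yaw = 0.0;  // commanded attitude

  auto next = std::chrono::steady_clock::now();
  while (true) {
    const ImuSample imu = ReadImu();
    // One PID per axis: position + rate error -> torque command.
    SendTorque(1, pitch_pid.Update(des_pitch - imu.pitch, -imu.pitch_rate, kDt));
    SendTorque(2, yaw_pid.Update(des_yaw - imu.yaw, -imu.yaw_rate, kDt));

    next += std::chrono::microseconds(2500);  // 1/400 s
    std::this_thread::sleep_until(next);
  }
}
```

Here’s the video proof!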
To date, I’ve used the moteus controllers exclusively for joints in dynamic quadrupedal robots. However, they are a relatively general purpose controller when you need something that is compact with an integrated magnetic encoder. For the v3 of my Mech Warfare turret I’m using the moteus controllers in a slightly new configuration, with gimbal motors, one for each of the pitch and yaw axes.
Gimbal motor theory and current sensing
From an electrical perspective, gimbal motors are not all that different from regularly wound brushless outrunners. The primary difference is that they are wound with a much higher winding resistance. That enables them to be driven with a much lower current, at the expense of a lower maximum angular velocity. In this case, I’m using the GM3506 from iFlight, which has a winding resistance of 6 ohms, resulting in working currents on the order of 2A maximum.
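As a rough sanity check (my arithmetic, with an assumed effective drive voltage rather than anything from a datasheet): if the controller applies around 12V across a 6 ohm winding, then I = V / R = 12V / 6Ω = 2A, which is where a working-current figure in that neighborhood comes from.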
The moteus controllers are designed to drive motors with phase currents in the 20-60A range. To operate in current control mode, they use a current shunt resistor connected to a dedicated amplifier for each phase. The current sensing resistor determines the range of currents that can be accurately sensed. The resistor that the controllers have by default measures 0.5 milliohms, which gives a reasonable tradeoff between accuracy and power dissipation for 60A operation. However, for 2A operation, it results in pretty low resolution current feedback. Thus for this application, I substituted in 5 milliohm current shunts:
Removing the pre-installed shunts
New shunts installed
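To put some numbers on that tradeoff: the voltage the amplifiers actually get to measure is just I × R. At 60A, a 0.5 milliohm shunt produces 60 × 0.0005 = 30 mV, but at 2A it produces only 1 mV, leaving very little usable signal after amplification and ADC quantization. With 5 milliohm shunts, 2A produces 10 mV, and the extra dissipation is negligible at these currents (I²R = 2² × 0.005 = 20 mW per phase).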
Control modes and noisy velocity
The other potential challenge is that the velocity signal that can be derived from the AS5047 absolute magnetic encoder in the moteus controller is relatively noisy when operated with no gearbox reduction. That limits how much derivative gain can be used in a position control loop. You could get around that by incorporating more plant knowledge in a filter or state observer, but that shouldn’t be necessary here.
Fortunately for this application, I won’t be controlling position based on the absolute magnetic encoder, but instead based on the inertial measurement unit in the primary controller. The absolute magnetic encoder will be used solely for performing the DQ transform to implement torque control, for which the noise in velocity is irrelevant.
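For reference, the encoder’s remaining job is just to supply the electrical angle for that transform. A textbook Clarke-plus-Park implementation looks roughly like this (not moteus’s actual code):

```cpp
#include <cmath>

struct DqCurrents { float d; float q; };

// Textbook Clarke + Park transform: two measured phase currents plus
// the electrical angle (from the magnetic encoder) in, rotor-frame
// d/q currents out.  For a balanced motor, ic = -ia - ib.
DqCurrents PhaseToDq(float ia, float ib, float theta_e) {
  // Clarke: three phases to a stationary two-axis frame.
  const float alpha = ia;
  const float beta = (ia + 2.0f * ib) / std::sqrt(3.0f);
  // Park: rotate into the frame locked to the rotor's electrical angle.
  const float s = std::sin(theta_e);
  const float c = std::cos(theta_e);
  return {alpha * c + beta * s,     // d: along the rotor flux
          -alpha * s + beta * c};   // q: torque-producing component
}
```

Noise in the measured angle shows up here only as a small rotation error, not as an amplified derivative, which is why the velocity noise doesn’t matter for torque control.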
Next up is making these controllers actuate the turret.
Another of the tasks I’ve set for myself with regards to future Mech Warfare competitions is redesigning the turret. The previous turret I built had some novel technical features, such as active inertial gimbal stabilization and automatic optical target tracking; however, it had some problems too. The biggest one for my purposes now was that it still used the old RS485 based protocol and not the new CAN-FD based one. Second, the turret had some dynamic stability and rigidity issues. The magazine consisted of an aluminum tube sticking out of the top, which made the entire thing very top heavy. The 3d printed fork is the same one I had made at Shapeways 5 years ago. It is amazingly flexible in the lateral direction, which results in a lot of undesired oscillation if the base platform isn’t perfectly stable. I’ve learned a lot about 3d printing and mechanical design in the meantime (but of course still have a seemingly infinite amount more to learn!) and think I can do better. Finally, cable management between the top and bottom was always challenging. You want to have a large range of motion, but keeping power and data flowing between the two rotating sections was never easy.
The legacy turret
My concept with this redesign is twofold: first, make the turret basically an entirely separate robot with no wires connecting it to the main robot, and second, try to use as many of the components from the quad A1 as I can to demonstrate their, well, flexibility. Thus, this turret will have a separate battery, power distribution board, raspberry pi, pi3 hat, and a moteus controller for each axis of motion. These are certainly overkill, but hey, the quad A1 can carry a lot of weight.
The unique bits will be a standalone FPV camera, another camera attached to the raspberry pi for target tracking, a targeting laser, and the AEG mechanism, including a new board to manage the firing and loading functions.
While not its primary purpose, I still plan on entering my walking robots in Mech Warfare events when I can. In that competition, pilots operate the robots remotely, using FPV video feeds. I eventually aim to get my inertially stabilized turret working again, and once it is, I would like to be able to overlay the telemetry and targeting information on top of the video.
In our previous incarnation of Super Mega Microbot, we had a simple UI which accomplished that purpose, although it had some limitations. Being based on gstreamer, it was difficult to integrate with other software. Rendering things in a performant manner on top of the video was certainly possible, although it was challenging enough that in the end we did nothing but render text, as that didn’t require quite the same extremes of hoop jumping. Unfortunately, that meant things like the targeting reticle and other features were just ASCII art carefully positioned on the screen.
Further, we weren’t rendering just any video, but video arriving over our custom transport layer. Unfortunately, it was challenging to get gstreamer to keep rendering frames even when no video was coming in. That made it impossible to display the other data that was arriving, like robot telemetry.
My new solution is to use ffmpeg to render video to an OpenGL texture, which can then be displayed in the background of the Dear ImGui control application I mentioned previously. This turned out to be more annoying than I had anticipated, mostly because of my lack of familiarity with recent OpenGL and the obscureness of the ffmpeg APIs. However, once working, it is a very pleasant solution. ffmpeg provides a simple library interface with no inversion of control challenges, and it can render nearly anything (it is also what gstreamer was using under the hood anyway).
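To give a flavor of what’s involved, here is a heavily condensed sketch of the decode-and-upload path (error handling, cleanup, and pixel-format subtleties all omitted; a GL context must already be current; and this is a sketch, not the actual wrapper code):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}
#include <GL/gl.h>
#include <cstdint>
#include <vector>

struct VideoTexture {
  AVFormatContext* fmt = nullptr;
  AVCodecContext* codec = nullptr;
  SwsContext* sws = nullptr;
  int stream = -1;
  GLuint texture = 0;

  void Open(const char* url) {
    avformat_open_input(&fmt, url, nullptr, nullptr);
    avformat_find_stream_info(fmt, nullptr);
    const AVCodec* decoder = nullptr;
    stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
    codec = avcodec_alloc_context3(decoder);
    avcodec_parameters_to_context(codec, fmt->streams[stream]->codecpar);
    avcodec_open2(codec, decoder, nullptr);
    glGenTextures(1, &texture);
  }

  // Pump packets through the decoder; when a frame emerges, convert
  // it to RGB and replace the texture contents.
  void Update() {
    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
      const bool ours = (pkt->stream_index == stream);
      if (ours) { avcodec_send_packet(codec, pkt); }
      av_packet_unref(pkt);
      if (ours && avcodec_receive_frame(codec, frame) == 0) {
        Upload(frame);
        break;
      }
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
  }

  void Upload(const AVFrame* frame) {
    // Convert whatever the decoder emitted (often YUV) into RGB.
    sws = sws_getCachedContext(
        sws, frame->width, frame->height,
        static_cast<AVPixelFormat>(frame->format),
        frame->width, frame->height, AV_PIX_FMT_RGB24,
        SWS_BILINEAR, nullptr, nullptr, nullptr);
    std::vector<uint8_t> rgb(frame->width * frame->height * 3);
    uint8_t* dst[1] = {rgb.data()};
    const int dst_stride[1] = {frame->width * 3};
    sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
              dst, dst_stride);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame->width, frame->height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, rgb.data());
  }
};
```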
I ended up writing a bunch of simple wrappers around GL and ffmpeg to make them easier to manage:
What I’m planning on using, and what I’ve tested with, is just a USB FPV receiver and an off the shelf FPV transmitter. They are semi-standard at Mech Warfare events, so at least I’ll succeed or fail with everyone else. The capture card just presents a 640×480 mjpeg stream at 30fps, which ffmpeg has no problem dealing with:
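Assuming the receiver enumerates as a V4L2 device (the /dev/video0 path here is just an example and will vary by machine), a quick way to sanity check such a stream outside of any custom code is ffmpeg’s own player:

```
ffplay -f v4l2 -input_format mjpeg -video_size 640x480 -framerate 30 /dev/video0
```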
Back when I was getting Super Mega Microbot “Junior” ready for Maker Faire Bay Area 2019, I made it minimally self sufficient through a quick hack: adding some PVC pipe that allowed it to manipulate its feet into a known good position while the robot was safely up in the air.
This worked, but had a number of obvious disadvantages. For one, it looked ugly! Second, the machine couldn’t squat down very far before getting stranded on the “resting legs”. I’ve finally gotten around to doing at least a first attempt at something better!
The basic problem is that normal gaits result in having the feet positioned under the robot. Then, if you want to sit down, you can’t do it because your feet are in the way. What I’ve done for now is have the robot take some steps to widen its stance so that the feet are out of the way, then sit down. Standing up is the inverse. First the feet are positioned in the widened stance, lowered until the machine is at the correct height, then some steps are taken to get into the normal walking gait stance.
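The sequencing amounts to a tiny state machine, something like this sketch (illustrative only, not the actual gait code):

```cpp
// Sitting widens the stance first so the feet clear the body;
// standing is the same thing in reverse.
enum class Stage {
  kWalkingStance,  // feet under the robot, normal gait
  kWideStance,     // feet stepped outward, clear of the body
  kLowered,        // chassis resting on the ground
};

// Sit: step outward, then lower.
Stage NextStageSitting(Stage s) {
  switch (s) {
    case Stage::kWalkingStance: return Stage::kWideStance;
    case Stage::kWideStance:    return Stage::kLowered;
    default:                    return s;
  }
}

// Stand: raise back up to height, then step inward.
Stage NextStageStanding(Stage s) {
  switch (s) {
    case Stage::kLowered:    return Stage::kWideStance;
    case Stage::kWideStance: return Stage::kWalkingStance;
    default:                 return s;
  }
}
```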
At this point, even this is somewhat of a hack, because the gait generation code is only minimally modified from what I had back in 2014. When it gets modernized, this standing up procedure will need to be re-implemented, but for now this will let me both lose the ugliness and experiment with things like bigger jumps.
After a concerted push, I managed to get Super Mega Microbot “Junior” walking, for all of 15 minutes, then packed it up and went off to compete in Maker Faire. Needless to say, with that much testing, I wasn’t expecting stellar results, and I wasn’t disappointed. However, I did learn a lot. Here are some of the things that went wrong:
Gimbal and Turret EMI
For this new revision of SMMB, I updated the gimbal board to use RS485 and support the 5S system voltage. I tested it some, but apparently not enough. While I observed no problems during Thursday or Friday’s testing at the site, during the first Saturday match, after firing the gun a few times, the gimbal went into a fault state and stopped applying power. The control scheme for SMMB relies on the turret being operational, so this not only made it impossible to aim, but also made it nearly impossible to drive.
I did manage to connect to the turret manually after the match to diagnose the problem, and discovered that the IMU had stopped communicating over I2C. I had some half-baked logic to try and recover from that, but it was broken, and the only effective way to recover was to power cycle the whole unit.
Unfortunately, my matches on Saturday were all close together, so I didn’t have enough time to prepare a fix in between. Thus, each match I got one or two shots off, and then the machine as a whole became effectively inoperable.
Likely, something in the new board, either in the layout or the decoupling capacitors, results in worse electrical noise than the old one when the AEG is fired. This shouldn’t be too hard to resolve, either through tweaking the layout, or perhaps moving the AEG control to an entirely separate board.
Walking and Leg Robustness
When I got the gearbox system walking for the first time, I quickly noticed that one or more of the timing belts connecting the lower legs to their motors had a propensity to skip a tooth. Since there is no position sensing directly on the lower leg, when that occurs the gait logic just has the incorrect position, causing the robot to fall over pretty soon afterwards. I had never observed any tooth skipping in my previous direct drive leg, even when jumping for over an hour. The first difference I thought might be causing the problem was the lower pulley print, which I had initially done at a 0.15mm layer height but which was at 0.2mm in the gearbox revision. So I printed a full set at 0.15mm and swapped them in. However, that didn’t fix it, and I didn’t have any more time for mechanical solutions, so I tried to work around it by tuning the gait to be as gentle as possible.
Unfortunately, I wasn’t really able to come up with a gait that could both effectively move on the foam mat in the arena and not occasionally result in belt skips. Also, as I went along, the skips got worse and worse. I tried upping the tension on the belt, lowering the tension on the belt, and walking with both a straighter leg and a more bent leg; nothing much made a difference.
Finally, before my third match, I did more examining and realized that the shoulder joint was deforming significantly under the tension of the belt, resulting in the timing belt only contacting maybe half the pulley or less, with the rest dangling off. Also, the pulley was out of alignment, so the belt was probably only effectively making contact over an even smaller patch. Unfortunately, there was very little I could do about that aside from hope for the best. As it turns out, that problem, while significantly limiting the gaits I could use, didn’t end my run.
Shoulder failure
Gearbox Outer Housing Strength
The entire gearbox effort was undertaken somewhat at the last minute, and with little thought to analysis or design for structural integrity. At best, I made a gut check of “that’ll probably work”, and at worst, I gave it no thought at all.
It was an instance of the latter that caused the final and fatal failure in SMMBJ at Maker Faire. In the gearbox chassis design, the lateral servos themselves support the entire weight of the robot. Those gearbox servos transmit the entire load from the front plate of the servo, through the outer housing, then to the back plate, and finally to the chassis itself. The problem in this case is that the outer housing is a 1.5mm thick (or rather thin) PETG shroud printed with layer lines perpendicular to the primary load.
On reflection, then, it was not too surprising that a 20lb robot walking around was enough to cause a motor’s shroud to separate at the layer lines, which is what ended SMMBJ’s run. I had a spare motor and could have replaced it; however, it would likely have failed shortly afterwards too, and the shoulder was about to rip itself apart due to the leg tension problems mentioned above. Thus I turned it into a “static display” and switched to a “show and tell” mode for the rest of the event.
Crippled SMMBJ
Award
Despite those problems, the kind organizers at RTeam awarded me the “Most Innovative” award for trying to push the limits!
Fixing the problems
Clearly, all of these issues can be fixed in a variety of ways, both easy and hard. Keep coming to see my attempts!
Well, Mech Warfare at Maker Faire 2019 has come and gone. Maker Faire was a really awe-inspiring event, and RTeam did an excellent job organizing the Mech Warfare competition. There were something like 13 teams with moderately functioning mechs who competed across the 3 days.
Super Mega Microbot “Junior”
My entry, Super Mega Microbot Junior, did manage to walk a bit in 3 matches, but had a previously unseen failure in the turret system that rendered it inoperable a short while into each match. At the end of the 3rd match, one of the leg joints sheared off, and some of the other 3D printed parts were about to fail as well, so I declared it unrepairable at that point.
In the arena pre-failure
I’ll write up a more detailed lessons learned and link to the videos of my matches when they get posted. The videos aren’t all that interesting, given that I only scored perhaps 2 hits across all 3 matches. 😉
Alert! I’m at Maker Faire Bay Area all weekend in the Mech Warfare area in Zone 2 (May 17-19, 2019 for you time travelers from the future). Drop by and say hi!
If you were left in suspense last time, yes, the robot can walk! Getting it to do so in a minimal way was relatively painless. What I found, which hadn’t happened in earlier iterations, is that many types of dynamic motions would cause the lower leg belts to jump a tooth. Needless to say, this was nearly universally fatal, as there is no direct position sensing of the lower leg. This robot is heavy enough that my simulacrum 3d-printed timing belt pulleys just don’t cut it.
Well, there wasn’t enough time to actually get better pulleys, so I just tuned the walking to be slow and gentle enough that nothing went awry. Here’s the first bit of a 13 minute video I took of it walking around and shooting targets.
Now that that was over with, I had a few minor things to finish up before heading out to Maker Faire. I made some covers for the motors to keep BBs out.
And I made a bracket so that I could attach the front and rear target panels to the shoulder joints:
And here’s a glamour shot of the whole thing in fighting form!
Now that it was all ready, time to take it all back apart and pack it for shipping.