Earlier in April we took Super Mega Microbot out to California to compete in Mech Warfare during Robogames 2016. Thanks to the R-TEAM organizers who made the event happen this year. We were really excited, and the event as a whole went off very well: there were a lot of functional mechs attending, and many of the fights were exciting to watch.
We managed to play 5 official matches in the double elimination tournament, finishing in 3rd place overall. When it worked, SMMB worked really well. Our first loss was a very close battle: the scorekeeping system had us winning by 2, while the judges had us losing by 2. (The scoring system wasn't super reliable, so human judges were also calling hits.) Our second loss came when the odroid's USB bus on SMMB stopped working mid-match, costing us both camera and wifi.
Since our last matches, we tried to improve a number of things; some worked, but not all of them are entirely successful yet:
Faster walking: The new mammal chassis is about twice as fast as the old lizard one, but we didn't get much time to tune it, so we were still one of the slower mechs at Robogames. Also, the shoulder bracket, even in its second revision, suffered several partial failures during matches and will need to be rebuilt in metal to be strong enough.
Stabilized camera: The new gimbal stabilized turret actually worked really well. We were able to reliably hit moving targets from the full length of the arena while in motion. It still has room for improvement, but overall was very reliable.
5GHz Video transport: We updated our video to use a custom protocol over multicast 5GHz wifi, so that we could completely control the amount of link layer retransmissions. When functional, this worked very well: we were able to get 720p video with 200ms latency, even in the presence of significant interference. However, adding the external 5GHz wifi card to our odroid seems to have made the USB bus somewhat unstable overall, and one of our matches ended prematurely when the entire USB port died, taking our camera and wifi with it.
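The reason multicast matters here: 802.11 never acknowledges or retransmits multicast frames at the link layer, so the application protocol gets complete control over what is resent and when. As a minimal sketch of the receiving side (the group address and port below are made up, and the real protocol's packetization on top is omitted):

```python
import socket
import struct

# Hypothetical group and port; the real values live in the mech's config.
MCAST_GROUP = '239.255.0.1'
MCAST_PORT = 5000

# Create a UDP socket and join the multicast group.  Because these
# are multicast frames, the wifi link layer will never retry them;
# any retransmission policy is up to the protocol on top.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', MCAST_PORT))
mreq = struct.pack('4sl', socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, _ = sock.recvfrom(65536)
    # ... hand the packet to the video depacketizer/decoder ...
```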
Thanks to Kevin from R-TEAM, we managed to capture overhead video of all our matches, and have the video as seen on our operator console for each official match as well.
Well, that took longer than I expected! I last showed some progress on a gimbal stabilized turret for Mech Warfare competitions more than six months ago. Due to some unexpected technical difficulties, it took much longer to complete than I had hoped, but the (close to) end result is here!
Here’s a quick feature list:
2 axis control: Yaw and pitch are independently actuated.
Brushless: Each axis is driven by a brushless gimbal motor for high bandwidth no-backlash stabilization.
Absolute encoders: Each axis has an absolute magnetic encoder so that accurate force control can be applied to each gimbal, even at zero speed.
Fire control: High current outputs are present for driving an AEG motor and an agitator motor, along with a low current output for a targeting laser.
7V-12V input: Supports 2S-3S lipo power sources.
12V boost: When running from 2S lipo, it can boost the gimbal drive up to 12V for additional stabilization authority.
HerkuleX protocol: The primary control interface uses the native Dongbu HerkuleX protocol; support for other UART based protocols that work at 3.3V CMOS levels should be easy to add. (A sketch of the HerkuleX wire format follows this list.)
USB debugging support: A USB port is present to return high rate debugging information and allow configuration and diagnostics to be performed.
BMI160: This IMU is used as the primary source of inertial compensation data. The board hardware supports a second IMU, to be placed on the main robot, but the firmware does not yet support that configuration.
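As an aside on the HerkuleX item above, the Dongbu wire format is simple enough to sketch in a few lines. This is an untested illustration based on my reading of the datasheet, not the gimbal's actual implementation; treat the command value as an assumption:

```python
def herkulex_packet(servo_id, cmd, data=b''):
    """Frame a HerkuleX packet: 0xFF 0xFF header, total size, servo
    ID, command, two XOR-based checksums, then the payload."""
    size = 7 + len(data)
    checksum1 = size ^ servo_id ^ cmd
    for b in data:
        checksum1 ^= b
    checksum1 &= 0xFE
    checksum2 = ~checksum1 & 0xFE
    return bytes([0xFF, 0xFF, size, servo_id, cmd,
                  checksum1, checksum2]) + data

# Example: a STAT (0x07) request to servo ID 0xFD, the Dongbu
# factory default ID.
pkt = herkulex_packet(0xFD, 0x07)
```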
This gimbal design contains three custom boards, a breakout board for the BMI160 IMU, a breakout board for the AS5048B magnetic encoder sensor, and the primary board which contains the rest of the logic.
BMI160 Breakout
The first board is simple; it is basically just a breakout board for the BMI160 inertial sensor. It provides the BMI160 itself, some decoupling capacitors, and a 0.1 inch 4 pin connector for the I2C bus.
I had these prototypes made at MacroFab which I highly recommend as a great provider of low-cost turnkey PCB assembly.
AS5048B Breakout
This board, like the BMI160 breakout, just has decoupling capacitors, the chip itself, and connectors. It additionally has mounting holes designed to fit onto the 3508 gimbal motor. It was fabbed at OSH Park and hand-assembled.
Gimbal control board
The primary gimbal control board contains most of the system functionality. It is designed to mechanically mount directly above the yaw gimbal motor, as the yaw absolute magnetic encoder is in the center on the underside of the board.
This prototype was also built at MacroFab, who did an excellent job with this much more complex assembly.
The connectors and features are as follows:
Power and Data: A 4 pin JST-XH connector in the upper right brings in power and data from the main robot.
Debug USB: A debugging protocol is available on this micro-USB port.
Camera USB: Two 4 pin JST-PH connectors provide a convenience path for the camera USB. The turret’s camera connects to the top connector, and the main robot connects to the side facing connector.
I2C peripherals: Three 4 pin JST-ZH connectors have identical pinouts and connect to external I2C peripherals. These are used for the primary IMU, the pitch absolute magnetic encoder, and the optional secondary IMU.
Arming switch: This switch is connected directly to the enable pin on the MC33926, and is also connected to an input on the STM32F411.
Programming connector: The 6 pin JST-PH connector has the same pinout as Benjamin Vedder’s VESC board, and can program and debug the STM32F411.
Weapon connector: A 2×4 0.1 inch pin header has power lines for the AEG drive, the agitator drive and the laser. It has an extra row of pins so that a blank can be used for indexing.
Gimbal connectors: Two 3 pin 0.1 inch connectors power the yaw and pitch gimbal brushless motors.
The firmware was an experiment in writing modern C++11 code for the bare-metal STM32 platform. Each module interacts with others through std::function-like callbacks, and the entire system is compiled both for the target and for the host, so that unit tests can be run on the host. Dynamic memory allocation is this close to being disabled: it turned out to be necessary for newlib's floating point formatting routines, which allocate a chunk of memory the first time they are used. Otherwise, no dynamic memory is used at all.
It relies on a CubeMX project template for this board. Most of the libraries CubeMX provides have too little flexibility to be used for this application, so much of the bit twiddling is re-implemented in the gimbal firmware. CubeMX is, however, great for configuring the clock tree and pin alternate functions, especially in a complex project like this.
Both configuration and telemetry rely on a templated C++ visitor pattern to perform compile time reflection over mostly arbitrary C++ structures. Any module can register a structure to be used for persistent configuration. Those structures can be changed through the debugging protocol, and can be written to and read from flash at runtime. Each module can also register as many telemetry structures as necessary. These can be emitted over the debugging protocol either at fixed intervals, or whenever they update.
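In the firmware this traversal is resolved at compile time with a templated visitor method on each structure. As a loose Python analogue of the idea (all names here are invented for illustration), the same visit over a structure's fields can serve both the configuration and telemetry paths:

```python
from dataclasses import dataclass, fields

# Stand-in for one module's registered structure.
@dataclass
class PitchConfig:
    kp: float = 4.0
    ki: float = 0.0
    kd: float = 0.05

def visit(struct, handler):
    # The C++ version does this with templates at compile time;
    # Python gets the same effect with runtime reflection.
    for f in fields(struct):
        handler(f.name, getattr(struct, f.name))

config = PitchConfig()

# One traversal enumerates fields for the configuration tree...
visit(config, lambda name, value: print('%s = %r' % (name, value)))
# ...and the identical traversal is what a telemetry emitter walks.
```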
There are three possible modes, the first of which is what I call “open-loop”, and is based on the same principles as the BruGi brushless gimbal, where no absolute motor feedback is available. In that mode, a PID controller operates with the axis error as the input, and the output is the actual phase position of the BLDC controller. In this mode, the integral term does most of the work in stabilization, so the overall performance isn’t great.
The second mode still uses a PID controller, but now the output is an offset from the BLDC phase necessary to hold the current position as measured by the absolute encoders. This effectively makes the output a direct, although of course non-linear, mapping to the force applied to the motor. This mode results in much better overall performance and is easier to tune.
Finally, there is a third debugging mode that lets you just hard command specific BLDC phases. This is useful for calibrating the mapping between BLDC phase and absolute encoder phase.
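To make the second mode concrete, here is a sketch of its inner loop in Python, with invented names and scaling, ignoring phase wraparound and the calibration that the third mode exists to produce:

```python
class Pid:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return (self.kp * error + self.ki * self.integral +
                self.kd * derivative)

# Electrical revolutions per mechanical revolution; this value is a
# placeholder, not the gimbal motor's actual pole count.
PHASE_PER_REV = 11.0

def encoder_mode_update(pid, desired_angle, measured_angle, dt):
    # The phase that would hold the rotor exactly where the absolute
    # encoder says it is right now...
    hold_phase = measured_angle * PHASE_PER_REV
    # ...plus a PID-derived offset, which acts (non-linearly) like a
    # commanded force on the axis.
    offset = pid.update(desired_angle - measured_angle, dt)
    return hold_phase + offset
```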
The debugging protocol is partially human readable, but telemetry data is encoded in the same binary format as used elsewhere in the mjmech codebase. tview is the debugging application we use to read that data, as well as configure and control the overall system.
The bottom pane just has a serial console, where you can send arbitrary things over the virtual serial port. tview directly supports relatively few commands from the debugging protocol, and for instance has no UI to operate the stabilizer or fire control, so for now these are done by hand in that window.
The left pane has two tabs, one with a configuration tree and the other with a telemetry tree. The configuration tree shows all structures which were registered as configurable, and allows you to change them in the live system. The telemetry tree shows all structures registered as telemetry structures, and reports their values live as the system is operating.
The right pane has a live plot window where any of the values in the telemetry tree can be plotted versus time. It is just an embedded matplotlib plot, so all the normal plot interaction tools are available, plus some from mjmech’s tplot, like the ability to pan and zoom the left and right axes independently.
And last but not least, here is a short video demonstrating the turret stabilizing a camera and firing some blanks at a target as our mech walks around.
I have some incremental progress to report on various parts of Super Mega Microbot. First, I have a draft fully assembled leg for a mammal walking configuration. It is mostly just the stock Dongbu brackets, with a custom Shapeways print at the final joint holding a standoff and rubber stopper.
Second, I’ve been working on a gimbal stabilized turret. I have video from a prior incarnation below:
And a first draft of a 3D printed turret bracket that permits a full range of motion of the turret:
We just concluded the first ever Boston Mech Warfare Expo this weekend, held at Jaybridge Robotics in Cambridge. We had a lot of exciting mech fighting action, learned a lot, and even had a fair amount of exciting repairs! There are many hours of video I need to sort through, but in the meantime, here are some photos:
To act as a forcing function in our development of Super Mega Microbot, we put together a Mech Warfare style qualification video, showing it driving around for more than 5 minutes shooting at some targets. Here it is!
This video shows some gaits for our Mech Warfare entrant, tentatively named “Super Mega Microbot”. First there is a slow statically stable gait, then a faster two leg at a time gait.
Technically, it has been walking for a couple of weeks, but this is the first time with the actual aluminum machined chassis plates, so that it makes for a nice video. All the previous testing has been done with some polycarbonate plates that I cut out by hand. We haven’t mounted the onboard computer or turret yet, but that is coming soon. In their place is an extra lipo battery to make the center of gravity more reasonable. This gait was driven by a PC over the serial cable seen in the frame.
Mikhail and I are considering fielding an entry into Mech Warfare this season. To evaluate different geometries and servo models, I tried setting up a simple simulation environment which would let us experiment without having to have a large variety of hardware on hand. While we’re not finished, I have a minimal first proof of concept working now, which I’ll describe briefly.
The simulator I’m using is Gazebo. It integrates several different rigid body physics engines, a 3D visualization environment, and a relatively simple file format for describing the configuration of robots. It uses a client server publish-subscribe model, where a central server maintains the physics simulation and any number of clients can connect to control or monitor individual models.
For gait generation, I’m starting with PyPose, which is a pose sequencer and inverse kinematics engine for the arbotiX controller, an open source Arduino compatible controller for Dynamixel servos. Specifically, the NUKE, or Nearly Universal Kinematics Engine, contains routines for generating a couple of different gait patterns for walking robots with lizard style legs.
The nominal workflow I wanted was to operate PyPose on a synthetic robot, simulated by Gazebo. I had to add a couple of pieces of software to make that happen.
To start, Gazebo doesn’t really have a documented protocol for interacting with its publish subscribe network. The primary way clients use it currently is through ROS. To make this work, I wrote up a simple python client library which implements the publish subscribe protocol using eventlet. This allows python applications to both subscribe to topics, as well as publish them.
Next, PyPose is very specialized to the Dynamixel servos and the arbotiX controller. Additionally, from a gait generation perspective, it is composed of two effectively independent parts. The first is an ahead-of-time pose sequencer and configuration tool written in python. The second is a generated Arduino sketch which implements the actual gait when run on an AVR controller. Into the former, I hacked a simple abstraction layer and connected it to the python Gazebo library. For the latter, I actually ported the generated C code back into python in order to test the generated gaits in simulation.
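The abstraction layer amounts to something like the following, with invented names: PyPose's sequencer calls a small driver interface, and either real servo hardware or the Gazebo client can sit behind it.

```python
class ServoDriver(object):
    """The interface PyPose's pose sequencer is pointed at."""
    def set_positions(self, positions):
        # positions: dict mapping servo id -> commanded angle
        raise NotImplementedError

class GazeboServoDriver(ServoDriver):
    """Backs the interface with the Gazebo client sketched above;
    publish_joint_command is a hypothetical helper, not a real
    Gazebo API."""
    def __init__(self, client):
        self._client = client

    def set_positions(self, positions):
        for servo_id, angle in positions.items():
            self._client.publish_joint_command(servo_id, angle)
```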
At the moment, I have only a rough proof of concept… the code is hacky, I haven’t yet simulated the physical characteristics of any particular servo, the physics model doesn’t seem quite right yet, and the Gazebo model consists of nothing but jointed rectangles with no textures. Despite that, in the video below, you can still see both PyPose manipulating the model, and the python gait generator operating it.
Next up, I’ll be trying to polish off the rough edges, and then try to evaluate the robot configuration variants I actually wanted to validate.