
quad A1 stand-up sequence part N

I’ve worked through a number of different iterations of the stand-up sequence for the quad A1 (2019-05, 2019-09). The version I’ve been using for the last 6 months or so works pretty well, but because it drags the legs along the ground to get them into position, it can have problems on surfaces with a lot of traction, like EVA foam, besides being needlessly noisy.

To make things a bit more robust, I’ve now tweaked the startup routine so that the shoulders lift the legs clear of the ground before positioning them, then lower them back down into place. This makes the stand-up routine much more likely to succeed on just about any surface.
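In rough code form, the revised routine is just a small state machine that waits for each phase to finish before moving on. Here’s a minimal sketch of the idea; the state names and structure are illustrative, not the actual quad A1 source:

// Illustrative sketch only: the state names and structure here are
// hypothetical, not the actual quad A1 source.
enum class StandupState {
  kLiftLegs,      // shoulders rotate to pull all feet clear of the ground
  kPositionLegs,  // legs swing to the standing pose while off the ground
  kLowerLegs,     // shoulders lower the feet back down into contact
  kDone,
};

// Advance to the next phase only once the current motion has finished.
StandupState Update(StandupState state, bool motion_complete) {
  if (!motion_complete) { return state; }
  switch (state) {
    case StandupState::kLiftLegs:     return StandupState::kPositionLegs;
    case StandupState::kPositionLegs: return StandupState::kLowerLegs;
    case StandupState::kLowerLegs:    return StandupState::kDone;
    case StandupState::kDone:         return StandupState::kDone;
  }
  return state;
}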

Simple remote controller

For some upcoming work, I needed to drive the quad A1 around without being tethered to a computer. To date, my control mechanisms have been:

Note that both of those methods involve being tethered to a computer, which makes it hard to be mobile. As a possibly short-term solution to this problem, I went ahead and got a Bluetooth “gaming” controller for my phone (non-affiliate Amazon link):

It turns out the web interface I updated previously works just fine on the phone, as does the JavaScript “gamepad” interface. The only glitch was that this particular gamepad requires you to manually sweep the analog sticks through their full range of motion at each power-on before they start providing correct values. Until then, they act as digital inputs. That can be a little disconcerting when the robot starts running ahead at full speed!

Updated web UI

I first wrote about the very, very simple web UI for the quad A1 some time ago. That UI served its purpose, although it was largely inscrutable to anyone but myself, as the primary data display was a giant wall of unformatted JSON that filled most of the page. It was also pretty easy to accidentally pick the wrong mode, cause the robot to jump instead of walk, or cut power to everything.

In advance of some wider testing, I went ahead and made a first revision to it. It has the same basic structure as before, in that the robot serves up a single HTML, CSS, and JS file, and provides a websocket interface for ongoing command and control. The change here is just that the HTML and JavaScript are now at least minimally usable:

Clearly I’m not going to win any design awards… but since all the styling is done with CSS, it shouldn’t be too hard to make something that looks arbitrarily nice without a whole lot of effort. As a bonus, I also added some minimal keyboard bindings for movement, so the joystick isn’t required if you don’t care about precise or aggressive maneuvers.

micro-BOM management

I’ve now built 3 or 4 complete quad A1-style robots, depending upon how you look at it. Each was somewhat of a one-off, incrementally modified over time as I discovered failure modes and improved the design. Before starting to serially build quad A1-style robots, I wanted to get a better understanding of how much actually goes into making one. The quad A1 has a fair number of sub-assemblies, custom PCBs, harnesses, and assembly steps that go into its production. During previous builds, I kept running into problems where I would unexpectedly run out of some component, fastener, or raw material, and then have to wait out its lead time before I could continue.

I’m currently using a simplified Kanban inventory tracking system, where I re-order parts and components when their inventory level reaches a critical threshold. However, given the discontinuous nature of my production, setting those levels is really hard. I didn’t know, for instance, just how many M3x8 bolts were necessary between the chassis, legs, and all the other sub-assemblies, since I had never built multiples of all of them at one time before.

Incremental improvement

To make my life a little easier, I’ve started on a “micro-BOM” management solution. Currently it is just a simple C++ application that reads a tree of JSON5 files on disk (JSON5 mostly so that I can use the obvious comment style). Each JSON5 file describes a single stock item: what sub-components it consists of, and what it takes to acquire it in terms of cash, assembly time, 3D printer time, ordering information, and assembly procedures.

{
  "name" : "quada1_leg_lower",
  "version" : "20200701.0",
  "uuid" : "ad3f6ce2-aa13-42a2-8a4c-2d4b4e799ab0",
  "components" : [
    { "name" : "quada1_leg_foot", "qty" : 1 },  // the squash ball
  ],
  "resources" : [
    { "mk3s" : 7.5, "petg_black" : 71.54, "hours_tech1" : 0.1 },
  ],
  "procure" : {
    "print" : "20200625-lower_leg_and_bracket.3mf",
  },
}

That tool can then follow the hierarchy to count up how many stock units of each type are necessary for any top level item, and how many “resources” are necessary to make that happen.
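As a sketch of how that rollup could work (using hypothetical, simplified structures rather than the tool’s actual ones), it amounts to a recursive multiply-and-accumulate over the tree:

#include <map>
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for the tool's real structures.
struct Component { std::string name; double qty; };
struct StockItem {
  std::vector<Component> components;
  std::map<std::string, double> resources;  // e.g. {"mk3s", 7.5}
};

// Recursively accumulate how many of each stock unit, and how much of
// each resource, are needed to produce 'qty' of the item named 'name'.
void Rollup(const std::map<std::string, StockItem>& items,
            const std::string& name, double qty,
            std::map<std::string, double>* stock_totals,
            std::map<std::string, double>* resource_totals) {
  (*stock_totals)[name] += qty;
  const StockItem& item = items.at(name);
  for (const auto& [resource, amount] : item.resources) {
    (*resource_totals)[resource] += amount * qty;
  }
  for (const Component& component : item.components) {
    Rollup(items, component.name, component.qty * qty,
           stock_totals, resource_totals);
  }
}

Here items.at() assumes every referenced component has a corresponding JSON5 file; presumably the real tool would report a missing file as an error.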

Thus after entering in ~85 separate “stock units”, I now know that the current version of a full quad A1 requires approximately 300 hours of MK3s printer time between all of its assemblies and sub-assemblies, 132 heat set inserts, and 310 fasteners of various types (assuming the qdd100s are pre-assembled)!

Next steps

Given that I have relatively deep sub-assembly trees, and that I expect to be keeping some stock of all of them, I want to make this tool inventory-aware. That way it knows I already have N leg sub-assemblies in stock or Y qdd100s already built, and thus can tell me how many more bolts and such are necessary to build a full robot.
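One way to do that netting, again building on the hypothetical structures sketched above, is to pull whatever is available from inventory at each level and only recurse into components for the quantity that still has to be built:

#include <algorithm>

// Building on the sketch above: pull what we can from inventory at each
// level, and only recurse into components for the quantity that still
// needs to be built.  'inventory' maps stock unit name to quantity on hand.
void NetRollup(const std::map<std::string, StockItem>& items,
               const std::string& name, double qty,
               std::map<std::string, double>* inventory,
               std::map<std::string, double>* to_build) {
  const double from_stock = std::min(qty, (*inventory)[name]);
  (*inventory)[name] -= from_stock;
  const double remaining = qty - from_stock;
  if (remaining <= 0.0) { return; }
  (*to_build)[name] += remaining;
  for (const Component& component : items.at(name).components) {
    NetRollup(items, component.name, component.qty * remaining,
              inventory, to_build);
  }
}

This way a sub-assembly pulled from stock subsumes its own components, so the bolts inside an already-built leg don’t get counted again.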

I’d also like to be able to export the results from this tool to replace the human-readable bom.txt entries sprinkled throughout git. That will make them both more accurate and easier to maintain.

Balancing on estimated terrain

Last time, I described my approach for estimating the terrain under the robot based on the inertial measurement unit and proprioceptive foot feedback. Now, I’ll cover how that is used to balance.

“R” Frame

First, let me explain the “R” or “robot” frame and how it is used. The frames I’ve discussed in this series so far are the “B” frame, which is rigidly attached to the center of the robot body, the “M” frame, which is located at the center of mass and level with the ground, and the “T” frame, which is under the robot and level with the current terrain.

The “R” frame, by contrast, is a purely invented frame that is a rigid transform away from the “B” frame. Its purpose is to allow for (1) the cool-looking inverse kinematic demos that everyone seems so fond of, and (2) mostly global transforms, like this implementation of terrain-based balancing. All of the gait algorithms operate almost exclusively in the R frame, which means that offsets and rotations applied there will affect the balance of the robot during its normal operation.

Using the R frame to balance

Here the algorithm is relatively straightforward: take the center of the T frame, transform it into the M frame, and then move it up by the current average leg height. In 2D, that looks like:

Then the point p_0, as measured in the B frame, gives the desired RB transform offset. It is that simple! That formulation keeps the center of mass over the (0, 0) R frame point, accounting both for offsets in the center of mass and for non-level terrain.
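In code, that computation might look something like the following sketch. It uses Eigen for the transforms; the function and variable names are made up for illustration, and follow the frame conventions from this series (positive Z pointing along gravity in the M frame):

#include <Eigen/Geometry>

// Sketch only: the names here are made up for illustration.  pose_MT
// maps T frame coordinates into the M frame, and pose_BM maps M frame
// coordinates into the B frame.
Eigen::Vector3d BalanceOffset(const Eigen::Isometry3d& pose_MT,
                              const Eigen::Isometry3d& pose_BM,
                              double average_leg_height) {
  // The center of the T frame, expressed in the M frame.
  Eigen::Vector3d p0_M = pose_MT.translation();
  // Move it "up" by the average leg height.  In the M frame, positive Z
  // points along gravity toward the ground, so up is -Z.
  p0_M.z() -= average_leg_height;
  // p_0 as measured in the B frame is the desired RB transform offset.
  return pose_BM * p0_M;
}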

Simulation results

Here’s the robot walking up a relatively steep slope in simulation using the above technique. The purple disc shows the estimated terrain value, while the gray disc shows the gravity-normal plane.

Final steps

In the final post for this work, we’ll test it on the real robot!

Estimating terrain slope

Last time I discussed the challenges when operating the mjbots quad A1 on sloped surfaces. While there are a number of possible ways of tackling this, the approach I’ve gone with for now is to estimate the slope of the terrain under the robot, and use that to determine how to position the center of mass. Here I’ll cover the estimation part of this solution.

On paper, the quad A1 has plenty of information to estimate the terrain under its feet. Between the IMU with its attitude estimator, the proprioceptive feedback from the joints, and the ability to move the feet around, it would be obvious to a human whether the ground underfoot was sloped or level. The challenge is to devise an algorithm that does the same, despite the noise in the IMU, the fact that the feet are not always on the ground, and the fact that the terrain under the robot changes as it moves.

Approach

My basic approach can be summarized in the following flow chart / block diagram:

First, a brief description of the 3 pertinent reference frames:

  • B Frame: The body frame (or B frame) is centered on the robot body and rigidly fixed to it. The proprioceptive system ultimately calculates the position of each of the 4 feet in this frame.
  • M Frame: The CoM frame (or M frame), is centered at the robot’s idle center of mass and oriented such that positive Z points along gravity toward the ground with a heading that is arbitrary at start up, but that tracks the robot’s changing heading.
  • T Frame: The terrain frame (or T frame), is referenced to the M frame at the average height of the legs with a slope that aligns with the average slope of the terrain under the robot.

The algorithm works in roughly the following steps:

  1. First, project all the feet positions into the M frame.
  2. For any in-flight legs, reset the Z value to one calculated from the current TM transform and a 0 T frame Z height.
  3. Fit a plane to these “on-ground” M frame points.
  4. Update the slope of the T frame using this plane, with an exponential filter along the X and Y axes (a code sketch of steps 3 and 4 follows this list).
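Here is what a sketch of steps 3 and 4 might look like, using Eigen and a least-squares plane fit; all of the names and structures are illustrative rather than the real implementation:

#include <Eigen/Dense>
#include <vector>

// Sketch only: fit the plane z = a*x + b*y + c to the M frame foot
// positions, then exponentially filter the slope terms a and b.
struct TerrainSlope { double a = 0.0; double b = 0.0; };

TerrainSlope UpdateSlope(const std::vector<Eigen::Vector3d>& feet_M,
                         const TerrainSlope& previous,
                         double filter_alpha) {
  // Least-squares plane fit: each foot contributes one row of
  // [x y 1] * [a b c]^T = z.
  const int n = static_cast<int>(feet_M.size());
  Eigen::MatrixXd A(n, 3);
  Eigen::VectorXd z(n);
  for (int i = 0; i < n; i++) {
    A.row(i) << feet_M[i].x(), feet_M[i].y(), 1.0;
    z(i) = feet_M[i].z();
  }
  const Eigen::Vector3d coeffs = A.colPivHouseholderQr().solve(z);

  // Exponential filter along the X and Y axes.
  TerrainSlope result;
  result.a = previous.a + filter_alpha * (coeffs(0) - previous.a);
  result.b = previous.b + filter_alpha * (coeffs(1) - previous.b);
  return result;
}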

Properties

This algorithm has the benefit that it will converge on the terrain underneath the robot as long as the feet touch the ground with regularity, which is a more or less necessary condition for a robot supported by its legs. The rate at which the estimate converges can be controlled by the filter constant. Selecting it to be on the same order as the step frequency does a decent job of rejecting spurious noise while still responding in a timely manner to updated terrain.

Next up we’ll see how to use this information to balance, and watch the results in simulation.

Operating on sloped surfaces

Not too long ago, I ran some outdoor experiments, and while piloting the quad A1 around, realized that it wasn’t going to get very far if it was restricted to just flat ground.

Since the control algorithms are completely ignorant of slopes, the center of gravity of the machine can easily get too close to the edge of the support polygon when resting, and similarly it fails to stay balanced over the support line during the trot gait.

To get started tackling this, I stuck a configurable ramp in the simulator:

And yes, it fails just as much on the ramp as it did in real life.