Balancing on estimated terrain

Last time, I described my approach for estimating the terrain under the robot based on the inertial measurement unit and proprioceptive foot feedback. Now, I’ll cover how that is used to balance.

“R” Frame

First, let me explain the “R” or “robot” frame and how it is used. The frames I’ve discussed in this series so far are the “B” frame, which is rigidly attached to the center of the robot body, the “M” frame, which is located at the center of mass and level with the ground, and the “T” frame, which is under the robot and level with the current terrain.

The “R” frame, by contrast, is a purely invented frame that is a rigid transform away from the “B” frame. Its purpose is to allow for (1) the cool looking inverse kinematic demos that everyone seems so fond of, and (2) more practically, whole-robot transforms like this implementation of terrain based balancing. All of the gait algorithms operate almost exclusively in the R frame, which means that offsets and rotations applied there will affect the balance of the robot during its normal operation.

Using the R frame to balance

Here the algorithm is relatively straightforward. The center of the T frame is taken, transformed into the M frame and then moved up by the current average leg height. In 2D, that looks like:

Then, the point p_0 as measured in the B frame gives the desired RB transform offset. It is that simple! That formulation keeps the center of mass over the (0, 0) R frame point, accounting for offsets in the center of mass and for non-level terrain.
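
To make that concrete, here is a minimal 2D sketch of the rule above in Python with numpy. The frame names follow this series, but the transforms, numbers, and variable names are made-up illustrations, not the robot’s actual code:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical example transforms; x forward, z positive toward the ground.
M_from_T_rot, M_from_T_trans = rot(0.15), np.array([0.00, 0.30])
M_from_B_rot, M_from_B_trans = rot(0.02), np.array([0.01, -0.05])
average_leg_height = 0.20  # meters

# Center of the T frame, expressed in the M frame.
p_M = M_from_T_rot @ np.zeros(2) + M_from_T_trans
# Move it up by the average leg height (up is -z here).
p_M = p_M - np.array([0.0, average_leg_height])
# Express the result, p_0, in the B frame: apply the inverse of M_from_B.
p_B = M_from_B_rot.T @ (p_M - M_from_B_trans)
print("desired R->B offset:", p_B)
```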

Simulation results

Here’s the robot walking up a relatively steep slope in simulation using the above technique. The purple disc shows the estimated terrain value, while the gray disc shows the gravity normal plane.

Final steps

In the final post for this work, we’ll test it on the real robot!

Estimating terrain slope

Last time I discussed the challenges when operating the mjbots quad A1 on sloped surfaces. While there are a number of possible means of tackling this, the approach I’ve gone with for now is to estimate the slope of the terrain under the robot, and use that to determine how to position the center of mass. Here I’ll cover the estimation part of this solution.

On paper, the quad A1 has plenty of information to estimate the terrain under its feet. Between the IMU with attitude estimator, the proprioceptive feedback from the joints, and the ability to move the feet around, it would be obvious to a human whether the ground underneath was sloped or level. The challenge here is to devise an algorithm to do so, despite the noise in the IMU, the fact that the feet are not always on the ground, and that as the robot moves, the terrain under it changes.

Approach

My basic approach can be summarized in the following flow chart / block diagram:

First, a brief description of the 3 pertinent reference frames:

  • B Frame: The body frame (or B frame), is centered on the robot body, and rigidly fixed to the robot body. The proprioceptive system eventually calculates each of the 4 feet positions in this frame.
  • M Frame: The CoM frame (or M frame), is centered at the robot’s idle center of mass and oriented such that positive Z points along gravity toward the ground with a heading that is arbitrary at start up, but that tracks the robot’s changing heading.
  • T Frame: The terrain frame (or T frame), is referenced to the M frame at the average height of the legs with a slope that aligns with the average slope of the terrain under the robot.

The algorithm works in roughly the following steps (a rough code sketch follows the list):

  1. First, project all the feet positions into the M frame.
  2. For any in-flight legs, reset the Z value to one calculated from the current TM transform and a 0 T frame Z height.
  3. Fit a plane to these “on-ground” M frame points.
  4. Update the slope of the T frame using this plane with an exponential filter along the X and Y axes.
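
Here’s one way those four steps could look in Python with numpy. The function, the plane-fit formulation, and the stand-in for step 2’s TM transform (a plane through the average stance-foot height at the current slope) are my own simplifications, not the actual implementation:

```python
import numpy as np

def update_terrain_slope(feet_M, in_stance, current_slope, alpha=0.05):
    """One estimation step.

    feet_M: (4, 3) foot positions in the M frame.
    in_stance: (4,) bool, False for in-flight legs.
    current_slope: (2,) filtered terrain slope (dZ/dX, dZ/dY)."""
    current_slope = np.asarray(current_slope, dtype=float)
    pts = feet_M.copy()
    # Step 2: give in-flight legs a Z synthesized from the current estimate
    # (here approximated by the plane through the average stance foot height).
    a, b = current_slope
    avg_z = pts[in_stance, 2].mean()
    flight = ~in_stance
    pts[flight, 2] = avg_z + a * pts[flight, 0] + b * pts[flight, 1]
    # Step 3: least-squares fit of the plane z = a*x + b*y + c.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a_new, b_new, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    # Step 4: exponential filter on the X and Y slopes.
    return (1 - alpha) * current_slope + alpha * np.array([a_new, b_new])

feet = np.array([[0.15, 0.10, 0.31], [0.15, -0.10, 0.29],
                 [-0.15, 0.10, 0.33], [-0.15, -0.10, 0.27]])
print(update_terrain_slope(feet, np.array([True, True, False, True]),
                           np.zeros(2)))
```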

Properties

This algorithm has the benefit that it will converge on the terrain underneath the robot as long as feet touch the ground with regularity, which is a somewhat necessary condition for a robot supported by its legs. The rate at which the estimate converges can be controlled by the filter constant. Selecting that to be the same order as the step frequency does a decent job of rejecting spurious noise while responding in a timely manner to updated terrain.

Next up we’ll see how to use this information to balance, and watch the results in simulation.

Operating on sloped surfaces

Not too long ago, I ran some outdoor experiments, and while piloting the quad A1 around, realized that it wasn’t going to get very far if it was restricted to just flat ground.

Since the control algorithms are completely ignorant of slopes, the center of gravity of the machine can easily get too close to the edge of the support polygon when resting, and similarly fails to stay balanced over the support line during the trot gait.

To get started tackling this, I stuck a configurable ramp in the simulator:

And yes, it fails just as much on the ramp as it did in real life.

Balancing gait in 2D

After getting a gait which looked like it could balance across the leg support line in 1D, I needed to extend that to 2D and try it out on the robot.

Extension to 2D

Extending this to two dimensions wasn’t too bad. I just did a bunch of geometry to follow the path traced out by a given 2 dimensional velocity and rotation rate, intersected with a line segment:

Given this function, the logic to select a swing target is basically the same as in the 1 dimensional case. We now create two “virtual legs”, which consist of two feet ganged together and produce a single support line. At each time instant when all legs are in stance, we look at the time remaining until the center of mass would cross each virtual leg’s support line at the current velocity. As soon as one hits the half-swing point, we start a swing.
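
A simplified sketch of that trigger, assuming a constant 2D velocity with no rotation rate (the path-following geometry above is omitted); the feet positions and swing time below are made up:

```python
import numpy as np

def time_to_cross(com, velocity, foot_a, foot_b):
    """Time until `com`, moving at `velocity`, crosses the support line
    through a virtual leg's two feet; inf if parallel or moving away."""
    d = foot_b - foot_a
    normal = np.array([-d[1], d[0]])         # perpendicular to the line
    dist = np.dot(foot_a - com, normal)      # signed offset to the line
    rate = np.dot(velocity, normal)          # closing rate along the normal
    if dist * rate <= 0.0:
        return np.inf                        # not approaching the line
    return dist / rate

swing_time = 0.25   # seconds, made up
com = np.array([0.0, 0.0])
vel = np.array([0.3, 0.0])
virtual_legs = [(np.array([0.1, -0.15]), np.array([0.1, 0.15])),
                (np.array([-0.2, -0.15]), np.array([-0.2, 0.15]))]
for foot_a, foot_b in virtual_legs:
    t = time_to_cross(com, vel, foot_a, foot_b)
    print("time to cross:", t)
    if t <= 0.5 * swing_time:
        print("-> start swinging the other virtual leg")
```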

As part of this, I extended tplot2 to be able to render the target location of each swing.

Remaining niggles

Once I had an actual robot to test it out on, I found a few other minor problems. When selecting the target for a foot swing, the 1 dimensional case just used the current velocity to move the leg far enough past the pickup point. That doesn’t work out all that well in heavy acceleration though, resulting in very short stance times. What worked much better was to use the expected velocity at the end of the next stance window. That provided much more consistent stance spacings even when accelerating or decelerating.

I still haven’t come up with a great solution for turning in place, for which the entire concept of estimating distance to the balance point doesn’t make sense. Right now, I just rely on the maximum stance time to ensure the legs eventually step in that scenario and don’t extend beyond their physical limits. I’ll probably eventually add a “time to infeasible geometry” criterion to handle that.

Testing it out

While there is still a lot of remaining work, this increased the maximum possible speed of the machine by at least a factor of 2, and the feasible acceleration by a factor of 10 or so. In the video below, I’ve got some clips of the robot walking around at 0.5m/s. That’s not too fast, but is more than a body length per second which counts for something.

Primitive gait balancing – 1D

I’m working to improve the walking gait of the mjbots quad A1. In this iteration, I wanted to tackle an incremental step towards a more fully dynamic gait, but one that will still greatly increase the capability of the machine. As mentioned last time, the current walking gait cycles between having all four legs on the ground and swinging alternating pairs of opposing corner legs in order to move laterally. I’d like to keep that same basic structure, but be a bit smarter about what happens during the swing phase.

Setup

First, the obvious: When the robot has legs on the ground, they support it. If all four legs are on the ground, they form a support polygon. If the center of mass of the robot is within this polygon when projected onto the ground, the robot will happily sit there forever without tipping over.

During the flight phase of the gait, only two legs are in contact with the ground. Now, we no longer have a support polygon, but a support line. The center of mass of the robot is either on one side of it or on the other. Well, I suppose it could be exactly over, but then the slightest breeze would push it to one side or the other. Whichever side the robot is on, gravity will exert a torque and cause it to continue to tip over in that direction.

The current gait sequencer is oblivious to this fact. It will happily pick up legs when the center of mass is already far from the resulting support line, and then move it even further away. Needless to say, this results in the robot tipping over rather rapidly.

The “Idea”

Clearly the robot can’t move while keeping itself exactly over the support line, but can we arrange the gait such that it spends equal amounts of time on both sides (to be more precise, such that the integral of the distance from the support line over the time of the swing cycle is 0)? If we could, that would result in the minimum possible angular rotation during the swing phase, which would hopefully make things a lot better without a whole lot more work.

In this post, I’ll consider the problem in 1 dimension, along a line as a simplification. Given that the robot can only move in one direction at a time, we should be able to generalize the 1 dimensional case to the 2 dimensional case later.

For a constant velocity and a fixed swing time, the answer is pretty clearly yes. If we start the swing phase right when the center of mass is half a swing time’s worth of travel away from the opposing support, then the swing will complete when it is an equal distance past the support on the other side, spending exactly the same amount of time on each side of the support line.

In fact, this also holds for a more general case, as long as we don’t attempt to accelerate during the swing phase itself. All that is necessary is to start the swing when the center of mass is half a swing time away from the support point at the current velocity, and not accelerate until the foot is down again.
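
A quick numerical check of that claim, with made-up numbers: starting the swing half a swing time’s travel before the support point, at constant velocity, makes the signed distance to the support integrate to zero over the swing:

```python
swing_time = 0.3   # seconds, made up
velocity = 0.4     # m/s, constant through the swing
support = 0.0      # support point position (m)
start = support - velocity * swing_time / 2.0   # half a swing time away

dt = 1e-4
integral = 0.0
steps = int(swing_time / dt)
for i in range(steps):
    com = start + velocity * (i + 0.5) * dt   # midpoint-rule integration
    integral += (com - support) * dt
print(f"integral of signed distance: {integral:.6f}")   # ~0
```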

Granted, in addition to the one dimensional simplification, in this model the feet are allowed to be infinitely far away from the center of mass. However, if we just apply a maximum time spent with all legs in stance, that places a limit on how far away the legs can get, with only a slight decrease in performance.

The model also breaks down when the deceleration causes the robot to change directions. A simple heuristic is to change the leg order in that case.

Summary

Well, that looks pretty good in 1 dimension. Next up I’ll extend it to 2 dimensions, and try it out on the robot.

First steps towards more dynamic gaits

Now I’ve got a machine, the mjbots quad A1, which is capable of dynamic motions, but the only gait which takes advantage of these capabilities is the pronking one. That gait has the benefit that the dynamics are very simple. The entire time that the robot is in contact with the ground, it is in contact with all 4 legs, so in that regime it is fully controllable. Since it is fully controllable up to the point of lift-off, we can ensure that there is basically zero rotational rate while the machine is mid-flight, which means that it lands with all four legs largely at the same time. Of course, pronking isn’t a very fast or efficient way of getting anywhere, so I wanted to take the first steps… I guess pun intended, towards improving the more general walking algorithm to make the machine move faster in a more robust manner.

My previous solution

As described here, the gait I was using previously is conceptually very similar to a static IK based gait. The sequencing works by picking up and moving opposing pairs of legs in an alternating fashion. As opposed to a purely IK based solution, the inverse dynamics are accounted for. During the motion, each servo is commanded with appropriate velocities to achieve the end foot velocity profile and appropriate forces are commanded to result in a ground reaction force that matches the mass of the robot.
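
As a rough illustration of the feedforward force idea (a sketch only, not the actual controller): each stance foot is commanded a vertical ground reaction force whose sum supports the robot’s weight. The mass and leg counts below are examples:

```python
# Illustrative numbers only; not the quad A1's real mass.
MASS_KG = 10.0
GRAVITY = 9.81  # m/s^2

def stance_feedforward_force(n_stance_legs):
    """Per-leg vertical force (N) so the legs jointly support the weight."""
    return MASS_KG * GRAVITY / n_stance_legs

print(stance_feedforward_force(4))   # all four feet down
print(stance_feedforward_force(2))   # two-foot support during a swing
```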

However, there are many things this doesn’t take into account. Among them are the linear and rotational inertia of the overall machine, the torque that the gravity vector exerts on the center of mass, and the dynamics of the legs themselves, which are assumed massless. It also uses the high rate feedback from the servos in a very limited way, solely to apply a 3D force and velocity command. Thus it completely ignores whether a leg strikes the ground early, either because of an obstacle or because the machine tipped, or strikes the ground late or never because of a depression or hole. Because of all this, it also requires periods where all four legs are on the ground simultaneously in order to maintain stability.

In this simple gait on flat ground, most problems manifest as the robot tipping during the flight phase, resulting in one flight leg striking early, which then results in a high angular rate correction as the controller tries to jam it back to the desired position. This can end up with uncontrolled oscillation of the robot in a wide range of hard to define operating conditions. Note, these are roughly the same problems that most IK based gait engines have, such as on the original Super Mega Microbot.

You can make this gait work passably if you carefully tune things and stick within a limited acceleration and terrain operational envelope, but overall it is rather fragile.

Plan

I’m not planning on tackling all of these problems in one go, or just out and out copying another solution, but intend to take small documentable steps towards a more capable machine. Up next I’ll describe my first baby steps towards this, where I look to manage the gravity torque during the flight phase to keep the machine stable even under acceleration.

Cartesian leg PD controller

As I am working to improve the gaits of the mjbots quad A1, one aspect I’ve wanted to tackle for a long time is improving the compliance characteristics of the whole robot. Here’s a small step in that direction.

Existing compliance strategy

The quad A1 uses qdd100 servos for each of its joints. The “qdd” in qdd100 stands for “quasi direct drive”. In a quasi direct drive actuator, a low gearing ratio is used, typically less than 10 to 1, which minimizes the amount of backlash and reflected inertia as observed at the output. Then, high rate electronic control of torque in the servo, based on current and position feedback, allows for dynamic manipulation of the spring and damping behavior of the resulting system.
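
For intuition, joint-level compliance of this kind can be sketched as a simple impedance law evaluated at a high rate. The gains and function signature here are illustrative only, not the qdd100 firmware’s actual API:

```python
def joint_torque(q_des, dq_des, q, dq, tau_ff, kp=8.0, kd=0.15):
    """Commanded joint torque (N*m) for one high-rate control cycle:
    a virtual spring and damper around the target, plus feedforward."""
    return kp * (q_des - q) + kd * (dq_des - dq) + tau_ff

print(joint_torque(q_des=1.0, dq_des=0.0, q=0.95, dq=0.2, tau_ff=0.5))
```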

Another option is a series elastic actuator, which uses a traditional high gear reduction servo with a mechanical spring or elastic mechanism inline with the load. Sometimes a separate motorized actuation mechanism can be used to vary the damping properties of the elastic element. This is in principle similar to the quasi direct drive approach, but suffers from a limited overall control bandwidth. Despite being “springy”, QDD servos are still able to have a very high effective mechanical control bandwidth, on the order of hundreds of hertz.

For the quad A1 to date, the compliance it exhibits is largely due to the qdd100’s internal control algorithms, and to a very minor extent, flexing in the mechanical structures of the quad A1 itself. This does work, and gives decent results.

Limitations

The biggest limitation of solely using this approach is that, since the compliance is performed at the joint level, it has no knowledge of the current 3D configuration of the leg. The resulting compliance in 3D space is highly non-linear and depends upon where in configuration space the leg is at that point in time. For instance, if the back legs are configured to have the knee very bent, but the front legs are not, then the back knee needs a much larger restorative torque per unit rotation to have the same linear restorative force at the tip of the leg.

That results in artifacts like shown in the video at the bottom. When the robot falls with the legs not in an identical configuration, the robot ends up pitching or rolling depending upon how the compliance interacts with the current leg geometry.

A “fix”

In my original designs for the moteus controller, I had left a high rate “inter-leg” bus option in the design, where each controller could exchange IK information at the full control rate, so that all compliance could be performed in the 3D space, rather than in joint space. However, as the design progressed, and I failed to implement it, I dropped that capability to simplify and reduce costs.

Here, I ended up implementing something purely in software which doesn’t have the same level of performance as that system would have, but also doesn’t require additional dedicated high rate communication transceivers on every servo control board. The 3D PD controller is just run on the Raspberry Pi at the regular control update rate (400Hz currently). That makes the control flow look like this:
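
A minimal numpy sketch of what a Cartesian (XYZ) PD controller for one leg could look like, mapped to joint torques via the Jacobian transpose. The gains, the placeholder Jacobian, and the example state are all made up; this is a sketch of the technique, not the quad A1’s code:

```python
import numpy as np

KP = np.diag([800.0, 800.0, 1200.0])   # N/m, per-axis stiffness
KD = np.diag([20.0, 20.0, 30.0])       # N/(m/s), per-axis damping

def leg_xyz_pd(x_des, v_des, x, v, jacobian):
    """Joint torques for one 3-DoF leg from a virtual 3D spring-damper.

    jacobian: (3, 3), maps joint velocities to foot XYZ velocity."""
    force = KP @ (x_des - x) + KD @ (v_des - v)   # restoring force in 3D
    return jacobian.T @ force                     # map to joint torques

tau = leg_xyz_pd(x_des=np.array([0.0, 0.0, -0.20]), v_des=np.zeros(3),
                 x=np.array([0.01, -0.005, -0.195]), v=np.zeros(3),
                 jacobian=np.eye(3) * 0.1)        # placeholder Jacobian
print(tau)
```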

Results

While this solution isn’t perfect, it does give better results in many scenarios. I applied some disturbances to the robot with either solely joint level controllers, or joint plus XYZ controllers. For the two cases, I tried to tune the controllers to a similar level of stiffness and damping to make the comparison as fair as possible. Walking is generally improved as well, even with just a constant compliance throughout the gait cycle.

Improved swing trajectory

Now that I finally have tplot2 working sufficiently to diagnose problems in 3D, it is time to start actually fixing those problems. The first obvious thing I noticed when watching data replay was that the legs scooted around a lot after making contact with the ground. Absent 3D visualization, I knew something was wrong, but couldn’t easily tell what.

Diagnosing the first problem

Once I was able to plot the commanded position and velocity trajectory, I could clearly see a number of problems. For one, the trajectory was not terribly achievable. The velocity jumped in a discontinuous manner between different phases of the swing cycle, which resulted in large tracking errors when moving the physical legs:

Also, there are those odd periods near the downturn where the commanded Z velocity goes to exactly zero for a while, then resumes its downward trend in a non-physical manner.

When I first wrote the simple walk cycle, I didn’t spend a whole lot of (well, almost zero) time debugging it, as I didn’t have appropriate debugging tools. Clearly it wasn’t working and something better needed to be done.

Updated swing trajectory

While not the entirety of the problem by any stretch, I figured fixing the swing trajectory was a fine first step that would be mostly independent of any other resolutions. I wanted the swing phase of the leg movement to have a few properties:

  • Continuous velocity profile (I don’t care about jerk)
  • When lifting off and touching down, maintain the ground velocity for a brief period of time
  • For now, I’m not doing whole body control, so the trajectory can be scripted, and it is acceptable to lock in the target position at foot liftoff time

I decided to tackle the problem independently in the Z axis and in the XY plane. In both cases, the approach is based on piecewise cubic bezier curves. In one dimension, these curves have a continuous first and second derivative, but only the position and first derivative are controllable.

For the equation:

x = t^3 + 3 * t^2 * (1 - t)

The position, velocity, and acceleration are as follows:
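
Expanding x(t) = t^3 + 3t^2(1 - t) gives x = 3t^2 - 2t^3, so the velocity is 6t - 6t^2 and the acceleration 6 - 12t, with zero velocity at both endpoints. A tiny evaluation sketch:

```python
def smooth_step(t):
    """Position, velocity, acceleration of the cubic at t in [0, 1]."""
    x = 3.0 * t**2 - 2.0 * t**3
    v = 6.0 * t - 6.0 * t**2
    a = 6.0 - 12.0 * t
    return x, v, a

for t in (0.0, 0.5, 1.0):
    print(t, smooth_step(t))
```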

Z axis

To generate the Z trajectory, we’ll just stick two of these back to back in a mirrored fashion, so the Z height rises to a peak at the halfway point, then lowers back to the original value with a continuous velocity, reaching exactly 0 velocity at the touch down point. That makes the overall Z trajectory look like:
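
That mirrored profile could be sketched as follows, reusing the cubic from above; the lift height and phase parameterization are illustrative choices, not the robot’s actual parameters:

```python
def z_swing(phase, lift_height=0.05):
    """Z offset over a swing phase in [0, 1]: rise to a peak at 0.5 and
    mirror back down, with zero Z velocity at liftoff and touchdown."""
    t = 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)
    return lift_height * (3.0 * t**2 - 2.0 * t**3)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, z_swing(p))
```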

X-Y Plane

In the X-Y plane, I broke up the swing into 3 piecewise sections. The first is a constant deceleration profile from the initial velocity to 0, and the last section is a constant acceleration profile from 0 to the target velocity. The middle section is just a single cubic bezier curve independently applied in the X and Y axes. A sample trajectory (with velocities shown as vectors), might look like:
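
One way to sketch that three-piece profile in Python, with velocity continuous at the segment boundaries; the 20% ramp fraction and all inputs are arbitrary examples of the structure described above, not the actual implementation:

```python
import numpy as np

def xy_swing(p_start, v_start, p_end, v_end, swing_time, n=101):
    """Sample a swing path: constant deceleration to rest, cubic bezier
    middle (zero end velocities), constant acceleration to the target
    velocity.  First/last 20% of the phase are the ramps."""
    f = 0.2
    t1 = f * swing_time
    p1 = p_start + v_start * t1 / 2.0      # position after the decel ramp
    p2 = p_end - v_end * t1 / 2.0          # position before the accel ramp
    out = []
    for t in np.linspace(0.0, swing_time, n):
        if t < t1:                          # constant deceleration
            s = t / t1
            out.append(p_start + v_start * t1 * (s - s * s / 2.0))
        elif t > swing_time - t1:           # constant acceleration
            s = (t - (swing_time - t1)) / t1
            out.append(p2 + v_end * t1 * (s * s / 2.0))
        else:                               # cubic middle segment
            s = (t - t1) / (swing_time - 2.0 * t1)
            out.append(p1 + (p2 - p1) * (3.0 * s * s - 2.0 * s ** 3))
    return np.array(out)

path = xy_swing(np.array([0.0, 0.0]), np.array([0.2, 0.0]),
                np.array([-0.1, 0.0]), np.array([-0.2, 0.0]),
                swing_time=0.3)
print(path[0], path[50], path[-1])
```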

Then to put the Z and X/Y pieces together, here’s a plot in the XZ plane of a similar system:

So yes, it seems to be doing what we want in that the velocity is continuous in all 3 axes — we lift off gradually, perform our swing, then set back down gradually.

Testing on the robot

Well, I actually tested it first in simulation, but where’s the excitement in that! Here’s what the tplot2 video looks like with the new leg trajectory in a slightly stuttery GIF:

The green and blue feet in the 3D view show that the legs track the control points well, and that 2D plots shows that yes, the Z position and velocity are smooth and continuous as we desired.

Resurrected quadruped simulator

Thankfully, I’m now at the point where I’m fixing actual dynamics problems on the robot.  Doubly thankfully I have a robot which is pretty robust and keeps working!  That said, it is still, shall we say, “non-ideal”, to be testing code for the first time ever on a real robot.

Back with my HerkuleX based Super Mega MicroBot, I had a working DART based simulation which was decently accurate.  However, the actuators for that machine were so limited that it didn’t really make sense to do any work in simulation.  The only way to be effective with that machine was to tweak and tweak on the real platform and rely on exactly the right amount of bouncing and wiggling that would get it moving smoothly.

Now that I can accurately control force at 400Hz and beyond, that isn’t a problem anymore, so I’m working to resurrect the bitrotted simulator.  In the end though, it turned out to be a complete re-write as basically nothing of the original made sense to use.

Here’s a video of the very first time it moved around in sim (which means there are still many problems left!)

Simple walking gait on the quad A1

When I restructured my control laws to take advantage of high rate force feedback for the pronking experiments, I never actually managed to port the walking gait.  Now that I have a brand new robot, it seemed like a good time!

This gait is, in principle, basically the same one I ran on the quad A0.  The opposing feet are picked up according to a rigid schedule, and moved to a point opposite their “idle” position based on the current movement speed.  Any feet that are completely placed on the ground just move with the inverse of the robot’s velocity.

What differs now is that the leg positions and forces are controlled in 3D at a high rate, 400Hz for now.  At each time step, the position and velocity of all 12 joints are measured.  The gait algorithm calculates a desired 3D position, velocity, and force.  Feedforward force is currently only used to control the weight supporting legs.  Then, those 3D parameters are transformed into a joint position, velocity, and force based on the current joint position, and the command is sent out.
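
That per-timestep mapping could be sketched like this; the IK function and Jacobian below are trivial placeholders standing in for the real leg model:

```python
import numpy as np

def to_joint_command(x_des, v_des, f_des, q, leg_ik, leg_jacobian):
    """Map one leg's desired 3D state to joint position/velocity/torque."""
    q_des = leg_ik(x_des)                  # joint positions via leg IK
    J = leg_jacobian(q)                    # Jacobian at the *current* q
    dq_des = np.linalg.solve(J, v_des)     # joint velocities from 3D velocity
    tau_ff = J.T @ f_des                   # feedforward torques from 3D force
    return q_des, dq_des, tau_ff

print(to_joint_command(
    x_des=np.array([0.0, 0.0, -0.2]), v_des=np.array([0.05, 0.0, 0.0]),
    f_des=np.array([0.0, 0.0, 25.0]), q=np.zeros(3),
    leg_ik=lambda x: x * 10.0,                # placeholder IK
    leg_jacobian=lambda q: np.eye(3) * 0.1))  # placeholder Jacobian
```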

While not conceptually too different, just controlling the system in 3D at a high rate gives significantly improved results for a range of walking parameters.  There is still a lot left to do, but it is a good start!

(also, a limited number of the qdd100 servos are now open for sale to beta testers at https://mjbots.com/products/qdd100-beta-developer-kit/)