Archives: Robots

LCD Enclosure for Savage Solder

One of the minor improvements we had planned for Savage Solder, our robomagellan entry, was to build a new enclosure for our LCD. I used this as an opportunity to experiment with some new manufacturing techniques in order to make something that was both more customized and still looked relatively professional.

In our previous incarnation of Savage Solder, our laptop was generally left closed while the car was in motion to save wear and tear on its hinges. During those periods, the LCD provided information on which stage of the plan the software was in, whether it was homing on a cone, and other auxiliary pieces of status. Watching the display as the car was moving added a lot of value over just looking at the results after the fact, as we had a lot more context.

FreeCAD Model of Enclosure

My plan was to build an enclosure box using custom machined faces of ABS plastic, similar to Gary Chemelec’s approach documented at http://chemelec.com/Projects/Boxes/Boxes.htm. Basically, you cut sheets of ABS plastic into the correct dimensions for each edge, machine any holes for panel mounts, and then weld the result together with a plastic solvent.

Since I have been experimenting with some 3D printing, I drew up a 3D solid model of the display and an enclosure in FreeCAD. This gave me some confidence that I would be able to get the mounting hardware lined up properly with the display, and that the box dimensions would be big enough to encompass all the resulting hardware. In addition to the raw ABS box, a sheet of clear polycarbonate is inserted in between the LCD and the ABS cutout to better protect the LCD.

FreeCAD Model of Button

With that model, I ordered some ABS sheet from Amazon, and went down to the Artisan’s Asylum to cut each side to the correct dimensions and machine the various holes. I used a jump shear to get each piece to roughly the correct dimensions, then the Bridgeport mill with an Onsrud end mill bit to get each side square and as close to the correct dimensions as I could. This portion of the process didn’t go as smoothly as I would have liked, as I broke two mill bits making novice mistakes on the milling machine. I had planned on milling out the large LCD face using the mill as well, but instead of ordering a third mill bit, I just drilled a long series of holes using a standard ABS drill bit and completed the final cut and polish with a Dremel tool.

The welding process went relatively smoothly. I used an ABS glue compound that was a mixture of MEK (methyl ethyl ketone) and acetone, and worked one edge at a time. The first edge was clamped to a square aluminum tube for alignment; the others self-aligned against the already installed edges.

Unpopulated LCD Control Board

For the buttons on the right side of the display, I built a small backing board with a few momentary contact switches, then 3D printed a set of plastic buttons to fit into the front face holes that would be captured between the front face and the momentary contact switch. I used small bits of sandpaper to finish the face holes so that the buttons would have a snug, but freely moving fit.

The LCD itself was a 40x4 character version for which Mikhail had previously developed a hand-built ATtiny board to convert it to RS232. The hand-built board wouldn’t fit easily in the enclosure, so I drew up a PCB based on an ATMega32U4 which would go straight to USB, take up a bit less space, and use up some of the old parts I have lying around the lab. There were only two minor problems bringing up that board when it arrived. First, I did not realize that PORTF on the ATMega32U4 comes from the factory reserved for JTAG, whereas I was expecting it to be immediately available for GPIO. You can select the mode either with a fuse, or by configuring the appropriate registers in software. Second, the LCD didn’t have any documentation, and at the time I didn’t have the source code to the original controller, so I mostly guessed at the HD44780 pinout, with the assumption that I could change things around in firmware later if things proved wrong. My guesses were actually mostly correct, but I got the ordering of the HD44780’s 4 data pins mixed up, so some bit re-arranging was required in the AVR that I hadn’t intended.
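
For reference, a minimal sketch of the software route, which is the timed write sequence from the datasheet (call it with interrupts still disabled):

#include <avr/io.h>

/* Reclaim PORTF as GPIO by disabling JTAG at runtime. The JTD bit is
   protected by a timed sequence: it must be written twice within four
   clock cycles. The direct assignment also clears the other MCUCR
   bits, which default to 0. */
void jtag_disable(void) {
  MCUCR = (1 << JTD);
  MCUCR = (1 << JTD);
}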

The final enclosure is pictured below; hopefully soon I will have pictures of it mounted on Savage Solder itself!

Final Savage Solder LCD Enclosure

Enclosure Interior

Autonomous Racing Rotorcraft: Camera Driver

In the last post on my autonomous racing helicopter I managed to get the OV9650 camera communicating in a most basic way over the I2C bus. In order to actually capture images from it, the next step is to get a linux kernel driver going which can connect up the DM3730’s ISP (Image Signal Processor) hardware and route the resulting data into user space.

After some preliminary investigation, I found the first major problem. In the linux tree as of 3.6, there is a convenient abstraction in the video4linux2 (V4L2) universe for system on chips, where the video device as a whole can represent the entire video capture chain, and each new “sensor” only needs a subdevice created for it. There are in fact a lot of sensor drivers already in the mainline kernel tree, including one for the closely related OV9640 camera sensor. The downside, however, is that there are two competing frameworks that sensor drivers can be written to: the “soc-camera” framework and the raw video4linux2 subdevice API. Sensors written for these two frameworks are incompatible, and from what I’ve seen, each platform supports only one of the frameworks. Of course, the omap3 platform which is used in the DM3730 only supports the video4linux2 API, whereas all the similar sensors are written for the “soc-camera” framework!

Laurent Pinchart, a V4L2 developer, has been working on this effort some, but I had a hard time locating a canonical description of the current state of affairs. Possibly the closest thing to a summary of the state of the soc-camera/v4l2 subdev universe can be found in this mailing list post:

Subject: Re: hacking MT9P031 for i.mx
From: Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>
Date: Fri, 12 Oct 2012 15:11:09 +0200

...

soc-camera already uses v4l2_subdev, but requires soc-camera specific support
in the sensor drivers. I've started working on a fix for that some time ago,
some cleanup patches have reached mainline but I haven't been able to complete
the work yet due to lack of time.

--
Regards,

Laurent Pinchart

Since I don’t have a lot of need to upstream this work, I took the easy route and started with an existing v4l2 subdevice sensor, specifically the mt9v034 which is in the ISEE git repository. I copied it and replaced the guts with the ov9640 “soc-camera” driver from the mainline kernel. After a number of iterations I was able to get a driver that compiled, and appeared to operate the chip.
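
Schematically, the conversion amounts to moving the ov9640 register logic into the v4l2_subdev ops tables that the omap3isp expects. A rough skeleton is below; the names and empty bodies are illustrative, not the actual driver:

#include <media/v4l2-subdev.h>

static int ov9650_s_stream(struct v4l2_subdev *sd, int enable)
{
        /* program the sensor's registers to start or stop streaming */
        return 0;
}

static int ov9650_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
                          struct v4l2_subdev_format *fmt)
{
        /* report the currently configured media bus format */
        return 0;
}

static const struct v4l2_subdev_video_ops ov9650_video_ops = {
        .s_stream = ov9650_s_stream,
};

static const struct v4l2_subdev_pad_ops ov9650_pad_ops = {
        .get_fmt = ov9650_get_fmt,
        /* .set_fmt, .enum_mbus_code, and friends go here as well */
};

static const struct v4l2_subdev_ops ov9650_ops = {
        .video = &ov9650_video_ops,
        .pad = &ov9650_pad_ops,
};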

To test, I have been using Laurent Pinchart’s media-ctl and yavta tools. The media controller framework provides a general way to configure the image pipelines on system on chips. With it, you can configure whether the sensor output flows through the preview engine or the scaler, and in what order, if any. yavta is just a simple command line tool to set and query V4L2 controls and do simple frame capture.

One of the first images with correct coloring is below. Single frame capture out of the box was recognizable, but the quality was pretty poor. Also, as more frames were captured, the images became more and more washed out, with all the pixel values approaching the same bright gray color. That problem I will tackle in a later post.

First Image from OV9650

Autonomous Racing Rotorcraft: Initial Camera Exploration: I2C Smoke Test

I am now far far down the rabbit hole of trying to validate a camera for the low altitude altimetry of my prototype autonomous racing helicopter. In the last post I got to the point where I could build system images for my IGEP COM Module that included patches on top of the ISEE 3.6 linux kernel. The next step was to use that ability to turn on the clock at the TI DM3730’s external camera port.

First, what is the path by which a normal camera driver turns on the clock on the IGEP? To discover this, I traced backwards from the board expansion file for the CAMR0010 produced by ISEE, just because it was the easiest thread to start pulling on. In the board expansion file, “exp-camr0010.c”, a function is defined specifically to configure the ISP’s (Image Signal Processor) clock:

static void mt9v034_set_clock(struct v4l2_subdev *subdev, unsigned int rate)
{
        struct isp_device *isp = v4l2_dev_to_isp_device(subdev->v4l2_dev);

        isp->platform_cb.set_xclk(isp, rate, ISP_XCLK_A);
}

However, in the absence of a full camera driver, it was not entirely clear how to get a hold of a “struct isp_device*” that you could use to configure the clock. To understand more, I traced the many layers this function is passed down through before leaving the board expansion source file:

  • mt9v034_platform_data: This structure was defined by ISEE and is exposed from the new mt9v034 driver.
  • i2c_board_info: The mt9v034_platform_data structure is passed into this one as the “.platform_data” member.
  • isp_subdev_i2c_board_info: The i2c_board_info structure is passed as the “.board_info” member of this structure.
  • isp_v4l2_subdevs_group: The board_info structure is passed in here as the “.subdevs” member.
  • isp_platform_data: The subdevs group is in turn passed in here as the “.subdevs” member.
  • omap3_init_camera: Finally, the platform_data structure is passed in here.
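
Written out, the nesting looks roughly like the sketch below (the I2C address, adapter id, and interface values are placeholders, not the actual CAMR0010 ones):

static struct mt9v034_platform_data mt9v034_pdata = {
        .set_clock = mt9v034_set_clock,
};

static struct i2c_board_info mt9v034_i2c_info = {
        I2C_BOARD_INFO("mt9v034", 0x48),
        .platform_data = &mt9v034_pdata,
};

static struct isp_subdev_i2c_board_info camera_i2c_subdevs[] = {
        { .board_info = &mt9v034_i2c_info, .i2c_adapter_id = 2, },
        { },
};

static struct isp_v4l2_subdevs_group camera_subdevs[] = {
        { .subdevs = camera_i2c_subdevs, .interface = ISP_INTERFACE_PARALLEL, },
        { },
};

static struct isp_platform_data isp_pdata = {
        .subdevs = camera_subdevs,
};

/* ...which is finally handed to omap3_init_camera(&isp_pdata); */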

Eventually, this clock setting callback is stashed inside the mt9v034 driver, where it is invoked in a couple of places. Yikes! I tried to backtrack this route to get an isp_device, but had no luck. What did end up working was grabbing the driver, and then the device, by name (error checking and the “match any” function omitted for clarity):

struct device_driver* isp_driver;
struct device* isp_device;
struct isp_device* isp;

/* Locate the omap3isp platform driver, then its (single) bound device. */
isp_driver = driver_find("omap3isp", &platform_bus_type);
isp_device = driver_find_device(isp_driver, NULL, NULL, match_any);
isp = dev_get_drvdata(isp_device);

I then exposed this functionality through a simple debugfs entry that appears at /sys/kernel/debug/arr_debug/xclka_freq (when debugfs is mounted, of course). With that, I was able to write frequencies from the command line and get the external clock to run at any frequency I chose. Yay!
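
The plumbing for that entry is only a few lines. A sketch, assuming the “isp” pointer found above (DEFINE_SIMPLE_ATTRIBUTE generates the file operations from a get/set pair):

#include <linux/debugfs.h>

/* Invoked on each write to the debugfs file with the parsed value. */
static int xclka_freq_set(void *data, u64 val)
{
        struct isp_device *isp = data;

        isp->platform_cb.set_xclk(isp, val, ISP_XCLK_A);
        return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(xclka_freq_fops, NULL, xclka_freq_set, "%llu\n");

static void arr_debug_init(struct isp_device *isp)
{
        struct dentry *dir = debugfs_create_dir("arr_debug", NULL);

        debugfs_create_file("xclka_freq", 0200, dir, isp, &xclka_freq_fops);
}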

There was one final piece to the puzzle before I could claim the camera was functional. The OV9650, while electrically compatible with I2C, is not an SMBus device. The standard linux command line tools, i2cget and friends, were not able to drive the camera in a useful way. To get over the final hurdle, I wrote a simple user-space C program which opens “/dev/i2c-3”, sets the slave address using the I2C_SLAVE ioctl, and then uses the bare “read” and “write” APIs to send and receive bytes of data. With this, I was able to extract the product identifier 0x9652 from the chip! I guess it is likely a subsequent revision of the 9650.
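
A minimal sketch of that program is below. The 0x30 slave address and the PID/VER registers at 0x0a/0x0b follow the usual OmniVision conventions, and are worth double checking against your own module:

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int read_reg(int fd, unsigned char reg) {
  unsigned char value;
  /* SCCB-style read: write the register address, then read one byte. */
  if (write(fd, &reg, 1) != 1) { return -1; }
  if (read(fd, &value, 1) != 1) { return -1; }
  return value;
}

int main(void) {
  int fd = open("/dev/i2c-3", O_RDWR);
  if (fd < 0) { return 1; }

  /* The OV9650's SCCB bus looks like a 7-bit I2C slave at 0x30. */
  if (ioctl(fd, I2C_SLAVE, 0x30) < 0) { return 1; }

  int pid = read_reg(fd, 0x0a);  /* product ID high byte */
  int ver = read_reg(fd, 0x0b);  /* product ID low byte */
  printf("product id: 0x%02x%02x\n", pid, ver);

  close(fd);
  return 0;
}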

Autonomous Racing Rotorcraft: Initial Camera Exploration: System Image

In my previous post I described how I wired up an OV9650 based camera module to my IGEP COM Module. The ultimate goal, of course, is a low cost, reliable altimeter for my autonomous racing helicopter. When we left off, I had the camera breadboarded up and connected to the IGEP COM Module, and wanted to drive it over the I2C interface to verify it was working. While the IGEP COM Module’s camera I2C interface is easily exposed to linux user space, the OV9650 doesn’t respond to I2C commands without a clock being present. Turning on the clock turned out to be quite a challenge.

First I explored the possibility of existing user space tools which might be able to poke directly into /dev/mem and activate the clock. Promisingly, I found omapconf, a user space tool which claimed to provide complete control over the clock systems of TI’s OMAP chips. Unfortunately, it only operated on OMAP4 and the unreleased OMAP5, not the OMAP3 that is in the IGEP COM Module. The source code wasn’t particularly helpful for quick inspection either. There is a lot of complexity in manipulating the processor’s clock registers, and implementing it from the datasheet didn’t seem like a particularly fruitful use of time. Thus discouraged, I moved on to the next possibility, kernel mode operations.

The linux kernel for the IGEP COM Module already had routines designed to manipulate the clock registers, so why not use those? Well, to start with, building a custom kernel (and suitable system image), for this board was hardly well documented. After much consternation, and then much patience, I ended up piecing together a build setup from various IGEP wiki pages that allowed me to mostly reproduce the image that came on my board. It consisted of the following steps:

  • Yocto: A tool for creating embedded linux distributions. It is based on OpenEmbedded recipe files, and includes reference layers for building a distribution.

    git clone -b denzil git://git.yoctoproject.org/poky
    
  • ISEE Layer: ISEE publishes a yocto layer for their boards which includes recipes for a few packages that don’t yet exist in yocto.

    git clone -b denzil git://git.isee.biz/pub/scm/meta-isee.git
    

Once the relevant git trees were cloned, you still have to generate a build/ directory using the “oe-init-build-env” script, and modify files in that directory to configure the build. The “How to get the Poky Linux distribution” wiki page on the ISEE website had some hints there as well.

With this configured, I was able to get a system image that closely matched the one that came with my board. In addition, I added a local layer to hold my modifications, then proceeded to switch to the as-yet-unreleased ISEE 3.6 kernel, which has support for the ISEE camera board. My supposition is that the OV9650 camera I’m working with is close enough to the mt9v034 on the CAM BIRD that I will run into many fewer problems using their kernel. For my local development, I cloned the ISEE kernel repository, and have pointed my local layer’s linux kernel recipe at the local git repository. This allows me to build local images with a custom patched kernel. Now I am truly able to move on to the next step: driving the clock!

ARR: Initial Camera Exploration: Wiring

What I expected to be the hardest hardware portion of the ARR has not disappointed – the camera. To recap, I am in the process of designing and building an autonomous helicopter. As part of that, I wanted to use a vision system combined with a laser pointer to provide low cost altimetry during takeoff and landing. The IGEP COM Module has a dedicated camera port with support for both camera and serial interfaces, and there are a large number of cheap camera phone camera modules out there which should be sufficient. How hard can it be?

I started with a TCM8230MD from SparkFun, for which I simultaneously ordered a breakout board from some random online retailer. Unfortunately, the breakout board never came. I tried using some 26 gauge wire I had around and a magnifying glass to solder up a by-hand breakout board, to no avail. Finally, I broke down and found http://sigalrm.blogspot.com/2011/03/tcm8230md-breakout.html, which posted eagle files for a breakout board. I dutifully submitted them to batchpcb and waited.

TechToys OV9650 Module

While I was waiting for batchpcb, I simultaneously ordered an OV9650 based board from techtoys, who also sold a breakout (pictured right). While the breakout was on 1mm spacing instead of 0.1in spacing, it was still a lot easier to experiment with. The Hong Kong post has gotten remarkably good: this development board arrived long before the batchpcb breakout did, so I started integrating it. This board operates at 3.3V, while the TI DM3730 in the IGEP COM Pro uses 1.8V for all of its external IO. To mediate between them for logic level signals, I used a bank of SN74LVC245ANs from digikey, along with 2 discrete N-FETs to shift the bidirectional I2C signals as described in this application note from NXP.

The next challenging step was getting the camera port on the IGEP COM Pro into a usable form. It has a 27 pin 0.3mm FFC style connector, which, if you haven’t seen one before, is tiny! To make this work, I got a 4 inch passthrough cable and mating connector from digikey, and this 0.3mm FFC breakout from proto-advantage.com. I soldered down the FFC connector to the breakout using my hot air rework station, but should have used an iron and solder-wick instead, as the plastic on the connector melted and bubbled up a bit during soldering. Fortunately, it still worked, so I didn’t need to use my spare.

20121222-arr-ov9650-breadboard.jpg

At this point, I had the camera and IGEP COM Module nominally wired up in a way that should be usable. The final breadboard is pictured above. To make progress from here, I wanted to try a smoke test: communicate with the camera over its I2C interface to verify that it is powered and working. However, I quickly discovered that it wouldn’t do anything unless it was supplied with a valid clock. This turned into my next odyssey – trying to enable the clock output on the TI DM3730 in the IGEP COM Module. That is involved enough to be the subject of a later post.

ARR: Platform Controller Testing

This is a followup to my last post discussing the initial platform controller feasibility experiments for the ARR, my autonomous helicopter.

We left off last time with a couple of candidate designs for the platform controller and some rough requirements that it needed to meet. What I wanted to do was devise a test procedure that could ensure that it would meet all the functional, as well as performance requirements, preferably over a long period of time.

Basic Functionality Testing

First, the controller has a moderate amount of relatively performance insensitive functionality that could potentially regress. This includes things like reading GPIO inputs, configuring which inputs to stream, and communicating over the I2C bus. For these, I created a pyunit test suite communicating with the device over its USB-serial interface with pyserial. The suite setup function forces the platform controller to reset, so each case starts from a known state. From there, I created maybe a dozen tests which exercise each piece of functionality and verify that it works as expected.

Input Servo Pattern Generation

Next, the platform controller needs to be able to sample 8 servo channels with sub-microsecond precision. To verify this, I needed to generate 8 channels of data which were both a) easily verifiable as correct, and b) likely to turn up most of the likely causes of performance problems. My eventual solution was to create a very simple AVR program, running on a separate microcontroller using delay loops, which emits 8 channels of servo data with pseudorandom values from a known linear feedback shift register (LFSR) equation. I used a 16 bit LFSR, emitting only the upper 11 bits as the pulse width. Each channel was exactly one LFSR cycle behind the previous one in time, so that over a given 8 pulse cycle, you will have seen 8 successive values of the LFSR. This made it pretty easy for the receiver to figure out what the remaining 5 bits should have been, even if there were errors of several LSB (least significant bits) on several of the channels. For example, a few cycles of this pseudo-random data look like:

#        Channel pulse width in hex.
# Cycle   1   2   3   4   5   6   7   8
   1     370 1B8 0DC 46E 237 11B 48D 246
   2     1B8 0DC 46E 237 11B 48D 246 123
   3     0DC 46E 237 11B 48D 246 123 491
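
The core of the generator fits in a few lines of C. Here is a sketch using one standard maximal-length tap choice (not necessarily the exact polynomial I used), with the delay loops elided:

#include <avr/io.h>
#include <stdint.h>

/* Advance a 16 bit maximal-length Fibonacci LFSR (taps 16,14,13,11). */
static uint16_t lfsr_step(uint16_t s) {
  uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1;
  return (s >> 1) | (bit << 15);
}

int main(void) {
  uint16_t frame = 0xACE1;             /* any nonzero seed */
  DDRB = 0xff;
  for (;;) {
    uint16_t s = frame;
    for (uint8_t ch = 0; ch < 8; ch++) {
      uint16_t width = s >> 5;         /* upper 11 bits: 0-2047 LSBs */
      PORTB |= (1 << ch);
      /* busy-wait 1ms plus width * 488ns here */
      PORTB &= ~(1 << ch);
      s = lfsr_step(s);                /* next channel lags by one step */
    }
    frame = lfsr_step(frame);          /* cycle N+1 ch 1 == cycle N ch 2 */
    /* busy-wait out the remainder of the 20ms (50Hz) frame here */
  }
}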

This approach has a couple of benefits:

  • Each 50Hz cycle is independently verifiable. All you need to do is scan through all possible “hidden” 5 bits and see if any result in a close match across all channels. You can also see just how much error there is on each channel.
  • Every channel changes drastically at each cycle. Some problems could be masked by slowly time varying inputs. Since at each cycle, the input changes in a very random way, this isn’t a problem.
  • The emitter can run open loop. No test harness communication is needed with the servo control emitter whatsoever. It just emits the random 8 channels of servo data indefinitely.

I wired up this separate AVR to the platform controller on a breadboard, and first added a few simple tests to the pyunit test suite that checked just a couple cycles to verify basic consistency. Once that was working, an endurance test script streamed the input data (along with a representative sample of other data as well to simulate comparable load to the final system), checking each frame for consistency and the number of LSB errors.

Servo Output Performance

The ARR platform controller has the ability to either pass through servo inputs, or take commands over the USB-serial link. Both of these needed to be verified in a comprehensive way. Fortunately, after the input verification step above, I already had a microcontroller which could sample 8 servo channels with high accuracy and report them back over a serial link! So I solved this by just dropping another ARR platform controller onto my breadboard. The pyunit test suite was extended to communicate with both devices, and monitored the outputs of the device under test using the second system. The final test harness block diagram is below:

20121217-helicopter-platform-test-block.png

Overall System Performance

Using this complete system, the endurance test script verifies not only input performance over time, but output performance over time as well. In one sample run, I got the following results:

 Frames captured: 150,000 (about 50 minutes)
 Input Results:
   Invalid Frames:       0      - 0.00%
   Values w/ 1 LSB off:  5,601  - 0.47%
   Skipped Frames:       0      - 0.00%
 Output Results:
   Invalid Frames:       0      - 0.00%
   Values w/ 1 LSB off:  58,000 - 4.82%
   Skipped Frames:       0      - 0.00%

These results were taken while the platform controller was passing through input values to the output and streaming a full complement of I2C data at 100Hz. They are about as expected: sampling a single servo value results in a 1 LSB error about 0.47% of the time, and passing that channel through to the output and sampling it a second time results in a 1 LSB error about 4.82% of the time. Given that 1 LSB of error will not even be noticeable in this application, these results are just fine. Finally, for posterity, I’ve included an annotated picture of the completed test harness below:

20121217-arr-platform-test.jpg

ARR: Platform Controller

The first aspect of my autonomous helicopter project I tackled was to validate the feasibility of an AVR microcontroller based low level platform controller (more specifically, the AT90USB1286). The platform controller needs to provide the real time interface between the radio receiver, the servos, the primary flight computer, and a couple of additional sensors. More specifically, the platform controller has a number of responsibilities and requirements:

  • Receive up to 8 channels of servo commands from the radio receiver.
  • Emit up to 8 channels of servo commands to the various servos and motors on the helicopter.
  • Optionally allow the receiver commands to pass through to the output, or alternately put the outputs under computer control.
  • Communicate with a variety of I2C devices, notably the inertial sensors and a barometer.
  • Expose all of these sensors to the flight computer over a full speed USB connection.

A block diagram showing the interconnections in the current design is below.

20121209-helicopter-platform-control-block.png

The servo command manipulation is particularly challenging. As a refresher, most RC servos are controlled with a pulse width modulated signal, where a pulse of between 1ms and 2ms is emitted at 50Hz. 1.5ms commands the center position and the two extremes are at 1ms and 2ms. The Spektrum receiver on my Blade 450 claims to have 2048 steps (11 bits) of resolution. That indicates that each least significant bit (LSB) of command corresponds to (2ms - 1ms) / 2048 = 488ns. Given that most AVRs have a maximum clock rate of 20MHz, that works out to about 10 AVR clock cycles per LSB. In general, sampling an input with that level of accuracy would require a hardware capture register, but no AVRs have 8 separate inputs which can trigger captures.

Input Approach 1: Analog Comparator and Timer1 Capture

I explored two mechanisms for getting around that shortcoming. First, while the AVR does not have 8 separate inputs which can simultaneously be connected to a capture register, it can be configured to allow the analog comparator to trigger captures. Subsequently, the analog comparator can be driven by the analog multiplexer, which on the AT90USB conveniently has 8 inputs. The downside is that only one input can be selected at a time. This isn’t necessarily a showstopper, as most RC receivers actually drive the servo commands one after another in time. Channel 1 is pulsed, then channel 2, channel 3, and so on. Then when the time for another 50Hz cycle begins, the cycle starts afresh.
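
The register-level setup for this approach is only a handful of lines on the AT90USB. A sketch (the threshold voltage on AIN0 and the edge bookkeeping are left out):

#include <avr/io.h>

/* Route servo channel 'ch' (on ADC0-ADC7) to the analog comparator's
   negative input via the ADC mux, and let the comparator output drive
   Timer1's input capture. AIN0 carries the threshold voltage. */
void capture_select_channel(uint8_t ch) {
  ADCSRA &= ~(1 << ADEN);   /* ADC off, freeing the mux for the comparator */
  ADCSRB |= (1 << ACME);    /* comparator negative input = ADC mux output */
  ADMUX = ch & 0x07;        /* select the channel to watch */
  ACSR |= (1 << ACIC);      /* comparator output triggers input capture */
  /* TCCR1B's ICES1 bit is then toggled between captures to time both
     the start and the end of the pulse. */
}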

My experiments with this approach were mixed. The accuracy was very good: since the capture was done by the AVR hardware, I could sample at 20MHz exactly when each pulse started and stopped. There were downsides though. For one, the receiver servo outputs had to be connected in exactly the correct order. If they weren’t, then the wrong pin would be selected while it was being pulsed. While the pin orderings will be fixed in this application, the firmware may be useful in a more general sense later, and the need to whip out a scope to find out which order the pulses are emitted in could be a little burdensome. Second, and more critically, all of the processing for this approach was done in a main loop, including switching which input pin was being sampled. This was done to minimize the amount of time with interrupts disabled, in order to support accurate servo pulse outputs. Because of this, occasionally the main loop would not complete a full cycle between when pulse N stopped and pulse N+1 started, and the pulse would be missed, along with the rest of the pulses that cycle. Similarly, it requires that disconnected channels be placed all together at the end. While these limitations may not be showstoppers, I wanted to see if I could do better.

Input Approach 2: Pin Change Interrupt

The second approach was to sample the servo inputs by placing them on the AVR’s 8 bit pin change interrupt port and configuring that interrupt. The interrupt handler samples the current value of a timer, re-enables interrupts, then samples the state of the input port. Both values are stored into a ring buffer for consumption by the main loop. This approach requires about 16 instructions in the interrupt handler before interrupts are re-enabled, which could introduce ~1 LSB of jitter into the output pulses if the timing is unfortunate. However, it can handle servos connected in any order, and can handle any input being disconnected. This approach seemed relatively promising, and my experiments proved it out with roughly the above properties.
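
A sketch of that handler, assuming the 8 inputs are on PORTB (PCINT7..0) and Timer1 free-runs at the CPU clock; the names and buffer size are illustrative:

#include <avr/interrupt.h>
#include <avr/io.h>
#include <stdint.h>

#define RING_SIZE 32  /* power of two, so wrapping is a cheap mask */

struct edge { uint16_t stamp; uint8_t pins; };
static volatile struct edge ring[RING_SIZE];
static volatile uint8_t ring_head;

ISR(PCINT0_vect) {
  uint16_t stamp = TCNT1;  /* grab the timer first to bound input jitter */
  sei();                   /* let the servo output ISR preempt the rest */
  uint8_t pins = PINB;

  uint8_t head = ring_head;
  ring[head].stamp = stamp;
  ring[head].pins = pins;
  ring_head = (head + 1) & (RING_SIZE - 1);
}

/* The main loop drains the ring, pairing rising and falling edges per
   channel to recover each pulse width. */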

Servo Outputs

Next, the AVR needed to generate the 8 channels of output in an accurate way. The technique that ended up working well for me was to, in the main loop, start a servo pulse output and begin a countdown timer in a small critical section, protected with interrupts disabled for 3 instructions total. Then, the interrupt handler is hand coded in the minimal amount of assembly needed to zero out all the outputs on the given port. By default, if you write:

ISR(TIMER3_COMPA_vect) {
  PORTC = 0x00;
}

gcc will generate 12 instructions of prologue and code: saving the status register, r0, and r1, xoring r0 with itself to generate a 0, storing r0 to PORTC, and finally restoring everything. What I ended up using instead was:

ISR(TIMER3_COMPA_vect, ISR_NAKED) {
  asm("push r16"     "\n\t"
      "ldi r16, 0"   "\n\t"
      "out %0, r16"  "\n\t"
      "pop r16"      "\n\t"
      "reti"         "\n\t" :: "M" (_SFR_IO_ADDR(PORTC)));
}

Notably, there is an AVR instruction to directly load an immediate into a register without affecting the status register, so the status register doesn’t need to be saved. Also, only one temporary register is required, whereas gcc prepares two for no good reason. This hand assembly requires a mere 5 instructions, significantly reducing the jitter coupled into the input sampling loop.

Next Steps

Next time, I’ll look at how I stress tested this solution to verify that it would actually work over a long period of time and through all the required scenarios.

Savage Solder: Transmission Backlash and Pavement Results

In part 1 and part 2 I looked at our progress in improving the acceleration performance of our Savage Solder autonomous racing car as applied to RoboMagellan type competitions. We left off with a reliable open loop model of acceleration versus current speed and throttle input, which we combined with some feedback elements to make a simple controller. However, upon running the controller for the first time, we saw some odd behavior when the throttle transitioned to and from braking. It was difficult to see at our standard sampling rate of 20Hz, so we increased our overall control loop rate to 50Hz (the rate at which the RC servos are updated) to get a better view. The behavior can be seen in the plot below:

20121125-backlash.png

Around time 15:32:24.9, the throttle command switches from a braking command to a positive acceleration. Then just before 25.0, the velocity starts a rapid ramp up for 4 time samples, starting from about 1.4 m/s and peaking at about 2.0 m/s, a 40% increase. After those 4 samples, it snaps back into place at around the previous value of 1.4 m/s. This type of velocity spike occurs on nearly every transition between accelerating and braking.

What is happening here is caused by backlash in the car’s transmission. There is a region of about 3-4cm where you can move the drive wheels without turning the motor, caused by the aggregated slop in all the individual linkages between the motor and the drive wheels. The car is not actually speeding up or slowing down during this event, but the motor (where our velocity data comes from) does. It speeds up until all the backlash is consumed, at which point it resumes traveling at the exact same speed as the rest of the car. This was confirmed by looking at auxiliary inertial measurement data, which indicated that the car itself did not accelerate or decelerate during these events.
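
I won’t detail the exact filter here, but one plausible shape for it is to simply hold the velocity estimate for a short window after every acceleration/braking transition, long enough for the backlash to be taken up. A sketch, with an illustrative window length:

#define BACKLASH_HOLD_CYCLES 10  /* roughly 0.2s at the 50Hz loop rate */

/* Freeze the motor-derived speed estimate for a fixed window after the
   throttle changes sign, while the 3-4cm of backlash is consumed. */
double filtered_speed(double motor_speed, double throttle) {
  static double last_throttle = 0.0;
  static double held_speed = 0.0;
  static int hold = 0;

  if ((throttle > 0.0) != (last_throttle > 0.0)) {
    hold = BACKLASH_HOLD_CYCLES;
  }
  last_throttle = throttle;

  if (hold > 0) {
    hold--;
    return held_speed;  /* spike in progress: report the held value */
  }
  held_speed = motor_speed;
  return motor_speed;
}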

Once suitable filtering was in place to mitigate the effects of backlash, our velocity control performed admirably on pavement test surfaces. Below is a plot of a demonstration run with a top speed of 15 mph and an acceleration of 3.0 m/s^2, while still decelerating in a controlled manner to bump a cone. With our current tuning, it starts decelerating about 8s before contacting the cone. We were just able to get enough space in our test area for it to hold 15mph briefly. Note the accurate velocity tracking during both hard acceleration and deceleration, as well as the smooth approach into the final bumping speed.

20121125-savage-fast-plot.png

And finally, video we took of the car when the above data was recorded:


Savage Solder: Measuring Acceleration Characteristics

In the previous installment, I looked at the problems we had with Savage Solder’s legacy PI velocity control loop when used for RoboMagellan type competitions. This time, I’ll look at how we measured the characteristics of the car, motor, and motor controller.

We wanted to get a better understanding of what the open loop behavior of the car was at a range of accelerations – and on a variety of surfaces. So, I fired up our primary application, which just happily logs data from all systems when the transmitter is in manual override mode. Then, I had a lot of fun driving around a bunch. Practically though, I tried to get instances of differing throttle values at as many different speeds as possible. So I tried to accelerate and decelerate at different rates as fast as I felt comfortable going in the space I had to drive. I repeated the process in some grass to get a feel for how the behavior differed there. Finally, I pulled all the data off the car and started the analysis.

My theory was that for a given terrain, speed, and throttle, the car would exhibit a particular amount of deceleration or acceleration. This assumes that the motor controller is stateless, but otherwise encompasses a range of possible motor controller implementations. At each instant in the data file, I measured the instantaneous acceleration by sampling the velocity a fractional time before and after. Then, I created a 2D grid over the possible car velocities and throttles, and for each grid cell accumulated the observed accelerations. The average acceleration for each cell was then plotted using matplotlib’s tricontour function. Below I have included a sample plot from pavement and grass driving.

20121123-acceleration-profile.png

Notably, this plot has no information about regions of braking action. It turns out that it was extremely hard to actuate a suitable range of braking actions from different speeds when driving by hand. Instead, we ended up running dedicated experiments where the car started from a known steady state throttle, then a constant braking throttle was commanded until the car stopped. I repeated this for a range of braking values, starting from as high a speed as I could manage in our back lot testing area. The below plot shows those values.

20121123-regen.png

The data pretty much matched the hypothesis. In fact, the acceleration could be accurately modeled using a simple electric motor model where the throttle input directly controls the voltage applied to the motor when powering forward. A similarly simple model matched the braking behavior. These models were enough to create a velocity controller which can accelerate and decelerate much faster, while still maintaining accurate velocity tracking. However, before we were able to make it work robustly, there was one other annoyance, motor backlash, which I’ll discuss in the next post.
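
As a sketch of what such a model buys you: once fit, it inverts directly into a feedforward term, leaving the feedback elements to clean up only the residual error. The constants below are placeholders, not our fitted values:

#define K_V 0.05  /* throttle per m/s to hold a steady speed (placeholder) */
#define K_A 0.02  /* extra throttle per m/s^2 of acceleration (placeholder) */

/* Invert the simple motor model: back-EMF grows with speed, so holding
   a speed costs K_V * v of throttle, and accelerating costs K_A * a more. */
double feedforward_throttle(double speed, double desired_accel) {
  return K_V * speed + K_A * desired_accel;
}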

Savage Solder: Improved Speed Performance

One of the areas of improvement we are working on for Savage Solder is velocity control. More specifically, we are improving the performance of the velocity controller at higher speeds and higher accelerations. In our runs during the spring of 2012, a simple PI controller on velocity let the car safely run at around 5-7mph with accelerations around 0.7m/s^2. Given that the underlying platform can exceed speeds of 40mph and accelerations of 4m/s^2, we wanted to get a little closer to the theoretical maximum in order to reduce event times. Here, I’ll look at why the original controller was limited, and what properties we wanted an improved controller to have.

20121121-throttle.png

First, a little background. The motor controller on Savage Solder is the stock HPI Savage Flux controller, configured with a sensored brushless motor. For input, it uses a standard RC servo control pulse width signal from 1ms to 2ms in duration at 50Hz. We logically treat it as having a range of throttle command values between -1.0 and 1.0. In forward motion, the default configuration has increasing positive values used to increase speed, and increasing negative values to apply braking forces. It can also move in reverse, but requires a small lockout window of stopped time, and in reverse, no braking is possible. Forward commands always result in attempting to move the wheels forward regardless of the current direction of motion. You can configure different behaviors, but these are the factory defaults, and most of the alternatives involve removing reverse capability entirely, which we require.

When driven manually, the throttle is very sensitive. Given the car’s very fast top speed, it takes very little movement on the throttle trigger to accelerate the car up to speeds that would require a wide open tarmac to navigate safely. Similarly, it takes some skill to not flip the car end over end during both acceleration and deceleration. The figure at the right gives a simple representation of these properties. Note that the magnitude of the increasing speed and brake power is not specified, and in fact the region near 0 throttle is also undefined.

The legacy velocity controller for Savage Solder was a simple PI controller, with a small state machine surrounding it to reset it and wait the required time upon switching from forward to reverse motion. In this formulation of a PI controller, the integrative term “finds” the appropriate throttle over time, while the proportional term improves the dynamic response by applying extra throttle based on the instantaneous error. Given a long time and a constant speed command, this controller had no problem converging on that speed with precision. The plot below shows some data from the spring where the car accelerates up to 8 mph. You can see that the actual speed converges on 8mph in a second or so.

20121121-naive-pi.png
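
For reference, the legacy loop was nothing fancier than the textbook discrete PI update sketched below (gains and the forward/reverse state machine omitted; names are illustrative):

struct pi_state { double integral; };

/* One 50Hz update of the legacy controller: the integral term slowly
   "finds" the steady-state throttle, while the proportional term
   reacts to the instantaneous error. */
double pi_update(struct pi_state* s, double target_speed,
                 double measured_speed, double dt,
                 double kp, double ki) {
  double error = target_speed - measured_speed;
  s->integral += error * dt;
  return kp * error + ki * s->integral;
}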

The problems with this solution became more apparent at higher accelerations. Since the integrative term requires sustained error in order to build up its output, the higher the acceleration, the larger the steady state error during the acceleration. This often isn’t much of a problem when speeding up, but when slowing down it can be a big deal. Our approach for the Robo-Magellan competition (like that of most other entrants) involved bumping the target traffic cones at low speed. If the speed controller lags in deceleration by several seconds, the car will just plow over the cone rather than stop in front of it. The other major problem is during the transition from a regime of constant acceleration to one of constant speed, especially when slowing down. The integrative term can take a long time to wind back down, during which time the car is completely stopped. In the 2012 Robomagellan event, this was magnified by the transition from pavement to tall grass right near the end of the deceleration into the first bonus cone. Finally, to mitigate these problems somewhat, we set the P gain larger than would otherwise be desirable. The result of this higher gain can be seen in the plot above as small oscillations in the speed and commanded throttle.

When designing an improved solution, we considered the two primary event types that we were planning to compete in:

  • Robo-Magellan: As mentioned above, tracking accuracy on deceleration is a must, otherwise the car could overrun the targets. High acceleration and deceleration rates are useful too, as targets are often in close proximity, not leaving much room to speed up and slow down.
  • Sparkfun AVC: In this event, deceleration accuracy is not as important, but top speed and acceleration are. You basically speed up once, and decelerate briefly for each of the 3 turns.

Next time I’ll look at the experimental path we took, and the next iteration of the velocity controller which we used to improve the performance.