
The Prez Sez

Kevin Ross

[This is a rerun of an article I wrote for the original paper Encoder. It seemed appropriate to run it again]

One of the treats I had while working at Microsoft was the annual trip back to MIT for the 6.270 LEGO robot competition. There are few things like intense, concentrated competition to sharpen one's engineering skills. One of the great parts about doing this for multiple years was that I was able to see, over time, some of the common mistakes that many people make while building robots. You might remember my last article about keeping it simple. That is certainly the single most important lesson learned from this experience.

The second most important lesson learned was something that I have also been very keen on in my professional life: Verification. By this I mean that all computer programs make assumptions about their current state, but many do not have provisions built in for verifying that the assumptions are correct. Robot programs are no exception, and indeed show this in spades.

When you write software on a PC, you have a lot of control over the state of the environment in which your program runs. You can, by careful analysis, ensure that the conditions for calling subroutines are correct, that data structures are kept intact, and that the user is unable to interact with your program in a way that would make your assumptions invalid. A wise and experienced programmer usually has a few tricks up his sleeve to ensure that the assumptions are valid. We put in special code, called Assert() statements, that during debugging verifies that the state of the software environment meets our assumptions. When an Assert() detects a problem, it stops right there so we can debug the problem.
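To make this concrete, here is a minimal sketch of what an Assert() facility might look like in C. This is not the exact code we used; the DEBUG symbol and the choice to simply exit() are assumptions made for the sake of illustration.

#include <stdio.h>
#include <stdlib.h>

/* A minimal Assert() sketch: in a debug build, report a failed
   assumption and stop right there so the problem can be examined. */
#ifdef DEBUG
#define Assert(cond)                                          \
    do {                                                      \
        if (!(cond)) {                                        \
            printf("Assert failed: %s (%s, line %d)\n",       \
                   #cond, __FILE__, __LINE__);                \
            exit(1);                                          \
        }                                                     \
    } while (0)
#else
#define Assert(cond) ((void)0)   /* compiled out of release builds */
#endif

Sprinkled through the code, something like Assert(ptr != NULL) costs nothing in the release build, but catches a broken assumption while you are still at your desk instead of at the competition.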

With robots, since you are interfacing with the cold harsh world, you have a more difficult time controlling the environment in which you operate. Not only do you have a software environment to take into consideration, you also have a physical environment that is much less predictable.

For example, suppose you have outfitted your robot with wheel encoders to determine, with a great deal of precision, how far your robot has travelled. According to your master plan, if the robot travels 0.25 meters forward, it will find an object that it can pick up. Your software carefully watches the wheel encoders, the wheels turn the appropriate amount, you stop, and you grab the object. Seems simple enough. However, by a twist of fate, the floor turns out to be dusty, and one wheel slips a little, putting you off course. When you get to where the object should be, it isn't.
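In code, the master plan might look something like the following sketch. The function and constant names (reset_encoders(), encoder_count(), TICKS_PER_METER, and so on) are hypothetical placeholders for whatever your own drive routines provide.

/* Hypothetical hardware interface, assumed for illustration only. */
extern void reset_encoders(void);
extern long encoder_count(void);
extern void motors_forward(void);
extern void motors_stop(void);
extern void close_gripper(void);

#define TICKS_PER_METER 1000L        /* assumed encoder resolution */

/* The naive plan: count encoder ticks, stop, and grab. */
void drive_to_object(void)
{
    long target = TICKS_PER_METER / 4;   /* 0.25 meters */

    reset_encoders();
    motors_forward();
    while (encoder_count() < target)
        ;                            /* wait until we think we are there */
    motors_stop();
    close_gripper();                 /* ...and hope the object is really there */
}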

From my experience watching 6.270 competitions, at this point the robot is lost. I am always amazed at what happens next. If the contestants are lucky, the robot will start backing up, run into some obstacle, and get stuck. If they are not lucky, the robot will start backing up and fall off the edge of the table (it happens often!).

What could have been done? Verification. The only sure way to tell whether something in the physical world has actually happened is to measure it. When the robot travelled forward to find the object 0.25 meters ahead, it should have stopped, sensed that the object was there, and then picked it up. If the object wasn't there, then something the program asserted is false, and the robot needs to recover. In a 6.270 competition, the robot is definitely lost at this point. A simple solution is to stop. A more complex one is to reorient and retry.
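Continuing the sketch from above, verification is one extra measurement before the grab. Again, object_detected() and reorient_and_retry() are hypothetical names, standing in for whatever sensor and recovery routines your robot actually has.

/* Additional hypothetical sensor and recovery routines. */
extern int object_detected(void);    /* e.g. a break-beam or touch sensor */
extern void reorient_and_retry(void);

/* The same move, with verification before acting on the assumption. */
void drive_to_object_verified(void)
{
    long target = TICKS_PER_METER / 4;   /* 0.25 meters */

    reset_encoders();
    motors_forward();
    while (encoder_count() < target)
        ;
    motors_stop();

    if (object_detected()) {
        close_gripper();             /* the assumption held: grab it */
    } else {
        /* The assumption failed: the object is not where we expected.
           The simple recovery is to stop; the more complex one retries. */
        reorient_and_retry();
    }
}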

The robot we built this year was actually quite robust:

When it was travelling, the robot would pass over black lines on the floor. We used downward-looking IR sensors to detect the edges of the lines. Verification that we were actually moving.

At each line we crossed, we carefully detected the edges and used them to align the robot. Verification that we were still going straight, with a chance to correct if we weren't.

Our manipulator had sensors to detect when our object was in position to be picked up. Verification that we reached our goal.

The manipulator had sensors to detect when it was closed, and positional sensors to verify that it had actually lifted the block a certain distance. Verification that we had lifted it far enough to place it on the scoring pedestal.

We had sensors to detect the limits of the robot's range of motion, which kept us from worrying about moving the manipulator too far. Verification that we were not breaking anything.

We used an assertion mechanism to stop in case of trouble. Verification that we were not going to back off the table!
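As a rough illustration of the last two items, here is the sort of check involved. As before, the sensor and motor routines are hypothetical placeholders rather than the actual code from our robot, and the Assert() macro is the sketch from earlier in this article.

/* Hypothetical manipulator interface, assumed for illustration only. */
extern void manipulator_up(void);
extern void manipulator_stop(void);
extern long manipulator_position(void);
extern int upper_limit_switch(void);
extern int gripper_closed(void);
extern void all_motors_stop(void);

/* Raise the manipulator, verifying the physical limits as we go. */
void raise_manipulator(long target_position)
{
    manipulator_up();
    while (manipulator_position() < target_position) {
        if (upper_limit_switch()) {  /* about to exceed the range of motion */
            all_motors_stop();       /* stop before we break anything */
            for (;;)
                ;                    /* halt: better lost than broken */
        }
    }
    manipulator_stop();

    /* Verify we are still holding the block after the lift. */
    Assert(gripper_closed());
}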

I encourage you to think about how to instrument your robot to verify that it is doing what you think it is doing. This instrumentation is partly in software, using assertion statements, and partly in hardware, to measure the physical assumptions. Hopefully, this will help you both debug your robot and keep it from harming itself (or others!).