
Low Resolution Vision System
Kevin Ross
Seattle Robotics Society
 
Copyright 1996, Kevin Ross. All Rights Reserved.

Introduction

This article describes a low resolution vision system with possible applications in robotics. I have long wished for the day when I could create a robot able to sense its environment using vision. By analyzing a visual image, a robot can be programmed to respond in a number of ways: it can watch for motion, detect lines, and perhaps attempt to recognize patterns.

The most common method of generating a visual image electronically is to use a video camera and some bit of hardware that scans a video frame into a bit map. Typically, the bit map will be created using a gray scale, where each pixel of the image is represented by some number of bits. This allows for a range of values rather than just black and white. A typical video capture system might use 16 to 256 shades of gray in an image. More advanced equipment will allow the system to construct color images. The more information, the more bits per pixel.

While the concepts of using video as the means for vision are relatively simple, the hardware is beyond my abilities, and beyond the abilities of most robotics hobbyists. On the software side, it also turns out to involve a fairly large amount of data and a large amount of code to process it. Since the typical robot I have been working with has only 2K of code space and 512 bytes of RAM, I was hoping for something much simpler.

I decided it would be fun to put together a simple way of gathering a low resolution image, and to see what sort of processing could be done with this minimal amount of information. The goals for this project were: cheap, easy to build, easy to program, and easy to explain. All of the above certainly ruled out video. For a first crack at vision, I wanted to create a system that would detect motion by watching for changes in light. Furthermore, I wanted the ability to track the movement. The following is the design I came up with.

 


The Design

I decided that this vision system would interface to a 68HC11E2, and should be easy for me to work with. This placed a few constraints on my vision system. I wanted to be able to gather the data and process it, so the total number of pixels was going to be limited. I settled on 8, since that's the number of A/D channels the 68HC11 has onboard.

I needed a sensor which could generate a range of values based on the light level. The obvious choices are solar cells, photo transistors, and cadmium-sulfide (CdS) cells. Solar cells are an interesting idea, and have pretty decent response times. However, the typical output range of these was only 0 to 0.5V, which isn't very good. I could condition the signal using some op-amps, but that made the part count go up and the idea complex. I also considered photo transistors. They are faster than CdS cells, but they don't have a very wide response range, and they are more expensive than CdS cells.

Cadmium-sulfide cells were the choice for me. They are cheap and easy to interface. In general, sensors that vary their resistance are much easier to interface than voltage sources: a simple voltage divider allows you to bias the sensor to an appropriate range. One potential drawback to CdS cells is their response time, which is relatively 'slow' compared to other sensors. However, for my purposes, they work just fine. By interfacing the CdS cells to the A/D port on the 68HC11, I ended up with an image that in theory supports 256 shades of gray. Not too shabby.

The next trick was to determine how to create a bit map. My solution was to arrange the CdS cells on a section of breadboard, then to place a lens in front of the array to focus the image. I managed to find a pair of lenses in a surplus catalog that worked out very well. Each lens was about 1.375" in diameter with a focal length of about 1.5". When put in front of the array of CdS cells at the appropriate distance, I was able to see a projection of the objects in the room right on the surface of the cells. By experimenting with a multi-meter, I was able to watch the image of my hand change the resistance of a CdS cell enough to be very noticeable. I also found that if I was careful about shielding the array so that only the light from the lens hit the cells, the response time and range were more than adequate. In a moderate contrast environment, I found that the digital value of my hand waving would vary by about 20 counts.

The next interesting consideration was the geometry of the array. It was my goal for the system to both detect and track motion in the environment. Therefore, I chose a slightly asymmetric layout for the array. Wanting a little more peripheral vision, I arranged a pattern such that the field of view is wider than it is high. This is the same geometry your eyes have, where you can see about 180 degrees horizontally but only about 100 degrees vertically.

Well, it sounded like a great arrangement at first, but it turns out that I was only able to get about 40 degrees horizontal and about 30 degrees vertical. The reason is that the geometry of the lenses and the arrangement of the cells I had didn't allow for more. As the subject of the view moved off to the right or left, it quickly ran out of focus and didn't give the desired effect on the cells. I considered going for a wider angle lens, but was concerned about the fisheye effect you often find in wide angle viewing. This would have distorted the image and made some of the line following experiments I wanted to try difficult. Given that I was constrained to using 8 CdS cells, I decided on the configuration shown in the figure. In practice, the field of view is quite adequate, especially when watching for movement at distances greater than a foot or two. I was also trying to be careful with the symmetry of the design. The five center cells of the left eye make up the core of the system, and the two outside cells of the left eye form the peripheral sensors. There is also a right eye. [They are reversed in the picture because we are looking at the layout from the front. When viewed from the perspective of the vision system, the sense is reversed.] Since there are 8 available cells, but the geometry only required 7, I added the extra eye behind a different lens. This gives us the ability to do really crude range finding, as described later.

Using the 'seven and one' layout, most of the sensing is done with the left eye, since it can determine the direction of light changes both vertically and horizontally. You will notice from the layout that there are four very ugly blind spots surrounding the center of the left eye. This is an unfortunate fact of life when using this configuration. I have considered trying a few more layouts, but have not had time to experiment with them. If square CdS cells were readily available, then a tighter arrangement could have been chosen. Using more CdS cells also would have helped quite a bit. I believe that 9 cells would work best, as shown in the figure below.

A better layout

This would reduce the total area of the blind spots. It would also configure the pixels of the image in a better layout for trying to recognize structures in the picture. The downside was that it would require a 4051 analog multiplexer to share one of the A/D inputs. This would have run over my parts budget, so I have not pursued this or any larger configurations (yet!).

The right eye of the robot turned out to be an interesting twist on this design. By placing the center cell of the left eye and the single cell of the right eye in the center of the focused image, you can use this eye to determine when you are close to an object. From a distance, the light from an object arrives nearly parallel, so both center cells should show relatively similar values. Now move the object closer and closer to the eyes, always keeping it in the center of the left eye. At some distance close to the eyes, you will find that the image of the object is still centered on the left eye, but is no longer focused on the right eye. The lens arrangement I constructed ended up showing this parallax effect at about 10" from the front of the lenses. This only seemed to work well when watching a light source, such as the little pocket flashlight I was testing with, but it is an interesting idea that might have other uses in the future.
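
To make the idea concrete, here is a minimal sketch of the range check in C. The actual program was written in Tiny4th, so this is only an assumed equivalent; the cell indices, the threshold, and the read_cell() helper are my own illustrative assumptions, not values from the project.

/* Crude parallax range check: compare the center cell of the left eye
   against the single right-eye cell.  Cell indices, the threshold, and
   read_cell() are assumptions for illustration. */
#define LEFT_CENTER  2    /* assumed A/D channel of the left eye's center */
#define RIGHT_EYE    7    /* assumed A/D channel of the right eye */
#define NEAR_DELTA   15   /* reading difference that suggests a close object */

extern unsigned char read_cell(int channel);  /* one A/D reading, 0-255 */

/* Returns nonzero when the object centered in the left eye appears to be
   inside the parallax distance (about 10" with the lenses described above). */
int object_is_near(void)
{
    int diff = (int)read_cell(LEFT_CENTER) - (int)read_cell(RIGHT_EYE);
    if (diff < 0)
        diff = -diff;
    return diff > NEAR_DELTA;
}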


The Hardware

I intended this project to use a 68HC11 as the main processor. Given that I have been using Marvin Green's 68HC11 based 'Bot-Board' in most of my recent designs, I based this one on it also. Any other 68HC11 based board with 8 available A/D ports should also work just fine. To my surprise, I ended up with very little in the way of additional electronic hardware. The component list uses a single resistor pack, a 2x5 header connector, and 8 CdS cells. Additional hardware was limited to 2 lenses from Electronic Goldmine, a Radio Shack breadboard, and a 2.5" x 3.5" piece of .25" plywood.

To satisfy my desire to have this be part of a robot, I decided to mount the assembly on a pair of servo motors. I used Futaba S-148 servos. One servo controls the vertical tilt of the 'head', and the other servo pans the head horizontally. The horizontal servo is mounted to the base, and the vertical servo is mounted on the shaft of the horizontal servo using one of the base plates and some hot glue. This allows the head to 'look' around or to track a moving object. Marvin's Bot-Board is already set up to run at least 4 servo motors, which means I have 2 for the head and 2 for the main drive wheels.

The connection for each of the cells is made as part of a voltage divider. The particular set of CdS cells I bought had a value in 'average' light of about 1000 ohms. As the light gets brighter, the resistance drops. A direct flashlight beam dropped the resistance to about 50 ohms. As it gets darker, it rises. Total darkness was about 100k ohms. I would suggest testing your CdS cell in what you would refer to as 'average' lighting conditions. The value of the resistor network should be close to the average value of your CdS cell. This will center the average light value at 128 on the A/D converter of the chip.
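
As a quick check of the divider arithmetic (my numbers, based on the 1k 'average' value above): with the fixed resistor and the CdS cell forming the two legs of the divider,

    Vout = Vcc * Rfixed / (Rfixed + Rcds) = 5V * 1000 / (1000 + 1000) = 2.5V

The 68HC11's 8-bit A/D maps 0-5V onto 0-255, so 2.5V reads as roughly 128. As the light level changes and the cell's resistance swings above or below the fixed resistor's value, the reading moves away from that center in one direction or the other.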

Each CdS cell is wired using the schematic shown, with each cell connected to one of the A/D port pins PE0-PE7. I arranged the cells on a small breadboard with solder pads on the back. I used a 1k x 9 resistor network to save space on the resistors, and a 10 conductor ribbon cable to connect the head to the Bot-Board.
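
For reference, here is a minimal C sketch of reading all eight cells through the 68HC11's on-chip A/D converter, including the read_cell() helper the other sketches in this article assume. The original project used Tiny4th, so this is an assumed equivalent; the register addresses are the 68HC11's defaults (register block at $1000), so verify them against your own board's setup.

/* Read the eight CdS cells (PE0-PE7) on a 68HC11.  Assumes the register
   block sits at its default base of 0x1000. */
#define ADCTL   (*(volatile unsigned char *)0x1030)          /* A/D control/status */
#define ADR(n)  (*(volatile unsigned char *)(0x1031 + (n)))  /* results ADR1-ADR4 */
#define OPTION  (*(volatile unsigned char *)0x1039)
#define ADPU    0x80   /* A/D power-up bit in OPTION */
#define CCF     0x80   /* conversion-complete flag in ADCTL */
#define MULT    0x10   /* convert a group of four channels */

void ad_init(void)
{
    OPTION |= ADPU;    /* power up the A/D; allow ~100us before converting */
}

/* Convert one group of four channels: group 0 = PE0-PE3, group 1 = PE4-PE7. */
static void convert_group(int group, unsigned char *dest)
{
    int i;
    ADCTL = MULT | (group ? 0x04 : 0x00);  /* writing ADCTL starts conversion */
    while (!(ADCTL & CCF))                 /* wait for all four results */
        ;
    for (i = 0; i < 4; i++)
        dest[i] = ADR(i);
}

void read_all_cells(unsigned char cells[8])
{
    convert_group(0, &cells[0]);
    convert_group(1, &cells[4]);
}

/* Single-channel convenience read used by the other sketches. */
unsigned char read_cell(int channel)
{
    unsigned char group[4];
    convert_group(channel >> 2, group);
    return group[channel & 0x03];
}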


The next step is to mount the lenses over the CdS cells. The goal is to have the lenses sharply focus the image directly onto the CdS cells. You will need to experiment with the correct distance for the lenses that you choose. I used .25" oak plywood cut to the same dimensions as the perf-board I mounted the CdS cells to. Carefully center the lenses so the center of the field of view is in the center of the 'eyes', and so the image is focused. I used 6-32 x 2.5" screws in each corner, and a double set of nuts on the perf-board to allow me to adjust the focus. This seemed to work very well. I determined where to center each lens by experimenting with a flashlight 6' away, focusing the bright light in the middle of the CdS cells. I then repeated the experiment at about 12" or so, to make sure the parallax trick worked. Because you may have different lenses, you will need to experiment with lens position.
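
If you would rather estimate the lens spacing than find it entirely by trial and error, the standard thin lens equation predicts it (this is basic optics, not something from the original project notes):

    1/f = 1/Do + 1/Di

where Do is the distance to the object and Di is the lens-to-sensor distance. With the f = 1.5" lenses described above, a flashlight at Do = 72" focuses at Di = 1/(1/1.5 - 1/72), or about 1.53", essentially right at the focal length. An object at Do = 12" focuses at about 1.71". That shift of nearly 0.2" is why close objects drift out of focus, and why the double-nut mounting that allows focus adjustment is worth the trouble.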

Mounting the head to the servo motors is very dependent on your implementation. I created my head using Lego Technics bricks. A very important step is to shield the CdS cells from stray light. This system works best if the only light allowed to reach the CdS cells is light transmitted through the lenses. I enclosed the entire head in black cardboard so the only light was from the lenses. This seemed to work well.

For pin assignments, I used the following pins on the 68HC11:

[Figure: 68HC11 pin assignments]

If you are interested in a schematic, here are two versions. I have printed the GIF file to a printer, and it appears to work fine. The PostScript file works with GhostScript, and should work on most PostScript printers.


The Software

The software to control the head was written using Karl Lunt's Tiny4th compiler for the 68HC11. I used a 68HC11E2 in the Bot-Board to get 2K worth of code space, which is plenty for experimenting. There are many different experiments that you can try. My first experiment was to get the head to move in the direction of the greatest change in light. The theory of operation is that if something moves, the light characteristics are going to change in that area. The bigger the change, the more apt the head is to look at it. The software is very simple.

begin
	Take a light reading from all cells; call it the Control
	begin
		Take a new light reading
		Calculate each cell's difference between the new reading and the Control
		Add the magnitude of each difference to that cell's DeltaResult
	until some period of time has expired
	Look at the DeltaResults and move the head toward the cell with the
	largest accumulated change
repeat
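
Here is the same loop sketched in C. The actual program was Tiny4th; read_cell() and move_head_toward() are hypothetical helpers standing in for the A/D read and the servo code, and the sample count is an arbitrary choice.

#include <stdlib.h>   /* abs() */

#define NUM_CELLS 8
#define SAMPLES   32   /* "some period of time": readings to accumulate */

extern unsigned char read_cell(int channel);   /* one A/D reading, 0-255 */
extern void move_head_toward(int cell);        /* pan/tilt toward a cell */

void track_motion(void)
{
    unsigned char control[NUM_CELLS];
    unsigned int  delta[NUM_CELLS];
    int i, s, best;

    for (;;) {
        /* Take a light reading called the Control */
        for (i = 0; i < NUM_CELLS; i++) {
            control[i] = read_cell(i);
            delta[i] = 0;
        }

        /* Accumulate the magnitude of each cell's change over time */
        for (s = 0; s < SAMPLES; s++)
            for (i = 0; i < NUM_CELLS; i++)
                delta[i] += abs((int)read_cell(i) - (int)control[i]);

        /* Move the head toward the largest accumulated change */
        best = 0;
        for (i = 1; i < NUM_CELLS; i++)
            if (delta[i] > delta[best])
                best = i;
        move_head_toward(best);
    }
}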

Some potential variations on this are to move the head toward the brightest or darkest area, or to give priority to lighter areas. The software is available in source file form.

Another variation is to have the robot follow lines. Using the 5 center cells of the left eye, you can follow a line fairly well, if the line is big enough. With my head, the robot could see a 1.5" black line on white paper fairly easily. You will have to experiment with the size of line your robot needs. Remember that this works best in a high contrast environment (black and white).
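
A sketch of the decision logic, again in C with assumed cell indices, a guessed darkness threshold, and hypothetical steering helpers:

/* Line following with three of the left eye's center cells.  A black line
   lowers the light level, so a low reading means "line under this cell".
   Indices, the threshold, and the steering helpers are assumptions. */
#define CELL_LEFT    1
#define CELL_CENTER  2
#define CELL_RIGHT   3
#define DARK         100   /* readings below this count as "line" */

extern unsigned char read_cell(int channel);
extern void steer_left(void);
extern void steer_right(void);
extern void drive_straight(void);

void follow_line(void)
{
    for (;;) {
        if (read_cell(CELL_LEFT) < DARK)
            steer_left();            /* line drifting left: turn toward it */
        else if (read_cell(CELL_RIGHT) < DARK)
            steer_right();           /* line drifting right */
        else if (read_cell(CELL_CENTER) < DARK)
            drive_straight();        /* line centered: keep going */
        /* all cells bright: line lost; a real robot would search here */
    }
}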


The Fun

Even with the minimal 'dumb' program installed, the LRVS robot is quite a bit of fun to play with. If left alone for a little while, it will eventually focus in on whatever is moving. As an experiment, I put the robot in front of a big screen TV and let it watch Star Trek for an hour. It was very entertaining to watch the robot follow Data across the 52" screen. It was very good at picking up major movements on the screen. It is equally entertaining for children, who love to play with it. The way the two lenses are mounted on the head gives the robot a certain amount of personality that is hard to resist. It's amazing how much attention this little project attracts wherever I bring it out for a demo.


Expanding on an idea

I believe that with a good set of lenses and a larger array of CdS cells (5 x 5 = 25 cells), this low resolution system could be the start of a very interesting system. With just that much better resolution, you should be able to follow smaller lines, or be able to find small patterns. What is needed are smaller CdS cells and an analog multiplexing system to address each row and column. Another idea might be to replace the CdS cells with photo transistors, which fit into a smaller space. I believe that 8 x 8 is also very possible using photo transistors and a set of analog buffers. A 64-pixel image actually gives you quite a bit of information to deal with. A sketch of how such an array might be scanned follows.
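
This is purely speculative, but one way the scanning might work: one 4051 per row feeding one A/D channel, with three bits of port B driving the shared select lines. The pin choices here are assumptions for illustration.

/* Scan an 8x8 sensor array through 4051 analog multiplexers.  Each row's
   4051 output feeds one A/D channel; port B bits 0-2 select the column. */
#define PORTB (*(volatile unsigned char *)0x1004)   /* 68HC11 port B, output */

extern unsigned char read_cell(int channel);        /* A/D reading, 0-255 */

void scan_array(unsigned char image[8][8])
{
    int row, col;
    for (col = 0; col < 8; col++) {
        PORTB = (PORTB & ~0x07) | col;   /* select this column on every 4051 */
        /* a short settling delay may be needed here */
        for (row = 0; row < 8; row++)
            image[row][col] = read_cell(row);
    }
}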

 


Resources

Marvin Green's Bot-Board can be found on his website.

More information about the Seattle Robotics Society.

To find the Tiny4th language, check out Karl Lunt's homepage.

If you have questions or comments, send me mail: kevinro@nwlink.com