
'A Real-time Laser Range Finding Vision System'

By: Kenneth Maxon

The author's articles of months past have focused on applying 32-bit high-powered processors to amateur robotics and, more recently, applying CPLD technologies to the same. Continuing this high-tech trend, this article covers the concepts and implementation of a Real-time Laser Range Finder based on CPLD processing of NTSC video, for use in small mobile robotics applications.


Through a graphical, model-based approach this article will introduce several of the basic physical concepts that govern this type of vision system. The focus will then shift to video systems, their organization and formatting. These two threads will then be combined into an implementation concept designed to let the reader see these concepts through the "eyes" of this sensor system. Next the article will introduce the underlying algorithm behind the system. Following this comes a detailed description, including schematics and code, of a particular implementation, after which application information will be presented. The article will wrap up with two discussions: the first on errors that contribute to false sensor readings, the second on future plans and some references.

It is the intent of the author, through the vehicle of this article, to provide a sound basis for future experimentation and development of vision systems. Additionally, this article offers real-world evidence that this technology is within the grasp of amateur roboticists, not just seasoned professionals.

For those readers interested in other approaches, such as the Game Boy camera or offline software implementations, this article still retains relevance through its coverage of geometry and parallax issues.

Section:1 (Physics & Parallax)


Laser w/field generator optics and NTSC compatible camera.  Camera focus is adjusted by screwing the lens in and out of the housing.

Webster's [Ref.9] defines parallax in the following two ways. 1. The apparent displacement of an observed object due to a change in the position of the observer. 3. The difference between the view of an object as seen through the picture-taking lens of a camera and the view as seen through a separate viewfinder. Both of these definitions apply directly to the graphical models the article will develop in this section. These models will flesh out the definitions and give the reader a firm introduction to the operation of the laser range finder.

In order to accomplish this the main components used to implement the system must be identified.

In image:1.1, the reader can view the two main players in this sensor. To the left of the image is the laser with beam generator optics attached. On the right is the NTSC (CCIR601) compatible video camera. These two components (and a little bit of electronics to be discussed later) make up the basis for the laser range finder sensor. With the image of these two in mind, the reader is now armed with all the information necessary to interpret the diagrams that follow.

The laser chosen operates at 671nm, well within the eye's visible range. In contrast to an infrared or other non-visible laser, this provokes a physiological response from the viewer (the blink reflex). Combined with the choice of a low power device, these two decisions afford significant safety benefits. An additional benefit of using visible light is that it greatly helps in system troubleshooting/debugging.

An overview, Diagram:1.1, shows how the laser is mounted on a mobile robot, 1/2" above and parallel to the floor. The camera is mounted 5" up with a downward angle of 22.5 degrees. As we'll see in diagram:1.4, the swept angle of the camera is 48deg. These two (camera & laser) can be identified in diagram:1.1 as the two black objects on the right side of the image. The backdrop grid lines (gray on white) that make up the walls and floor in this scenario are on 1" centers. In later orthographic projections the reader can take advantage of this fact to read system metrics directly from the diagram.

Spacing and mounting considerations for the laser and camera come from a convenient size and the space available on the author's robot. A later section will outline the relationship between mounting distance/angle and the effects they have on the sensor's returned values.


Scene setup overview. Camera:(upper right, black) Laser:(lower right, black) Camera Cone of Vision:(blue) Laser Field:(green) Target Blocks:(Red) Background Walls:(white w/grid)


Field Generator Optics (diffraction-type)

Diagram:1.2 demonstrates how a single coherent beam of laser light is diffracted into an angular field. These diffractive optics, when attached to the front of a standard laser generator, produce the field necessary for the vision system.

The next image, diagram:1.3, is a top-down orthographic projection. It shows the laser (black, right), the three targets (1" square blocks, red) and the field swept by the optical beam generator affixed to the front of the laser (green). To the naked eye (and also the video camera) the laser light shows up only as the broken line segments indicated by the heavy blue line. The white areas in the diagram depict areas where the sensor system does not have coverage.


Laser field fanning diagram demonstrates areas of missed coverage.

In diagram:1.4 a "cone of vision" shaded light blue is shown. Again, the three red block targets are visible, and the heavy blue line that traverses the entire image represents the edge view of the field swept by the optical laser field generator. An interesting note: within the background grid inside the camera's vision cone, a fine-line parabola traces out the intersection between the camera's vision cone and the far wall. Just as object detection in the image above is limited to the areas swept by the optical laser field generator, so too is detection of objects limited by the coverage of the camera cone. This applies to a robot in the following fashion. An object 1 foot away and six inches to the side of the centerline of the vision system will easily be detected. That same object, 4 inches away and 6 inches off the centerline of the vision system, will be out of range. There are also complex issues of shadowing by objects in the camera cone that the article will address in section.6, errors/artifacts, further below.


Side orthographic depicting cone of vision, minimum sensing distance and cone/backdrop intersection.


Range Discretization:
  • 6.75" = 0x00
  • 11.75" = 0x80
  • Infinity = 0xF0

The returned values can be extrapolated directly from the cone drawing and grid reference of Diagram:1.4 (above). Note: These are not linear.
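Since the mapping is nonlinear, one common way a host processor might convert raw sensor bytes to distances is piecewise-linear interpolation over a calibration table. A minimal sketch, assuming ranges stored in tenths of an inch; only the three anchor points above come from the text, and a real table would hold many more calibrated entries:

```c
#include <stdint.h>

/* Illustrative calibration table: raw sensor byte -> range in tenths
 * of an inch.  0x00 = 6.75", 0x80 = 11.75" per the text; 999 is an
 * assumed "beyond range" stand-in for infinity at 0xF0. */
typedef struct { uint8_t code; uint16_t tenths; } cal_entry_t;

static const cal_entry_t cal[] = {
    { 0x00,  67 },
    { 0x80, 117 },
    { 0xF0, 999 },
};
#define CAL_N (sizeof cal / sizeof cal[0])

/* Linear interpolation between the two calibration anchors that
 * bracket the raw code. */
uint16_t code_to_tenths(uint8_t code)
{
    if (code >= cal[CAL_N - 1].code)
        return cal[CAL_N - 1].tenths;
    for (unsigned i = 0; i + 1 < CAL_N; i++) {
        if (code <= cal[i + 1].code) {
            uint16_t span  = cal[i + 1].code   - cal[i].code;
            uint16_t delta = cal[i + 1].tenths - cal[i].tenths;
            return cal[i].tenths +
                   (uint16_t)((uint32_t)(code - cal[i].code) * delta / span);
        }
    }
    return cal[CAL_N - 1].tenths;
}
```

With a denser calibrated table the same routine approximates the true nonlinear curve closely.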

It would appear that a quick fix to the limited camera cone coverage close to the front of the robot would be to greatly increase the downward inclination of the camera. Care must be taken, as this quickly limits the maximum distance the sensor can measure: the top edge of the cone begins to dive into the floor.

It turns out that the better solution is to buy a camera with a wider cone angle. Another solution appears to be moving the camera back further on the robot; however, this introduces a special form of error discussed later (Section.6 Errors/Artifacts - "Axial cone misalignment"). As the reader will see later in this article, it is the perpendicular distance separating the camera and laser that comes into play, and distances measured by this system are relative to the camera.


Image without parallax as viewed parallel to the field of laser light.

The laser's view of the world: Diagram:1.5 demonstrates the view if the viewer's eye could look at the scene directly out of the laser beam. The important concept to take away from this diagram is that since the viewer's eye is in the plane of the laser field, there is no parallax between the field and the view. Parallax is the primary physical property that allows us to apply some simple mathematics so the vision system can sense depth.


Scene view through the camera.  The effects of parallax are clearly evident here.

Study this image carefully and make sure you understand the viewpoint being demonstrated: that of looking directly through the laser tube. Once the reader understands this, the view in the next diagram will demonstrate the properties of parallax more fully.

This next diagram:1.6 is identical to that depicted above (diagram:1.5), however we have shifted our point of view from looking down the laser tube to looking through the camera. Notice that the visible laser light (heavy broken blue segments), which formed a straight unbroken line in the diagram above, now appears as discrete, broken line segments. These two diagrams (1.5 & 1.6) demonstrate, graphically, the principle of parallax.

Without dropping into a mathematically deep description, we've covered the physics that govern the laser range finder system. Don't worry: before section.3 the pertinent equations will be developed for those readers not satisfied with graphical solutions.

This section started out with a pair of definitions for parallax. Reviewing how the laser line takes two different shapes between diagram:1.5 & 1.6, and thinking about the different viewpoints discussed in the definitions, the reader should begin to appreciate the effects of parallax.

Section:2 (Video & Filtering)

In a bit of a change from the flow above, this section of the article switches gears and looks at NTSC video (CCIR601) and a bit of formatting. This topic then flows back into the discussion above with a demonstration of how that formatted video overlays on top of the physical model presented earlier. After this the article explores the third system component, optical filters, and their application to the vision system.


Visible here are a few video lines of data which, for monochrome video, are simply an analog waveform representing the video level scanning across each video line, separated by negative voltages representing the horizontal syncs. [Ref.3]


Shown here are some of the non-visible video lines that carry the vertical retrace and transmit extraneous data not required for our application. The square low-going pulses are the horizontal sync signals between video lines.

Diagram:2.5, below, depicts a video screen with a number of video lines arranged so that the reader may refer to it throughout the discussions on video. The CCIR601 video specification (1982) defines two video formats [Ref.1]. The first is 480 active lines in a 525-line system, which is commonly used for broadcast NTSC; the second is 576 active lines in 625-line (PAL/SECAM) systems. Active video lines are those visible on the display screen. The first (480 active lines) is the basis used for the video compression calculations in the MPEG-2 specification. Both systems use a horizontal luminance-sampling rate (pixels) of 720 samples/line. Since the standard also defines an interlaced format at 60Hz, the effective line count per field is the active count divided by 2 (240, 288). We will dispense with the discussion of chrominance here, as this system is implemented with black and white video.

An often-asked question is how to interpret the above information. The easiest analogy is to imagine an LCD screen with a horizontal resolution of 720 and a vertical resolution of 480. The information to draw on the screen is presented 60 times a second. The trick is that each time, only the odd or only the even lines of information are sent. First, lines 0,2,4,6,8, ... 478 are sent. Then, in the following field, lines 1,3,5,7, ... 479 are transferred. Two complete fields are therefore needed before the whole image is transferred, reducing the effective frame rate to 60Hz / [2 (every other field)] = 30 frames per second.
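The interlace arithmetic above can be condensed into a few lines. A trivial sketch (the struct and function names are illustrative):

```c
/* Fields arrive at the field rate, each carrying half the active
 * lines; a complete frame needs two fields, halving the frame rate. */
typedef struct { int lines_per_field; int frames_per_sec; } interlace_t;

interlace_t interlace(int active_lines, int field_rate_hz)
{
    interlace_t r;
    r.lines_per_field = active_lines / 2;   /* odd lines or even lines */
    r.frames_per_sec  = field_rate_hz / 2;  /* two fields per frame    */
    return r;
}
```

Plugging in the NTSC numbers (480 active lines, 60Hz) gives 240 lines per field and 30 frames per second, the figures used throughout this article.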

Since the data in each successive field is spatially different, the system used (and in fact the one the author will implement here) can treat the data as a video stream of [720x240x60Hz]. English translation: data is sent in two separate groups, odd lines and even lines. It is, however, important to realize that both the odd and even fields are sampled simultaneously in the time domain, and as such lead to an update rate of only 30Hz. This fact will be addressed in software, later.


Video screen format and line/data organization.

Diagram:2.4 depicts a single line of video data scanned from left to right across the screen/camera image. The negative-going pulse on the left, and again on the right, are the horizontal sync signals the video system uses to convey beginning/end-of-line information. In analog televisions these actually drive circuitry that moves the beam drawing the picture back to the left edge of the screen in preparation for drawing the next line. The image comes from Circuit Cellar magazine. [Ref.8] That article on image digitizing has excellent, concise coverage of video formats.


Details of a single video line within a frame.

As stated above, the number of video lines is 525, of which the active lines account for only 480. This means there are lines sent at the beginning and end of each frame that are not visible on the screen. These lines contain timing information such as the (legacy) vertical timing, as well as several lines now used to convey channel information, closed captions, electronic program guide data, etc. It is not important to understand or decode the data present on these lines in this application. Rather, understand that they are present, that they take the same amount of time as other lines in the video frame, and that they must be skipped over to get to the active video lines. Some of these lines are visible in diagram:2.3, above-right.


Depiction of rotated screen used in this sensor system.

With a rough feel for the information in the video stream in hand, the next diagram (right) shows an implementation detail. In the laser range finder vision system built by the author, the camera is turned 90 degrees counter-clockwise. This change in geometry has a specific effect on the perceived coordinate system, as follows:

Table:2.1 - Video Frame Format

  • Skip 25 lines of inactive video at the left (top) of the screen. (This step is independent of camera orientation.)
  • Video line numbering starts at the left side of the screen and progresses to the right side.
  • Video pixel numbering starts at the bottom of the frame and increases in a bottom-to-top traverse of the video line.

It may not be clear yet, but this turning of the video image will greatly simplify the task of finding our data within the image. More to follow below...

With this definition of the video system and its associated rotated coordinate system, the physical model developed in section.1 of the article can now be overlaid. The diagram below shows how the video coordinate system applies to the physical world layout. Study it carefully paying close attention to the mapping of the coordinate systems between the video and the real world.

Diagram:2.7 & 2.8 are the images that depict all of the coordinate mappings for this system. Take an extra moment to make sure you understand these mappings before proceeding. To aid in this thought process, locate pixel-720 of line-480 in the diagram.
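As a sketch of these mappings, a small helper can convert a rotated (line, pixel) index into upright screen coordinates per Table:2.1: lines advance left to right as 'X', pixels climb bottom to top as 'Y'. The function name and 1-based indexing are assumptions for illustration:

```c
enum { N_LINES = 480, N_PIXELS = 720 };   /* pre-downsampling format */

/* Map a 1-based (line, pixel) index from the 90-degree-rotated camera
 * to a 0-based upright screen coordinate with the origin at the
 * bottom-left.  Returns -1 for indices outside the active frame. */
int rotated_to_upright(int line, int pixel, int *x, int *y)
{
    if (line < 1 || line > N_LINES || pixel < 1 || pixel > N_PIXELS)
        return -1;
    *x = line  - 1;    /* line number -> horizontal position */
    *y = pixel - 1;    /* pixel (time along line) -> height  */
    return 0;
}
```

Pixel-720 of line-480 maps to (479, 719): the top-right corner of the upright image.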


Application of the rotated video screen to scene developed in section.1 (above).


Virtual view of the scene as depicted in the rotated video screen.

Diagram:2.8 renders the viewpoint looking directly through the screen at the scene presented above. Of course neither the camera itself nor the field of laser light could be seen in the real world.

In what follows, some of these items that are not really visible will be removed to clean up the image. Additionally, a quick review of optical band-pass filtering will be inserted to help tie all of the pieces together and convey how some of these items can be removed from the video image.

The next piece of the puzzle is optical filtering. Optical filtering is one of the front-line tools used to remove unwanted information from the video signal.

The particular type of filter used here is an optical band-pass filter. It has an extremely high 'Q', which means that it passes only a very small frequency bandwidth of light. In designing the system, a pass-band centered on the fundamental frequency of the laser was chosen. This means a 671nm pass-band filter is used with the 671nm laser and field generator optics. [Ref.7]

The optical band-pass filter is placed in front of the camera lens to block out reflected light from any source other than the laser itself. This approach is not, however, foolproof. Many light sources contain a percentage of the 671nm wavelength and will show up dimly in the output. Sunlight in particular contains a substantial amount of this wavelength. A more detailed list and explanation of errors associated with this sensor will follow in Section.6 - (Errors & Artifacts).

Optical band-pass Filters:


Optical band-pass filters

The High-Q (Also called narrow bandwidth) optical band-pass filters shown above are 50% down at only 7nm off center frequency.


Camera w/optical filter attached.

Here the camera is depicted with the optical filter (red) attached to the front.


Diagram of optical filter functionality.

Depicted symbolically here, the band-pass filter passes the 671nm wavelength of interest while blocking other spectral content. 671nm is a spectral line given off by lithium atoms.

The optical band-pass filter used comes from [Ref.2], call for a catalog of industrial optics:
Edmund Scientific
101 East Gloucester Pike
Barrington, NJ 08007-1380


Transmissivity Graph

This graph delineates the variables required to fully specify a narrow-range optical band-pass filter. In this application an extremely narrow FWHM (Full Width at Half Maximum) is desirable, although this is closely related to the cost of the filter. These filters range from $80 -> $200 for commercial grade and go much higher for laboratory quality.

The two diagrams following (2.12 & 2.13) show the relative effects of optical filtering in a perfect world! The left frame shows the image with our target blocks as seen in black and white video before filtering, and the right shows the output filtered to remove all frequency components other than those close to 671nm. Later, in the section on Electronics and Implementation, this article will review thresholding, which can clean up some of the mid-level noise that gets through the optical filter in a non-perfect world and would otherwise cause noise in the data.

In the right diagram (below) the reader can see the correlation between 'Y' position (pixel location within a line) and distance from the camera. This is the base conversion that allows this sensor to work. Remember that the screen is rotated and thus the video lines traverse bottom to top.


Diagram of video display without the optical band-pass filter.

Before Optical Filter


Diagram of the video display with the optical band-pass filter.

After Optical Filter

The information that the sensor needs to extract from the video image can be summarized as follows: The sensor needs to determine where within the video frame, spatially, the light from the laser field is imaged. From the diagram above we can infer a new coordinate system that identifies relationships between the laser light and the spatial content required by the sensor.

Coordinate transformation rules.

There is one more coordinate scaling procedure that, when applied, makes this system easier to quantize in a computer system. The video format we've been discussing in this section has 720 'pixels' x 240 'lines' x 60 fields/second. Although the line number ('X') is easily represented by a single byte-wide variable, the pixel count ('Y') requires numbers outside the range [0..255]. This burdens many systems with larger data types and more code/time to process, working against the system's speed requirements.

An easy solution is to change the timing structure that determines the sampling of data (pixels) along the video line. By simply using a slower clock and sampling the video line less frequently, the number of 'pixels' sampled can be reduced to 256. The active video portion of any given video line is 51.2us [diagram:2.4], and this value divided by 256 is 0.2us. A 5MHz clock gives exactly a 0.2us (200ns) sampling period, more than close enough for our uses.
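The clock arithmetic works out as follows (a trivial sketch; the helper name is illustrative):

```c
/* Pixel clock (MHz) needed to fit `samples` pixels across the active
 * portion of a video line: 256 samples over 51.2 us works out to
 * 5 MHz, i.e. a 0.2 us (200 ns) period per pixel. */
double pixel_clock_mhz(double active_line_us, int samples)
{
    return (double)samples / active_line_us;
}
```

This is why the 256-pixel format and a 5MHz clock fit together so neatly: the division leaves no remainder to accumulate across the line.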

Note: An additional benefit is that 5MHz is easily obtained from an existing clock in the system which will be presented later.

Table:2.2 - Rule Base

  1. # of Lines = 240
  2. # of pixels = 256
  3. Sampling frequency = 5.000 MHz
  4. X=Line count from beginning of screen
  5. Y=Pixel count(time) from beginning of line.


Definition of variables used in the parallax equation.

This article focuses on a graphical model approach to determining range information from the returned data. As such I've put quite a bit of time and effort into developing the diagrams presented here to illustrate this model. I know however that there are those of you out there yelling for the mathematics behind the model to arrive at an exact answer. Working from a triangulation basis [Ref:4] and applying it to the graphical model presented here diagram:2.13 can be used to describe the constraints of the middle pixel in the screen.

The equation that then governs the range to the middle pixel on the screen is given in Equation:2.1. Understand that in these examples, Vert is the vertical separation between the laser and the camera, which in this example is 4.5". **Remember, Range is the distance from the camera, not the laser!


Simplified parallax equation that relates the geometric variables in the sensor.

Note: this is not the general or open form solution for the parallax calculation presented in the reference. Here it has been greatly simplified due to specific constraints placed on the system by design. One of the simplifying assumptions made is that the distance measured for Vert in Equation:2.1 is perpendicular to the plane generated by the laser. Also, this equation governs only that data returned by the central pixel sweep (video line) lying directly in the middle of the screen. Other scaling factors are required to correlate the data that comes from other video returns. In the applications section below I'll discuss calibration for other "off-centerline" sensor returns.

The plot depicted in diagram:2.14, below, shows the values that will be returned as the angle "theta" (refer to diagram:2.13 for this variable's setup) sweeps from 0 to 90 degrees. The range information along the 'Y' axis of the plot is in inches. What a wonderful distribution of data looking out to 50". The exciting part of this image is how many points lie right at the closest edge of the vision cone and how closely they are spaced. The model also shows returns out at 300+ inches, although in implementation it remains to be seen whether the laser's return carries enough power through the optical filter and threshold function. Of course the returns near 90 degrees are infinitely far out, as the function is asymptotic at theta = 90deg.

The correlation important to the reader is that each value or 'pixel' along a video line is at a specific geometry or angular relationship that can be looked up in diagram:2.14, below.


Graph depicting the relationship between angle below the horizontal and the range returned by the sensor.

Generated w/Mathematica

Section:3 (Algorithmic Processing)

Up to this point the article has reviewed the physics and parallax aspects of the system followed by video and filtering. At this point all of the tools are present to review the algorithm that implements the vision system. Following this, the implementation details including an electrical overview will be presented which will lead into a discussion of applications.

In section:2 the system resolved down to the two definitions given in Table:2.1 and Table:2.2.

Once the system has been defined down to such a low level the development of an algorithm to process this information simplifies quite a bit.

Listing:3.1 - The algorithm:

  1. Reset video line counter. ('X')
  2. Reset pixel counter. ('Y')
  3. Store default data value for this line. (no data found)
  4. Compare input to see if it contains laser information.
    • If laser is present, replace "no data found" with the current 'Y' location & store.
    • If laser is present, goto #6 below.
    • If laser is not found, increment pixel counter and goto #5 below. ('Y++')
  5. Compare the pixel counter to the end of line. ('Y < 255')
    • If the end of line is reached, go to #7 below.
    • If not the end of line, go to #4 above.
  6. Compare the pixel counter to the end of line. ('Y < 255')
    • If the end of line is reached, go to #7 below.
    • If not the end of line, loop back here and wait at #6.
  7. Increment line counter. ('X++')
  8. Compare line counter to Max Lines. ('X < 240')
    • If last line, goto #1 above. (begin the next frame)
    • If not last line, goto #2 above.

A quick English translation: There are two counters initialized to zero. The first counter counts 'Pixels' or 'Y' units up the screen. When/if the laser beam is detected, the value of this counter is stored as the return value for this line and the system waits for a new line to start. The second counter stores the current line number being evaluated which determines the storage location ('X' location) of the pixel on the screen.
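The two counters translate almost directly into software. A sketch in C, assuming a frame already captured into a buffer (the real system processes the stream on the fly in the CPLD); the threshold comparison stands in for the hardware comparator, and the NO_RETURN marker value is an assumption:

```c
#include <stdint.h>

enum { SCAN_LINES = 240, SCAN_PIXELS = 256, NO_RETURN = 0xFF };

/* Software sketch of Listing 3.1: for each line, record the first
 * pixel ('Y') at which the laser is detected, or NO_RETURN if the
 * line contains no laser return. */
void scan_frame(uint8_t video[SCAN_LINES][SCAN_PIXELS],
                uint8_t threshold, uint8_t range[SCAN_LINES])
{
    for (int x = 0; x < SCAN_LINES; x++) {        /* steps 1, 7, 8 */
        range[x] = NO_RETURN;                     /* step 3        */
        for (int y = 0; y < SCAN_PIXELS; y++) {   /* steps 2, 5    */
            if (video[x][y] > threshold) {        /* step 4        */
                range[x] = (uint8_t)y;            /* first hit wins */
                break;                            /* step 6: ignore rest of line */
            }
        }
    }
}
```

Breaking at the first hit mirrors step #6's wait-for-end-of-line: later, brighter pixels on the same line never overwrite the first detection.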

In control theory, one of the more complex issues is the timeliness of a correlated response to an action. [Ref.5 & 6] Here that issue comes into play, as data is received by the system at a 60Hz rate while data being sampled from the world is only retrieved at a 30Hz rate. As such, there is a time disjunction that the programmer must be aware of. One can imagine how it would appear to a small robot slowly approaching a wall while watching the returned values from this sensor: a stair-step result may be observed instead of the smooth linear approach expected. In the author's implementation, software was used to average the two returns. This could have easily been done in hardware. Additionally, as there is hardware feedback to the system, only odd or even field sampling could have been applied to reduce the effective data-sampling rate to 30Hz. There are many creative ways around this; it is simply an effect that must be understood and addressed.
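The averaging of the two field returns might be sketched as follows; the NO_DATA marker value and the handling of missing returns are assumptions, since the article does not spell them out:

```c
#include <stdint.h>

enum { NO_DATA = 0xFF };   /* assumed "no laser found" marker */

/* Merge one line's return from the odd and even fields.  If only one
 * field saw the laser, use it; otherwise average the two. */
uint8_t merge_fields(uint8_t odd, uint8_t even)
{
    if (odd == NO_DATA)  return even;
    if (even == NO_DATA) return odd;
    return (uint8_t)(((unsigned)odd + even) / 2);
}
```

Averaging smooths the stair-step effect described above at the cost of slightly blurring fast range changes between fields.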

The algorithm above is specifically written such that it does not infer a particular implementation. There are many solutions that could implement this algorithm effectively including but not limited to discrete electronics, microprocessor/software combinations, Digital signal processors and finally programmable logic (CPLD's & FPGA's).

If you've taken the time to read some of the author's previous articles on processor systems and programmable logic, you'll notice that I lean heavily towards the CPLD/FPGA solution. One design goal is to relieve the processor from high data rate real time tasks. (Listing:4.1 has more of the design considerations) In this design I've used a CPLD to build the system, however, let me stress that a fast, embedded processor could have been used to implement this same task. In the next section all of the implementation details will be presented:

Section:4 (Implementation & Electronics)


Design Criteria:

  • Fully self contained sensor system for range finding.
  • Interface must be able to support many generic processor families. (8051,6811,68332,ColdFire,TMS320)
  • All data processing to be done real time by the sensor. (i.e., no after-the-fact software processing required)
  • Power consumption concerns. (Battery operation)
  • Physical Size (less than 1.5" square for final implementation)
  • Safety. (The choice of laser equipment must be completely safe at range, including accidental or even prolonged exposure.)
  • Cost. (Not a consideration, this is a hobby :-)

At this point, the physics, video processing and algorithms used to perform parallax-based laser range finding in general have been reviewed. In this section, the specifics of the implementation and the electronics used in the author's system are described.

So far the article has defined several of the electrical, mechanical and optical components required to implement this system, reviewed here: the laser with its field generator optics, the NTSC camera with its optical band-pass filter, and the control electronics that process the video.

A conventional way to implement the third item (the control component) listed above would be to use a video capture system and a microprocessor that searches through the stored data after the frame has been captured and transferred to a buffer. In the system built by the author, another approach has been adopted. Specifically, the control unit takes the form of a CPLD running a state machine that tracks video in real time. It converts the video data to range data, applies the correlation developed above, and stores a small amount of processed range data into a buffer for use by a microprocessor. This off-loads all of the software overhead the processor would otherwise need to retrieve and then process the data. The following diagram presents a block-level overview of this process:


Block outline of the author's sensor system.

Following are the actual schematics for the author's implementation of the laser range finder:

First, the analog video front-end reference circuitry and laser detection circuitry. In the upper left of the schematic snippet, the 4-pin header that connects the video camera to the board is visible. The four signals present on this connector are a power and ground pair and a video and return pair. Another portion of this schematic snippet to pay close attention to is the temperature-compensated stable voltage reference. This reference, in combination with the LM311 comparator, provides a thresholding function. It keeps low-level signals that make it past the optical band-pass filter from causing false triggers.

A portion of the analog video front end processing in this first schematic snippet comes directly from a design presented in the Circuit Cellar magazine. [Ref.8] The circuitry used from the Circuit Cellar is located in the upper half of Diagram:4.1 only. All other circuitry and implementation details are original work by the author.

Diagram:4.2 - (Analog Front End & Threshold)

Schematic snippet, analog front end.

Diagram:4.3 - (Sync Separator)

Schematic snippet, sync detection circuitry.

Next, the horizontal and vertical sync detection circuitry. Note: Although the odd/even output and the burst output are connected to the CPLD as seen in this schematic snippet and that of the CPLD schematic below, they are not used in this design. For further reading see [Ref.3]

Diagram:4.4 - (Laser Drive)

Schematic snippet, laser drive circuitry.

This third image (diagram:4.4) shows the driving circuitry for the laser beam generator. The laser has internal power feedback mechanisms so all that must be limited is the maximum voltage applied.

The next image (diagram:4.5) is that of the CPLD itself. The connections shown to the upper right were used in debugging with a digital logic analyzer and are not implemented in the final version of the CPLD code, available below. Connections made to the 8-pin debugging header in the lower right corner were also used for debugging purposes and, again, are not needed in the final implementation.

As mentioned above, this system has only one physical clock hooked up. The second clock is a sub-multiple of the primary frequency, generated within the CPLD. It is routed externally and back into an open clock input pin. This takes advantage of the timing delay of the output drivers and the input capacitance, guaranteeing a minimum 7.5ns delay. Keeping this clock out of phase with the fundamental goes a long way toward reducing device power consumption, and also eases setup and hold considerations. A working knowledge of the complex issues of timing and routing in CPLDs becomes more important as designs of this complexity are approached. Still, this technology is well within the grasp of the amateur, although it requires the application of a bit of gray matter.

The full listing for the firmware that defines this CPLD is given in Listing:4.2 (below).

Diagram:4.5 - (CPLD & Test Points)

Schematic snippet, main page, CPLD implementation

Diagram:4.6 - (Dual Ported RAM)

Schematic snippet, dual ported RAM.

Last, but not least, this final schematic snippet shows the interface to the dual-ported RAM. All connections on the left side of the IC connect to the CPLD schematic, while the right-side nets connect to the host processor system. The exception is the /CER (right-side /Chip Enable) signal, which is generated by address decoding within the CPLD. The busy and interrupt functions are neither used nor implemented.
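The /CER decode is compact enough to model directly. The sketch below mirrors the `L_CE_R` equation in Listing:4.2 (below): the READ_MEM mask fixes A15..A12 at 0111, so any read or write in the 0x7000-0x7FFF window enables the RAM's right port. This Python is only an illustration of that one equation, not code that runs anywhere on the hardware.

```python
def l_ce_r(addr, l_rd=1, l_wr=1):
    """Model of the CPLD equation
         L_CE_R = !((ADDRESS == READ_MEM) & (!L_RD # !L_WR));
    addr is the 16-bit host address; l_rd/l_wr are the active-low strobes.
    Returns active-low /CER: 0 enables the dual-ported RAM's right port."""
    in_window = (addr >> 12) == 0x7          # READ_MEM: A15..A12 == 0111
    strobe = (l_rd == 0) or (l_wr == 0)      # '#' is OR in ABEL
    return 0 if (in_window and strobe) else 1
```

With this mapping, a host read at 0x7000 (l_rd=0) drives /CER low, while an access outside 0x7000-0x7FFF, or an address with neither strobe active, leaves the right port disabled.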

Now that we've looked at the hard implementation details, all that remains is the CPLD code required to drive the system. I've included the code as implemented below, removing only a few lines that interface to two devices not required/used in the Laser Range Finder portion of the vision system:

Link to Encoder Article, Getting Started w/CPLD's

Getting Started w/CPLD's

Do you want to learn more but don't know where to start with CPLDs? In addition to the information on this page, I've written a few other articles for the Seattle Robotics Society's online magazine, the Encoder. Click the read more link to the right, which will take you to an introductory article on CPLDs.

Listing:4.2 - CPLD Code:

TITLE 'v_1'

L_RESET,L_WR,L_RD PIN 39,40,41;
A15,A14,A13,A12,A1,A0 PIN 51,50,49,48,46,45;
D0 PIN 53;
D20CK PIN 20;
L_D7,L_D6,L_D5,L_D4 PIN 76,77,78,79 ISTYPE 'REG_D,BUFFER,KEEP';
L_D3,L_D2,L_D1,L_D0 PIN 80,81,82,83 ISTYPE 'REG_D,BUFFER,KEEP';


H,L,X,Z,C,U = 1,0,.X.,.Z.,.C.,.U.;

ONE =		[0,0,0];	//0
TWO =		[0,0,1];	//1
THREE =		[0,1,0];	//2
FOUR =		[0,1,1];	//3
FIVE =		[1,0,0];	//4
SIX =		[1,0,1];	//5
SEVEN =		[1,1,0];	//6
EIGHT =		[1,1,1];	//7
READ_MEM =	[0,1,1,1,X,X,X,X,X,X,X,X,X,X,X,X];
ON_LASER =	[0,1,1,1,X,X,X,X,X,X,X,X,X,X,0,1];
ON_HEART =	[0,1,1,1,X,X,X,X,X,X,X,X,X,X,1,0];
LEFT_DATA =	[L_D7,L_D6,L_D5,L_D4,L_D3,L_D2,L_D1,L_D0];
LEFT_ADDR =	[L_A7,L_A6,L_A5,L_A4,L_A3,L_A2,L_A1,L_A0];
SDGM =		[ST2,ST1,ST0];
ADDRESS =	[A15,A14,A13,A12,X,X,X,X,X,X,X,X,X,X,A1,A0];
CNTER =	        [CT1,CT0];



equations

OUT_10_CLK = CT0;
OUT_5_CLK = CT1;
LEFT_DATA[7..0].D = LEFT_DATA[7..0].Q + [0,0,0,0,0,0,0,1];
TEMP2 = CT1 & OC1;
LEFT_ADDR[7..0].D = LEFT_ADDR[7..0].Q + [0,0,0,0,0,0,0,1];
L_CE_R = !((ADDRESS == READ_MEM) & (!L_RD # !L_WR));
L_LEFT_WR = 0;
L_LEFT_RD = 1;

state_diagram SDGM

// Note: the state labels, the 'equations'/'state_diagram' headers, and the
// GOTOs marked 'inferred' were lost in transcription; they are reconstructed
// here from the ONE..EIGHT definitions and the surviving transitions.
STATE ONE:
        L_CE_L = 1;
        OC1 = 1;
        GOTO TWO;               // inferred
STATE TWO:
        L_CE_L = 1;
        OC1 = 0;
        GOTO THREE;
STATE THREE:
        OC1 = 0;
        L_CE_L = 0;
        GOTO FOUR;
STATE FOUR:
        OC1 = 0;
        L_CE_L = 0;
        GOTO FIVE;
STATE FIVE:
        OC1 = 0;
        L_CE_L = 0;
        GOTO SIX;
STATE SIX:
        OC1 = 0;
        L_CE_L = 0;
        GOTO SEVEN;
STATE SEVEN:
        OC1 = 0;
        L_CE_L = 1;
        GOTO EIGHT;             // inferred
STATE EIGHT:
        OC1 = 0;
        L_CE_L = 1;
        GOTO ONE;               // inferred

Since the link above takes the reader to the first of two articles I've written on designing with CPLDs, I'll not take the time here to review the code line by line.

The following are images of two actual implementations of parallax based laser range finders designed and built by the author.

Prototype Daughter Board

Image:4.1 - (Prototype vision System)

Vision system working prototype development daughter card.

The image to the right shows one of the author's early vision system prototypes. This board is designed as a daughter board that plugs into a processor system through the vertical stacking connector (front/left). Clearly visible on the right are the PLCC style Lattice ispLSI1032 CPLD chip and the IDT7130 Dual Ported RAM. The two silver cans (front left) are a pair of clock oscillators, which in later versions were replaced by a single clock divided down by the CPLD. Along the back edge of the board is a large amount of analog circuitry for front end conditioning of the video signal. This system works, but it has problems, and much of it has been redesigned/simplified in the advanced vision system presented below.

To read more on this early prototype version of the Laser Range Finding Vision System or other robotics related toys, visit the author's web site (Controler-#5) listed at the end of this article.

This system does not use the LM1881 sync separator IC. Instead it uses op-amps and comparators along with a handful of discrete components. Although it was a "neat" design, the LM1881 replaces enough components to warrant its use in the new design.


Board Technologies:

  • 33MHz 8-bit optimized processor
  • 64KBytes EPROM
  • 512KBytes FLASH
  • 32KBytes RAM
  • IDE hard drive interface
  • 2x serial ports w/RS232 level shift circuitry
  • Self-contained vision system w/4KBytes dual-ported RAM ***
  • 2x self-contained graphical display systems w/16KBytes dual-ported RAM (1.EL, 1.LCD) ***
  • 4x board-level pushbuttons for local control
  • 24 generic re-programmable IO lines for motion control and user input
  • 8 A-to-D ports
  • Vertical stacking expansion bus with all processor bus signals and IO
  • On-board switching power regulation circuitry to generate all required voltages (+5,+9,+12,-5,-18.5)

Advanced Vision Board

The image below shows the main board for the author's 'Advanced Vision System' project. This system has components for many different functions onboard. The actual vision system occupies only a fraction (~1/6) of the total board area, outlined in yellow (Image:4.2). A surface-mount, double-sided version could drastically reduce this size further. Of course I'll be building exactly that for implementation on my small robot.

The high component count in Image:4.2 below looks quite oppressive. Fear not: this board carries circuitry for many other functions, as indicated by the Board Technologies list above. The end goal of this development process is twofold. The first is to design, prototype, debug, refine and implement a working laser range finder. The second is to have this substantial investment in time and component resources prove functional in some way.

The intended application for this unit is 3-D object scanning when used in conjunction with a 3-D gantry system. A quick high-level pass is taken first, with data points sampled at relatively low frequency. From there a path or trajectory is calculated along which the camera is swept while collecting fine detail data. This two-step process is used because the system's "resolution," or distance sensitivity, is optimal at a fixed distance from its target. Keeping the camera close to this distance on its travel over the object ensures that the deltas from this fixed distance, as sensed by the laser range finding system, are of the highest accuracy. The hard drive interface requirement comes from the massive amount of high-detail data (~2MBytes) that can be collected in a relatively short time, which must be processed offline after the scanning is complete to build up a DXF model of the object being scanned.
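For a rough sense of the data volumes involved, a back-of-the-envelope calculation (my assumed NTSC figures, not numbers from the build): the sensor produces at most one range byte per active video line, so the raw return rate is modest, and the ~2MBytes figure corresponds to a few minutes of continuous fine-detail scanning.

```python
# Back-of-the-envelope data rate, assuming one range byte per scan line:
LINES_PER_FIELD = 240          # approximate active lines per NTSC field
FIELDS_PER_SECOND = 60
BYTES_PER_LINE = 1             # one range reading per video scan line

bytes_per_second = LINES_PER_FIELD * FIELDS_PER_SECOND * BYTES_PER_LINE
seconds_for_2mb = 2 * 1024 * 1024 / bytes_per_second
print(bytes_per_second)        # 14400 bytes/s
print(round(seconds_for_2mb))  # 146 s to accumulate ~2MBytes
```

At that rate the hard drive is not needed for bandwidth so much as for total capacity across many scan passes.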

Image:4.2 - (Advanced Vision System)

Board layout of advanced vision system (close-up).

The components outlined in yellow are those required for the vision system.

Image:4.3 shows the system in action. To the lower right are the laser and camera. The display shows a track of three fingers that were wiggled back and forth while being pulled away from the camera. Notice that they get smaller as they get further away!

Image:4.3 - (System in Action)

Vision system up and running.  The camera & laser are visible on the right.

Now let's move on to section 5 and look at some real-world scenarios.

Section:5 (Applications)


The article, to this point, has covered the design rules, implementation details and physical/geometric concerns governing the laser range finding vision system. What remains are some actual application/usage details, so this section is here to give you a taste. This will be accomplished by presenting a pictorial review of an application and its sensor returns.

Jumping right in, Image:5.1 delineates the actual physical setup built to verify the graphical models used throughout this article. At the bottom (through the Plexiglas) the reader can see the laser and field generation optics. In the middle of the picture (angled down) the camera is visible. The multiple arcs of drill holes allow the sensor to be angled downward in 15deg increments at several different heights.


The next image (5.2) displays a close-up of a real-world setup. Pictured here, 1/4" dia. PCB standoffs are arranged on graph paper intersecting the field of laser light. The lines on the graph paper indicate the direction from which both the laser and camera were aimed.

Next, image:5.3 shows a screen shot from the sensor prototype. The angle differs from the previous image, but the five PCB standoffs are clearly visible. Notice the great definition between these items and the backdrop.



G-Term software in zipped format.

Click for Link to:

G_Term.Zip (~195K)

One of the tools used in the development of the prototype version of this Laser Range Finding Vision System is a specialty communications program written by Alan Erickson. Alan's application, "G-Term", functions as a terminal program, taking data from the keyboard, sending it out the serial port of a PC, and routing return data to the text window on the screen. The new twist Alan has written into this code is that receipt of a special character turns on a graphics capture mode, which graphs the bytes following the key character as a line graph, with options to display data points as endpoints. With this powerful tool, high-level debugging became a snap. A screen shot of the application follows. Thanks, Alan!
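The sender side of this graph mode is trivial to sketch. The key-character value and framing below are hypothetical (the article does not specify them); the point is simply that one marker byte switches the receiving terminal from text display to plotted data.

```python
def send_graph(write, samples, key=b'\x01'):
    """Hypothetical sketch of feeding G-Term's graphics capture mode:
    emit the (assumed) key character, then the raw data bytes, which the
    terminal plots as a line graph instead of printing as text."""
    write(key + bytes(samples))

# Example with a fake serial port that just records what was sent:
sent = []
send_graph(sent.append, [10, 20, 30])
```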


Diagram:5.1 shows the data return values as plotted by G-Term. Again, the 5 PCB standoffs are clearly visible in the data. Applying the information presented in this article, see if you can identify which pin is which between the sensor data in diagram:5.1 and those depicted in image:5.2. Hint: the data in this diagram is reversed (left to right) from the image in the real world.

Section:6 (Errors/Artifacts)


Artifacts & Errors:

  • Reflection
  • Edge Effects
  • Radial Splay
  • Spherical Aberration
  • High Spotting
  • Ambient Lighting
  • Fanning
  • Axial cone misalignment
  • Obscuration
  • No Problem Found

This article would not be complete without a discussion of the artifacts and errors that contribute to numerical inaccuracies in the values returned. The list above overviews these errors, and the rest of this section is dedicated to explaining them and to approaches for minimizing them.

Reflection: One of the easiest to understand, reflections are a major contributing source of erroneous readings in the system. To some extent, carefully choosing a threshold value can remove reflections created by low-efficiency or partially reflective, shiny media. In these cases the reflected energy may be low enough that it does not penetrate the optical filter with enough signal strength to trip the laser detection threshold comparator.

The two major categories of reflective problems are energy that is reflected onto another object/surface, causing a false reading, and energy reflected directly back into the sensor, causing massive reading inconsistencies. As an example of the second case, imagine a robot approaching a Diet Coke can and getting steadily increasing values returned. At some point the laser light is reflected from the silver portion of the can directly into the camera, flooding the image with laser light and causing the sensor to report maximum readings across its span.

Edge Effects: This effect is a little more complex to envision. Picture a small robot approaching a tennis shoe in the middle of an open floor area. As the robot maneuvers around the edge of the tennis shoe, it gets a large number of high-valued sensor returns from the shoe's gently curving toe, which transition to extremely low-level returns from the wall far in the distance. Edge effects then come into play with the aspect ratio of the laser to the target. Specifically, when a large amount of laser energy on the close object sits in close proximity to faint returns from the far object, the stronger returns tend (optically) to bleed into the weaker area and slowly merge.

Radial Splay: Closely related to the two effects above, radial splay contributes to the errors returned by the sensor as well. This artifact is evidenced in two ways. The first is cheap optics on the laser end, which leave what appears to be a "chroma," or lighted cloud, around the actual intersection of the laser and its target. The second major cause of this effect is dirty optics. Fingerprints on the optics cause the light to diffract ever so slightly and enlarge the contact area. Upon close inspection these artifacts have the visual appearance of fringe lines.


An extreme exaggeration of spherical aberration.

Spherical Aberration: Try to use inexpensive components in your project and you may pay the price. One of the big differences between a $500 scientific charge-coupled camera and a $49 version is the quality of the corrective optics used. Cheap cameras tend to use cheap plastic lenses that suffer greatly from spherical aberration. Diagram:6.1 illustrates an extreme exaggeration of the problem. Later, in the future plans section (Section:7), I'll discuss ways (look-up tables or real-time hardware) to correct this. Currently it remains uncorrected.

High Spotting: This artifact, familiar to those roboticists who use infrared sensors, is a form of the reflection problem on surfaces with high specularity. When the sensor is exactly perpendicular to a surface, even one with highly diffuse properties, the laser light tends to bleed out and return values that are a few readings off. One partial solution is to limit the laser's overall intensity so that its optical power is not as strong. As this is really a form of reflection problem, using other sensors to ensure that the laser is never perpendicular to a large, flat target is another possible solution.

Ambient Lighting: A major source of error is sunlight causing false readings. Sunlight, as a form of ambient light, contains quite a large content of 671nm light. One reason for the high occurrence of 671nm light in nature is that this is the spectral line given off by the excitation of lithium (Li) atoms. Sunlight striking the front of the camera is an absolute 'No-No', and care must be taken to keep sunlight from striking the lens itself (a small 1"x1" shield can easily accomplish this). Secondly, sunlight striking highly reflective objects and reflecting directly back into the lens/filter system can have many of the same effects. Quite often the reflected light is at a lower intensity level, and careful threshold setting can remove some of the unwanted triggering; however, this remains an ongoing problem. Detecting this artifact is quite simple: turn off the laser for one video frame and look at the return values. If any signal is present whatsoever, this indicates an improper threshold setting or ambient lighting level problems.
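The laser-off sanity check just described is easy to express in host code. This is a sketch, not the author's firmware; it assumes the host reads one range byte per scan line out of the dual-ported RAM, and that a line with no comparator hit reports a sentinel value (0xFF here, my assumption).

```python
NO_HIT = 0xFF   # assumed sentinel for 'no laser detected on this line'

def ambient_light_problem(laser_off_frame):
    """Return True if any line still reports a hit while the laser is
    disabled, indicating an improper threshold setting or ambient 671nm
    light reaching the camera."""
    return any(reading != NO_HIT for reading in laser_off_frame)
```

A clean laser-off frame returns False; any stray trigger flags the condition so the robot can warn or recalibrate.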

Special Note On Ambient Lighting: Many electronic devices contain red power LEDs with partial coverage in the 671nm range. Considered ambient lighting and easily identified by looking around the room, these can also affect the returned readings from the sensor if the robot happens to "look" directly at one.

Diagram:1.6 - (modified)

Error/Artifact - fanning.

Fanning: Fanning is moderately complex to understand but easy to compensate for, and the existing design completely removes this potential problem. The explanation leans on a diagram we've already used. Repeated here for ease of understanding, Diagram:1.6 (with slight modifications) depicts the fan-out properties of the laser beam as it emanates from a single point source and is angularly spread to form a field. The artifact presents itself when a close object returns a reading low in the image (think about the image looking through the screen, which is turned sideways) but the laser field continues to fan out and produces a second return further in the distance on the same video scan line. The two readings that would trigger on the same video scan line are identified with yellow arrows in the diagram. This is compensated for through careful design of the algorithm, which locks out any further reading on a particular line after the first successful reading, until the end of that video line is reached. This has the added benefit of causing the sensor to only return readings from the closest objects on a particular vertical scan.
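The lockout rule is simple to model. Assuming the CPLD counts pixels along each scan line and latches on the comparator's first hit (the sentinel value and counter behavior here are my assumptions, not taken from the listing), the per-line behavior looks like this:

```python
def range_for_line(comparator_hits, no_hit=0xFF):
    """One video scan line: a pixel counter runs across the active line
    and the FIRST comparator hit latches the count; later hits on the same
    line (the fanning artifact) are locked out until the line ends, so
    only the closest return wins."""
    for count, hit in enumerate(comparator_hits):
        if hit:
            return count
    return no_hit

# A close return at pixel 12 masks a second (fanned) return at pixel 80:
# range_for_line latches 12 and ignores the later hit.
```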

Diagram:6.2 - (top down isometric)

Error/Artifact - axial cone misalignment.

Axial cone misalignment: This error is a special form of obscuration (see below) that comes into play due to geometric relationships imposed by axial mounting considerations between the camera and the laser field generator. Diagram:6.2 (right) illustrates the problem. Notice that because the laser is much closer to the object being sensed than the camera is, the incident laser rays are much steeper than the camera's cone of vision around the object. This can best be seen in the areas on the back wall that have coverage by the camera but not the laser.

A special note on axial cone misalignment: In a more general sense, this effect/error can occur any time the distance from the laser to an object is much less than the distance from the camera to that same object. Think about the case depicted in the examples of Section:1. Since the camera is mounted high up on the robot (relatively so), as the robot approaches an object the difference between the laser-to-object distance and the camera-to-object distance becomes exaggerated.

Obscuration: The text above (Section:1) mentions a problem with this type of implementation that comes from an obscured cone of vision when used on a mobile robot in the physical world. The diagram shows an oversimplified example that isn't hard to find in the real world: sofas, overhung counters, etc.


Error/Artifact - obscuration.

No Problem Found: After reading the large list of contributors to error build-up in readings made by this sensor system, one might wonder how it works at all. The proof is in the results: the system built by the author works well around the house and office, and even outside on grass, rocks, etc. Scan it across a computer monitor or a Diet Coke can, however, and erroneous readings will occasionally be returned. As with any sensor system used in robotics, once contributing errors are identified they can be engineered out of the solution or compensated for.

Section:7 (Changes In The Pipe)

When this project started, there were grandiose plans to implement a follow-on color version that, using a chip set like the SAA7111 by Philips, would also take color information into account. That plan has since been scrapped due to the extremely good performance of the system as it exists. There is, however, a little room for improvement on the existing design, as outlined in this section:

Spherical aberration comes into play with cheap NTSC video cameras. Diagram:6.1 (above in Errors/Artifacts) depicts the greatly exaggerated effects of this phenomenon. In reality it is anything but uniformly spherical. Quite often (as is the case with the cameras used in this project) there is a large, nearly Cartesian area in the middle of the image, with degradation around the periphery only.

It is quite simple to generate a look-up table from measurements taken with the running system and let software correct these errors. In a further implementation of this vision system I will take it a step further and move that look-up table into hardware. One reason for this is that the spanning space for value coverage of the table itself is 64KBytes in size! The diagram below outlines the hardware implementation. The (256x256) address to the EEPROM is formed by the 8 bits that would have been the address to the original dual-ported RAM, concatenated with the 8 bits that would have been the data to the dual-ported RAM. This generates 64KBytes of unique addresses that form a spanning space of the image spatially. The output of the EEPROM is then routed back to the original connection of the dual-ported RAM. These new connections are shown in green below and can be contrasted against the existing connections shown in diagram:4.1 above.
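The EEPROM image for that scheme can be generated offline in a few lines. The correction function below is a placeholder (identity mapping) — in practice it would come from calibration measurements — but the address packing matches the scheme just described: the original 8-bit address in the high byte, the original 8-bit data in the low byte.

```python
def build_correction_table(correct):
    """Build the 64KByte EEPROM image for hardware aberration correction:
    address = (line_addr << 8) | raw_range, data = corrected range.
    'correct' maps (line_addr, raw_range) -> corrected byte."""
    table = bytearray(256 * 256)
    for line_addr in range(256):
        for raw in range(256):
            table[(line_addr << 8) | raw] = correct(line_addr, raw) & 0xFF
    return table

# Placeholder calibration: no correction at all (identity mapping).
table = build_correction_table(lambda line_addr, raw: raw)
```

A real table would leave the large central Cartesian region untouched and only remap the periphery.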


First planned system change: Automatic table lookup for error correlation.

The last upgrade will be to replace the temperature-compensated threshold adjustment circuitry with a dedicated digital-to-analog converter. Adding this CPLD-controlled D-to-A converter empowers the laser range finder to automatically set its own levels in response to changing environments. Specifically, its output voltage is fed into the thresholding comparison circuitry.

An example of the need for this feature is seen every year at the Seattle Robotics Society's various events. Sensors that worked perfectly at home on the bench or in the basement hobby room suddenly fail under quartz convention-hall lighting. Under these conditions the ability of a sensor to auto-calibrate is paramount. The additional components and connections to accomplish this are highlighted in green in diagram:7.3, below.
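With the D-to-A in place, an auto-calibration pass could be as simple as the loop below: step the threshold up until a laser-off frame comes back clean. The interfaces (`set_dac`, `grab_laser_off_frame`) and the 0xFF no-hit sentinel are hypothetical placeholders for whatever the host firmware provides, not the author's actual code.

```python
def auto_calibrate(set_dac, grab_laser_off_frame, no_hit=0xFF):
    """Sketch: raise the comparator threshold via the DAC until a frame
    captured with the laser OFF contains no false hits, then keep that
    level as the operating threshold."""
    for level in range(256):
        set_dac(level)
        frame = grab_laser_off_frame()
        if all(reading == no_hit for reading in frame):
            return level            # lowest clean threshold
    return 255                      # pegged: something else is wrong

# Example with fakes: false hits disappear once the threshold reaches 100.
current = {"level": 0}
set_dac = lambda v: current.update(level=v)
grab = lambda: [0xFF] * 240 if current["level"] >= 100 else [0xFF] * 239 + [5]
```

Running this at power-up (and again whenever the ambient check trips) would let the sensor adapt itself to convention-hall lighting automatically.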


Second planned system change: Auto ranging exposure adjustment.

Of course there are additional levels of complexity that could be layered on this design, each giving an increasingly small delta return on performance with trade-offs in power consumption and board space. One must weigh these returns carefully against one of the driving design goals (refer to listing:4.1): keeping the system size under 1.5" square. Remember that cost was no object, as listed in the design criteria; size, however, is. ;-)

Section:8 (Follow Up ...)

The aim of this article was to demonstrate that vision-based sensors are within the grasp of the amateur robotics practitioner. I hope that by sharing the base information and the specifics of my implementation, I've accomplished this task. Although this level of technology is not for the absolute beginner, it is not out of reach of the seasoned amateur, and certainly not limited to high-end research labs.

Credit where credit is due: Special thanks goes out to Kevin Ross who inspired me to start this project a few years back. In a discussion after a Seattle Robotics Society meeting he explained to a small group the details of a similar system used on an early pathfinder prototype. From that initial verbal explanation, all of this work has flowed. -Thanks Kevin!


1.) Digital Television Mpeg-1, MPEG2 and principles of the DVB system, H Benoit, Publisher: Arnold 1997.

2.) Edmund Industrial Optics Catalog, Edmund Scientific, 2000.

3.) LM1881 Data Sheet, National Semiconductor, 1991.

4.) Sensors for Mobile Robots, Theory and Application, H.R. Everett, AK Peters, Ltd., 1995

5.) PID Controllers: Theory, Design, and Tuning, K. Astrom and T. Hagglund, Instrument Society of America, 1995

6.) The Art of Control Engineering, Ken Dutton and Steve Thompson and Bill Barraclough, Addison-Wesley-Longman, 1997

7.) Photographic Filters Handbook, Eastman Kodak Company, 1990

8.) Circuit Cellar Ink, the Computer Applications Journal, Vol 1-#6, Nov/Dec 1988

9.) Webster's College Dictionary, Random House, 1990

For those of you who would like to read about more of my projects or see some of the work going on in my workshop, follow the link below to some of my personal robotics stuff. There are many new pictures there, and several new additions to the Controllers section, which has been largely expanded and reorganized for ease of navigation.

Kenneth's Home Page


Drinkin' Diet Coke & Gettin' it done... Kenneth