
ABIONIN
Artificial BIOmorphic Neurocontrolled INsect

Design & Development

George Vastianos
Electronics Engineer BSc.
Dipl. from Electronics Department,
Faculty of Technological Applications,
Technological Educational Institute of Piraeus, Greece

www.vastianos.com

Table of contents

0. Abstract
1. The brain and the biological neurons
1.1 Biological and artificial neural networks
1.2 The mathematical model of the artificial neuron
1.3 Implementation and training of neural networks
2. Design and simulation of ABIONIN robot
2.1 Behaviors definition
2.2 Brain's neural network
3. Implementation of ABIONIN robot
3.1 Hardware implementation of brain's neural network
3.2 Robot's photo gallery
4. References

Keywords: Artificial intelligence & life, neuroscience, neural networks, biomorphs, mobile robotic systems, bio-robotics, micro-robotics.

0. Abstract

ABIONIN is a micro-robot that uses neural network technology to mimic, in a general way, the behaviour of a real biological insect. ABIONIN's brain is a neural network of 10 artificial neurons, all constructed from simple analog electronic components. ABIONIN is no more than a minimalistic robotic creature, yet it is smart enough to live autonomously and survive in a dynamically changing environment.

1. The brain and the biological neurons

The brain is a collection of interconnected neurons. Neurons come in a variety of shapes and sizes, depending upon their function and specialised structures. In general, however, all neurons work in the same manner and resemble each other, and the neurons of animals look almost exactly like those of humans.

The average number of neurons in the human brain is about 100 billion. Interesting statistics about neurons in humans and other animals show how widely this number varies between species; the average number of neurons in an octopus brain, for example, is only about 300 million.

Each neuron is a cell that uses biochemical reactions to receive, process and transmit information. A neuron's dendrite tree is connected to as many as a thousand neighbouring neurons. When one of those neurons fires, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through the processes of spatial and temporal summation. Spatial summation occurs when several weak signals are converted into a single large one, while temporal summation converts a rapid series of weak pulses from one source into one large signal. The aggregate input is then passed to the soma (cell body). The soma and the enclosed nucleus don't play a significant role in the processing of incoming and outgoing data. Their primary function is to perform the continuous maintenance required to keep the neuron functional. The part of the soma that does concern itself with the signal is the axon hillock.

If the aggregate input is greater than the axon hillock's threshold value, then the neuron fires, and an output signal is transmitted down the axon. The strength of the output is constant, regardless of whether the input was just above the threshold, or a hundred times as great. The output strength is unaffected by the many divisions in the axon; it reaches each terminal button with the same intensity it had at the axon hillock. This uniformity is critical in an analogue device such as a brain where small errors can snowball, and where error correction is more difficult than in a digital system.

Each terminal button is connected to other neurons across a small gap called a synapse. The physical and neurochemical characteristics of each synapse determine the strength and polarity of the new input signal. This is where the brain is the most flexible, and the most vulnerable. Changing the constitution of various neurotransmitter chemicals can increase or decrease the amount of stimulation that the firing axon imparts on the neighbouring dendrite. Altering the neurotransmitters can also change whether the stimulation is excitatory or inhibitory. Many drugs such as alcohol and LSD have dramatic effects on the production or destruction of these critical chemicals. The infamous nerve gas sarin can kill because it neutralises a chemical (acetylcholinesterase) that is normally responsible for the destruction of a neurotransmitter (acetylcholine). This means that once a neuron fires, it keeps on triggering all the neurons in the vicinity. One no longer has control over muscles, and suffocation ensues.

The complexity of the brain can only be conveyed by numbers... The rate of neuron growth during development of a fetus (in the womb) is about 250,000 neurons/minute. The diameter of a neuron varies from 4 to 100 microns, and the longest axon of a neuron can reach around 15 feet (the giraffe primary afferent axon, running from toe to neck). The velocity of a signal transmitted through a neuron varies from 1.2 to 250 miles/hour. Remember that these are averages, because there is a lot of variability in the nervous system!

One of the interesting facts is that there are roughly as many nerve cells in the human brain as there are stars in the Milky Way! Also, if we assume that one neuron is about 10 microns long and line up the 100 billion neurons of the human brain, the total length reaches 100 billion x 10 microns = 1,000 km, or about 600 miles! Remember that this is just an example, because neurons come in all different sizes.

1.1 Biological and artificial neural networks

A neural network can be either biological or artificial. It is a network of many neurons that work as very simple "processors", connected to each other through a (usually) complex pattern of connectivity. A biological neural network can be found in a human's or an animal's brain. Artificial neural networks try to mimic the power and strength of biological neural networks. In an artificial neural network the connections between the neurons carry only numeric data (i.e. no images or sound).

Artificial neural networks are used to solve nonlinear problems that are encountered in control, signal, and image processing systems. The computational power of an artificial neural net lies in the parallel architecture and the neuron's nonlinear I/O characteristics.

1.2 The mathematical model of the artificial neuron

The neuron models are medium independent. They may be built out of organic materials, electrical components, lenses and mirrors, hydraulics or dominoes. As complicated as the biological neuron is, it may be simulated by a very simple mathematical model displayed in the following figure.

In the above figure, all signals (action-potentials) appear at the unit's inputs (synapses). The effect (PSP) each signal has may be approximated by multiplying the signal by some number or "weight" to indicate the strength of the synapse. The weighted signals are then summed to produce an overall unit "activation". If this activation exceeds a certain threshold theta, the unit produces an output response. This functionality is captured in the artificial neuron known as the Threshold Logic Unit (TLU), originally proposed by McCulloch and Pitts in 1943.

We suppose that there are n inputs with signals x1, x2, ..., xn and weights w1, w2, ..., wn. The signals take on the values "1" or "0" only; that is, they are Boolean valued. (This allows their relation to digital logic circuits to be discussed.) The activation a is given by

a = w1x1 + w2x2 + ... + wnxn        [1]

This may be represented more compactly as

a = Σ wi xi  (sum over i = 1, ..., n)        [2]

The output y is then given by thresholding the activation

y = 1 if a >= theta
y = 0 if a < theta        [3]

The threshold theta will often be zero. The threshold function is sometimes called a step-function or hard-limiter for obvious reasons. If we are to push the analogy with real neurons, the presence of an action-potential is denoted by binary "1" and its absence by binary "0".
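As an illustration, a TLU following eqns (1)-(3) can be written in a few lines of Python; the weights and threshold below are purely illustrative (they happen to make the unit behave as an AND gate):

def tlu(inputs, weights, theta):
    """Threshold Logic Unit: fire (1) if the weighted sum of the
    inputs reaches the threshold theta, otherwise stay silent (0)."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= theta else 0

# Example: a 2-input TLU with weights 1 and 1 and threshold 1.5
# behaves as a logical AND gate.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, tlu((x1, x2), (1, 1), 1.5))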

The signals dealt with so far (for both real and artificial neurons) take on only two values, that is they are binary signals. In the case of real neurons the two values are the action-potential voltage and the axon membrane resting potential. For the TLUs these were conveniently labelled "1" and "0" respectively. Now, it is generally accepted that, in real neurons, information is encoded in terms of the frequency of firing rather than merely the presence or absence of a pulse. There are two ways we can represent this in our artificial neurons. First, we may extend the signal range to be positive real numbers. This works fine at the input straight away, but the use of a step function limits the output signals to be binary. This may be overcome by "softening" the step-function to a continuous "squashing" function.

One convenient mathematical form for this is the sigmoid

y = 1 / (1 + e^(-a/p))        [4]

Here p determines the shape of the sigmoid: a larger value makes the curve flatter. In many texts this parameter is omitted, so that it is implicitly assigned the value 1. The activation is still given by eqn. (1), but now the output is given by (4). Units with this functionality are sometimes called semilinear units. The threshold now corresponds to the activation which gives y=0.5. In (4) the threshold is zero, and if we require a non-zero threshold then it must be included by writing

y = 1 / (1 + e^(-(a - theta)/p))        [5]
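As a concrete illustration, here is a minimal Python sketch of such a semilinear unit following eqns (1), (4) and (5); the weights and parameter values are purely illustrative:

import math

def semilinear_unit(inputs, weights, theta=0.0, rho=1.0):
    """Semilinear unit: weighted-sum activation passed through the
    sigmoid of eqn (5). rho is the 'p' of the text: larger values
    flatten the curve; theta shifts the point where y = 0.5."""
    a = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-(a - theta) / rho))

print(semilinear_unit((1, 0), (0.8, -0.4)))   # activation 0.8 -> y ~ 0.69
print(semilinear_unit((1, 1), (0.8, -0.4)))   # activation 0.4 -> y ~ 0.60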

As an alternative to using real (continuous or analogue) signal values, we may emulate the real neuron and encode a signal as the frequency of the occurrence of a "1" in a pulse stream as shown below

Time is divided into discrete "slots". If the signal level we require is p, where 0<=p<=1, then the probability of a "1" appearing at each time slot will be p (If we require values in some other range then just normalize the signal first to the unit interval).

P(output = "1") = y = 1 / (1 + e^(-(a - theta)/p))        [6]

The output y is now interpreted as the probability of outputting a "1", rather than directly as an analogue signal value. Such units are sometimes known as stochastic semilinear units. If we don't know p, an estimate may be made by counting the number of "1"s, N1, in N time slots. The probability estimate p' is given by p' = N1/N. In the stochastic case it is possible to reinterpret the sigmoid in a more direct way. First note that it is an approximation to the cumulative gaussian (normal distribution, cf. z-scores in statistics). If we had in fact used the latter, this would be equivalent to modelling a "noisy" threshold; that is, the threshold at any time is a random variable with a gaussian (normal) distribution.

Thus, the probability of firing (outputting a "1") if the activation is a, is just the probability that the threshold is less than a, which is just the cumulative of the gaussian up to this value.
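The stochastic interpretation can be sketched in a few lines of Python: the unit below emits a "1" with the probability given by the sigmoid, and the firing probability is then estimated as p' = N1/N as described above (weights and parameters are illustrative):

import math, random

def stochastic_unit(inputs, weights, theta=0.0, rho=1.0):
    """Stochastic semilinear unit: the sigmoid output is treated as the
    probability of emitting a '1' in the current time slot."""
    a = sum(w * x for w, x in zip(weights, inputs))
    p = 1.0 / (1.0 + math.exp(-(a - theta) / rho))
    return 1 if random.random() < p else 0

# Estimate the firing probability p' = N1/N by counting "1"s over N slots.
N = 10_000
N1 = sum(stochastic_unit((1, 1), (0.8, -0.4)) for _ in range(N))
print("estimated firing probability:", N1 / N)   # ~0.60 for activation 0.4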

The artificial neurons discussed so far all evaluate their activation and output "instantaneously" - there is no integration of signals over time. To introduce dependence on time, we define the activation implicitly by its rate of change da/dt. This requires that the weighted sum of inputs be denoted by some other quantity. Thus, put

s = Σ wi xi  (sum over i = 1, ..., n)        [7]

and now put

da/dt = -alpha*a + beta*s        [8]

where alpha and beta are positive constants. The first term gives rise to activation decay, while the second represents input from the other neurons and may be excitatory. To see the effect of the decay term put s=0. There are now two cases.

i) a>0. Then da/dt<0, that is a decreases.

ii) a<0. Then da/dt>0, that is a increases.

Below you see the decay of activation from positive and negative values.
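A minimal numerical sketch of eqn (8), using simple Euler integration (the step size and constants are illustrative), shows this decay from positive and negative initial activations:

def integrate_activation(a0, s, alpha=1.0, beta=1.0, dt=0.01, steps=500):
    """Euler integration of da/dt = -alpha*a + beta*s (eqn 8).
    With s = 0 the activation decays towards zero from either sign."""
    a = a0
    trace = [a]
    for _ in range(steps):
        a += dt * (-alpha * a + beta * s)
        trace.append(a)
    return trace

decay_from_positive = integrate_activation(a0=1.0, s=0.0)
decay_from_negative = integrate_activation(a0=-1.0, s=0.0)
print(decay_from_positive[0], decay_from_positive[-1])   # 1.0 -> close to 0
print(decay_from_negative[0], decay_from_negative[-1])   # -1.0 -> close to 0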

1.3 Implementation and training of neural networks

A neuron can have any number of inputs; neurons in the brain can have as many as a thousand inputs. Each neuron also has a threshold value. If the sum of the weights of all active inputs is greater than the threshold, then the neuron is active. For example, consider a neuron with two inputs and a threshold of 0.5, where (say) the top input has a weight of +1 and the bottom input a weight of -1. When both inputs are active, the sum of the active weights is 0. Since 0 is smaller than 0.5, the neuron is off. The only condition which would activate this neuron is if the top input were active and the bottom one inactive. This single neuron and its input weighting perform the logical expression A and not B.

There is a variation on this model that sets the threshold to 0 on all neurons and adds an extra input that is always active. This extra input, called the "bias neuron", is weighted to account for the missing threshold. The two models are mathematically identical. The advantage of the second version is that it simplifies the mathematics involved in automatic learning and implementation, since there is only one type of variable to keep track of. Both of these simple models accurately simulate the most important aspects of the biological neuron, though they leave out some features such as temporal summation. A more complicated model could easily account for these, but for most requirements the simple models suffice.
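The following Python sketch shows both formulations side by side, using the illustrative weights from the example above (+1 for the top input, -1 for the bottom) and a bias weight of -0.5 chosen to account for the 0.5 threshold:

def neuron(active_inputs, weights, threshold):
    """Fire when the summed weights of the active inputs exceed the threshold."""
    total = sum(w for w, active in zip(weights, active_inputs) if active)
    return total > threshold

def a_and_not_b(a, b):
    # Threshold form: top weight +1, bottom weight -1, threshold 0.5.
    return neuron((a, b), (+1, -1), 0.5)

def a_and_not_b_bias(a, b):
    # Bias form: threshold 0, plus an always-active bias input weighted -0.5.
    return neuron((a, b, True), (+1, -1, -0.5), 0.0)

for a in (False, True):
    for b in (False, True):
        assert a_and_not_b(a, b) == a_and_not_b_bias(a, b) == (a and not b)
print("both formulations implement 'A and not B'")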

Since networks of such neurons are functionally complete (they can implement any Boolean function), they are capable not only of logic but also of storing and retrieving data from "memory". A neural network can store data in two formats. Permanent data (long term memory) may be designed into the weightings of each neuron. An example of this is the self-teaching network that will be discussed later. Temporary data (short term memory) can be actively circulated in a loop, until it is needed again.

Consider, for example, a neuron whose output feeds back to one of its own inputs. Activating the top input briefly will activate the neuron; since the output of the neuron feeds back to itself, there is a self-sustaining loop that keeps the neuron firing even when the top input is no longer active. Activating the lower input suppresses the looped input, and the node stops firing. The stored binary bit is continuously accessible by looking at the output. This configuration is called a latch. While it works perfectly in this model, a biological neuron would not behave quite this way. After firing, a biological neuron has to rest for a thousandth of a second before it can fire again. Thus one would have to link several neurons together in a duty-cycle chain to achieve the same result.
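A minimal sketch of such a latch, assuming a set input weighted +1, a suppressing (reset) input weighted -2, a self-feedback connection weighted +1 and a threshold of 0.5 (all illustrative values):

def latch_step(set_in, reset_in, prev_output):
    """One synchronous update of a self-feedback latch neuron.
    Weights (illustrative): set +1, reset -2, self-feedback +1; threshold 0.5."""
    activation = 1 * set_in + (-2) * reset_in + 1 * prev_output
    return 1 if activation > 0.5 else 0

output = 0
for t, (set_in, reset_in) in enumerate([(0, 0), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0)]):
    output = latch_step(set_in, reset_in, output)
    print(f"t={t} set={set_in} reset={reset_in} output={output}")
# The output switches on at t=1, stays on via the feedback loop,
# and switches off again when the reset input fires at t=4.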

Neural networks can be explicitly programmed to perform a task by manually creating the topology and then setting the weights of each link and threshold. However, this by-passes one of the unique strengths of neural nets: the ability to program themselves. The most basic method of training a neural network is trial and error. If the network isn't behaving the way it should, change the weighting of a random link by a random amount. If the accuracy of the network declines, undo the change and make a different one. It takes time, but the trial and error method does produce results.
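The trial and error method is easy to sketch in code. The toy example below trains a single two-input unit to behave as a logical OR gate by randomly perturbing one weight at a time and undoing any change that lowers the accuracy (the task, step sizes and threshold are illustrative):

import random

def network_output(weights, inputs, theta=0.5):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

def accuracy(weights, dataset):
    return sum(network_output(weights, x) == y for x, y in dataset) / len(dataset)

# Target behaviour for a single 2-input unit: logical OR.
dataset = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
best = accuracy(weights, dataset)
for _ in range(100_000):                        # cap the search, just in case
    if best == 1.0:
        break
    i = random.randrange(len(weights))          # pick a random link...
    change = random.uniform(-0.5, 0.5)          # ...and nudge it by a random amount
    weights[i] += change
    new = accuracy(weights, dataset)
    if new < best:
        weights[i] -= change                    # the accuracy declined: undo the change
    else:
        best = new
print("accuracy:", best, "weights:", weights)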

The network referred to here (a row of input neurons connected to a row of output neurons through just two neurons in the centre) is a simple one that could be constructed using such a trial and error method. The task is to mirror the status of the input row onto the output row. To do this, the network would have to invent a binary-to-decimal encoding and decoding scheme with which it could pass the information through the bottleneck of the two neurons in the centre.

Unfortunately, the number of possible weightings rises exponentially as one adds new neurons, making large general-purpose neural nets impossible to construct using trial and error methods. In the early 1980s two researchers, David Rumelhart and David Parker, independently rediscovered an old calculus-based learning algorithm. The back-propagation algorithm compares the result that was obtained with the result that was expected. It then uses this information to systematically modify the weights throughout the neural network. This training takes only a fraction of the time that the trial and error method takes. It can also be reliably used to train networks on only a portion of the data, since it makes inferences. The resulting networks are often correctly configured to answer problems that they have never been specifically trained on. As useful as back-propagation is, there are often easier ways to train a network. For specific-purpose networks, such as pattern recognition, the operation of the network and the training method are relatively easy to observe even though the networks might have hundreds of neurons...
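A compact sketch of the back-propagation idea, applied to the bottleneck task described above and assuming a 4-2-4 layout with sigmoid units (the layer sizes, learning rate and number of iterations are illustrative, not taken from the original article), might look like this:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# 4-2-4 encoder task: reproduce a one-hot input on the output row,
# forcing the information through the two hidden "bottleneck" units.
X = np.eye(4)
T = np.eye(4)

W1 = rng.normal(scale=0.5, size=(4, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 4))   # hidden -> output weights
b2 = np.zeros(4)

lr = 2.0                                  # learning rate (illustrative)
for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the output error back through the net.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # usually approaches the 4x4 identity matrix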

2. Design and simulation of ABIONIN robot

The prime goal of this project was the creation of an artificial insect that would be "biomorphic" (from the Greek words "bios", life, and "morphe", form). Because of this, the design of the robot focused on its external shape and on the definition of the behaviors that would make it mimic (in a basic way) a real insect.

Below you will find a detailed analysis of the robotic insect's behaviors and of the neural network that implements these behaviors in practice.

2.1 Behaviors definition

The behavior set that helps this robot mimic a real insect consists of two basic behaviors. The first behavior describes the relation of the insect with light. In nature we can find many types of insects with different "tastes" for light: some insects like the light and seek it out, while others hate it and run away from it...

ABIONIN has a "phototropic" behavior: it seeks out light. The term is used in biology to describe organisms that move towards light. The way that ABIONIN expresses its phototropic behavior is described in the following picture:

The second basic behavior of this artificial insect is to protect itself. This protection is achieved by avoiding any "object" that appears within the insect's environment and that, under special circumstances, might prove harmful to the insect. Of course, the object avoidance behavior must have the higher priority of the two behaviors. In the following picture you can see how ABIONIN reacts using these two behaviors while it senses the environment through its eyes and whiskers:

2.2 Brain's neural network

The above-described behaviors are implemented very well with a small neural network that works as the "brain" of this artificial insect. This neural network consists of 10 main neurons and one bias neuron (which is used for setting the thresholds and is not shown). Six of the main neurons are used for the sensors and the actuators, and only the last four do all the "intelligence" work. Below you see the brain's neural network with the 10 main neurons:

The software that was used for the development and simulation of the above neural network is "BrainBox" version 1.8 by N. Fraser. The neural network's netlist is the following:

BrainBox version:            1.8 
Total Number of Neurons:     11 

"Bias Neuron (invisible) #1"
"01"
"+1"
00, 00
0 

"Left Eye Input Neuron #2"
"01 02"
"+0 +1"
01, 04
0 

"Right Eye Input Neuron #3"
"01 03"
"+0 +1"
01, 06
0 

"L<R Luminance Compare Neuron #4"
"01 02 03"
"+0 -2 +1"
04, 04
0 

"L>R Luminance Compare Neuron #5"
"01 02 03"
"+0 +1 -2"
04, 06
0 

"Left Whisker Input Neuron #6"
"01 06"
"+0 +1"
01, 02
0 

"Right Whisker Input Neuron #7"
"01 07"
"+0 +1"
01, 08
0 

"Left Leg Control Neuron #8"
"01 04 07"
"+0 +1 -2"
07, 08
0 

"Right Leg Control Neuron #9"
"01 05 06"
"+0 +1 -2"
07, 02
0 

"Left Leg Output Neuron #10"
"01 08"
"+0 +1"
10, 02
0 

"Right Leg Output Neuron #11"
"01 09"
"+0 +1"
10, 08
0 

The format of the above netlist file is quite simple. The first two lines inform the software about the version of the BrainBox software that generated the netlist and the total number of neurons. All the remaining lines contain data about the neurons, five lines per neuron. The first line contains a remark identifying the neuron. The second line lists the numbers of the other neurons that the current neuron is connected to. The third line contains the weights for each of those connections. The fourth line contains the coordinates of the current neuron in the BrainBox window, and the fifth line defines the initial logic state of the current neuron when the simulation starts. Each five-line group is separated from the next (describing the following neuron of the net) by an empty line.
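For illustration, the five-line-per-neuron records could be read back with a small Python parser like the one below; the field interpretation follows the description above, and the filename in the usage line is hypothetical:

def parse_brainbox_netlist(text):
    """Parse the BrainBox netlist format described above: a two-line header
    followed by five-line records (name, connections, weights, position,
    initial state), one record per neuron, separated by blank lines."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    version = lines[0].split(":")[1].strip()
    total = int(lines[1].split(":")[1].strip())
    neurons = []
    body = lines[2:]
    for i in range(0, 5 * total, 5):
        name, conns, weights, pos, state = body[i:i + 5]
        neurons.append({
            "name": name.strip('"'),
            "connections": [int(n) for n in conns.strip('"').split()],
            "weights": [int(w) for w in weights.strip('"').split()],
            "position": tuple(int(c) for c in pos.split(",")),
            "initial_state": int(state),
        })
    return version, neurons

# "abionin.net" is a hypothetical filename for the netlist shown above.
version, neurons = parse_brainbox_netlist(open("abionin.net").read())
print(version, len(neurons), neurons[3]["name"])   # e.g. the L<R compare neuron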

An example of how this neural network works is shown in the following pictures. At the beginning the network starts with all the neurons inactive (1st picture). When the left eye input neuron receives greater luminance than the right eye input neuron, the L>R compare neuron is activated (2nd and 3rd pictures).

The activated L>R compare neuron drives and activates the right leg control neuron, because the left whisker neuron is inactive (4th picture). The activation of the right leg control neuron causes the activation of the right leg output neuron (5th picture).

If the left whisker neuron is activated, it inhibits the activation of the right leg control neuron, which in turn deactivates the right leg output neuron (6th to 8th pictures).
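The same walkthrough can be reproduced with a small Python simulation of the netlist. The sketch below assumes a synchronous update rule in which a neuron fires when the weighted sum of its active inputs is positive (an assumption about how BrainBox evaluates the net), and it clamps the sensor neurons to fixed values:

# Connection lists and weights copied from the netlist above
# (neuron numbers are 1-based; neuron 1 is the bias neuron).
# Sensor neurons 2 (left eye), 3 (right eye), 6 (left whisker) and
# 7 (right whisker) are clamped externally, so their self-connections
# in the netlist are ignored here.
NET = {
    4:  {2: -2, 3: +1},          # L<R luminance compare
    5:  {2: +1, 3: -2},          # L>R luminance compare
    8:  {4: +1, 7: -2},          # left leg control  (inhibited by right whisker)
    9:  {5: +1, 6: -2},          # right leg control (inhibited by left whisker)
    10: {8: +1},                 # left leg output
    11: {9: +1},                 # right leg output
}

def simulate(sensor_states, steps=4):
    """Synchronous update: a neuron fires when the weighted sum of the
    neurons it listens to is positive (assumed BrainBox firing rule)."""
    state = {n: 0 for n in range(1, 12)}
    state[1] = 1                          # bias neuron (its weights into the others are 0)
    state.update(sensor_states)           # clamp the sensor neurons
    for _ in range(steps):
        new = dict(state)
        for n, inputs in NET.items():
            new[n] = 1 if sum(w * state[src] for src, w in inputs.items()) > 0 else 0
        state = new
    return state

# Left eye brighter than right eye, no whisker contact:
s = simulate({2: 1, 3: 0, 6: 0, 7: 0})
print("L>R compare:", s[5], "right leg:", s[11])   # both fire -> 1 1

# Same light, but the left whisker touches an obstacle:
s = simulate({2: 1, 3: 0, 6: 1, 7: 0})
print("L>R compare:", s[5], "right leg:", s[11])   # compare fires, leg inhibited -> 1 0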

3. Implementation of ABIONIN robot

3.1 Hardware implementation of brain's neural network

Finally, the hardware implementation of the above-described neural network was achieved by using a variety of different electronic components and techniques. Phototransistors have been used as eye sensors, switches as whisker sensors, and small DC motors as leg actuators. The electronic circuit that implements the neural network is the following:

PART      DESCRIPTION
PTR1,2    VPT50A3
TR1,2     BC556
IC1       LM339
R1,2      0R5 / 0,5W
R3        20K TRIMMER
M1,2      DC MOTORS (MABUCHI RF-300C-14270)
SW1       SPST SWITCH
SW2,3     SPDT SWITCHES
BT1,2     1,5V AAA BATTERIES

The following table gives the correlation between the neurons of the network and the electronic part(s) used to implement each neuron.

NEURON NAME                      ELECTRONIC PART(S)
Right Eye Input Neuron           PTR1
Left Eye Input Neuron            PTR2
L<R Luminance Compare Neuron     IC1A
L>R Luminance Compare Neuron     IC1B
Right Whisker Input Neuron       SW2
Left Whisker Input Neuron        SW3
Right Leg Control Neuron         SW2
Left Leg Control Neuron          SW3
Right Leg Output Neuron          TR1, M1
Left Leg Output Neuron           TR2, M2

All the "intelligence" of the circuit is hidden behind the comparator stages IC1A and IC1B, which implement the phototropic behavior, and the switches SW2,3, which inhibit the operation of the motors to implement the object avoidance behavior. The resistor R3 is used to calibrate the robot to track straight and true towards light sources, and the resistors R1,2 are used to decrease the current through the motors (and hence their speed) and make the movement of the robot more linear.

The mechanical construction of this artificial insect is based on the head of a broken Climbatron robot, which was used as the shell, and on some paper clips, which were used to construct the six fake legs.

3.2 Robot's photo gallery

Front view Side view Top view
Bottom view Brain view Ready for feeding...

4. References

  1. Vastianos G., "ABIONIN - Artificial BIOmorphic Neurocontrolled INsect", (Draft Notes), (11/2001).
  2. Fraser N., "Introduction to neural networks", (21/09/1998).
  3. Kinser P.A., "Brains on the web: Comparative Neuroanatomy and Intelligence", Haverford College, (12/2000).
  4. Gurney K., "Computers and Symbols versus Nets and Neurons", Psychology Department, University of Sheffield.
  5. Grossberg S., "Adaptive pattern classification and universal recoding: 1. parallel development and coding of neural feature detectors", Biological Cybernetics, 23:121 - 134, (1976).
  6. Hopfield J. J. and Tank D. W., "Computing with neural circuits: A model", Science, 233:625 - 633, (1986).
  7. McCulloch W. and Pitts W., "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 7:115 - 133, (1943).
  8. Von der Malsburg C., "Self-organisation of orientation sensitive cells in the striate cortex", Kybernetik, 14:85 - 100, (1973).
  9. Nilsson N. J., "Introduction to Machine Learning: 4. Neural Networks", (1996).