BPN.class

The XOR applet is a little example that implements a 3-input XOR gate with a three-layer neural network (BPN.class).

Wait until a window appears on your screen. Once you have the window ...

You'll see a 'table' showing a completely wrong 3-input XOR gate, followed by the learning performance over time. The network is trained for 10000 sessions; every 1000 sessions the absolute error and the relative learning rate are printed.
Finally the trained network is checked and the 'table' is reprinted; this time it should show the right logic output. (A sketch of such a training loop follows the table below.)

Input     Output
F F F     F
F F T     T
F T F     T
F T T     F
T F F     T
T F T     F
T T F     F
T T T     T
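
If you want to reproduce the applet's training outside the browser, a minimal sketch using only the BPN.class methods documented below could look like this. The hidden-layer size and the tuning values (6, 0.5, 0.0, 1.5, 0.99) are illustrative guesses, not the applet's actual settings.

    // Hypothetical training sketch for the 3-input XOR table above.
    // The hidden-layer size and all tuning values are guesses.
    public class XorDemo {
        public static void main(String[] args) {
            final double F = BPN.DOWN;   // logic false
            final double T = BPN.FIRE;   // logic true

            BPN net = new BPN(3, 6, 1, 0.5, 0.0, 1.5, 0.99);

            double[][] in  = { {F,F,F}, {F,F,T}, {F,T,F}, {F,T,T},
                               {T,F,F}, {T,F,T}, {T,T,F}, {T,T,T} };
            double[][] out = { {F}, {T}, {T}, {F}, {T}, {F}, {F}, {T} };

            // 10000 learning sessions, reporting the absolute error every 1000
            for (int session = 1; session <= 10000; session++) {
                for (int p = 0; p < in.length; p++)
                    net.learnVector(in[p], out[p]);
                if (session % 1000 == 0)
                    System.out.println("session " + session
                                       + "  absolute error = " + net.absoluteError);
            }

            // reprint the 'table' with the trained network
            for (int p = 0; p < in.length; p++) {
                net.propagate(in[p]);
                System.out.println((in[p][0] > 0 ? "T" : "F") + " "
                                   + (in[p][1] > 0 ? "T" : "F") + " "
                                   + (in[p][2] > 0 ? "T" : "F") + " -> "
                                   + (net.outA[0] > 0 ? "T" : "F"));
            }
        }
    }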

Now you can save your well-trained network to a file (this does not work in Netscape: for security reasons the file browser will not appear) and use it in your own application with the help of BPN.class.
You can resume learning at any moment: click Start Learning again and training will continue from the last state you left it in.
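
In your own code the same save/restore can be done with saveNeuro() and loadNeuro() (documented below). In this sketch the path, the file name and the constructor values are only placeholders; net is the network from the training sketch above.

    // Hypothetical save/restore sketch; path, file name and parameters are placeholders.
    if (net.saveNeuro("/home/user/nets", "xor.bpn"))
        System.out.println("network saved");

    BPN resumed = new BPN(3, 6, 1, 0.5, 0.0, 1.5, 0.99);
    if (resumed.loadNeuro("/home/user/nets", "xor.bpn")) {
        // training continues from the saved state
        resumed.learnVector(new double[] { BPN.DOWN, BPN.DOWN, BPN.FIRE },
                            new double[] { BPN.FIRE });
    }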

I made this simple program just to test BPN.class and to provide a tutorial for anybody who needs such small networks. XOR is just a little example, but I have used a very similar network to implement an OCR (in C) and a simple reactive insect brain. It can be useful in many ways.

If you decide to use this class, please let me know. Depending on the requests I may redesign it and add some features.

Enjoy! And let me know about bugs, thanks and questions :-)


public class BPN {

Fields

public static final double FIRE    =  0.999;
public static final double NEUTRAL =  0.0;
public static final double DOWN    = -0.999;

Variables

public double absoluteError;  // updated when learning
public double inpA[];         // input activations  (range: DOWN .. FIRE)
public double outA[];         // output activations (range: DOWN .. FIRE)

Constructor

public BPN(int i, int h, int o, double eida, double theta, double elast, double momentum);
  i        := # of neurons in the input layer
  h        := # of neurons in the hidden layer
  o        := # of neurons in the output layer
  eida     := starting learning rate
  theta    := sigmoid threshold value (range: DOWN .. FIRE)
  elast    := sigmoid elasticity (1.0 .. 2.0)
  momentum := learning rate attenuator (range: 0.1 .. 1.0, keep it very high, ~ 0.99)
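
For example, a network like the one in the XOR applet could be created as follows; the hidden-layer size and the four tuning values are only illustrative.

    // hypothetical settings: 3 inputs, 6 hidden neurons, 1 output
    BPN net = new BPN(3, 6, 1,
                      0.5,    // eida: starting learning rate
                      0.0,    // theta: sigmoid threshold (NEUTRAL)
                      1.5,    // elast: sigmoid elasticity
                      0.99);  // momentum: keep it very high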

Methods

public void init()
  Initializes the network and sets random values for neurons and weights.
  NOTE: init() is called automatically by the constructor.

public void propagate(double[] vector)
  Applies vector[] to the input layer and propagates it to get the values in outA[].
  Throws ArrayIndexOutOfBoundsException when the vector length doesn't match the input layer length.

public void learnVector(double[] in, double[] out)
  Applies in[] to the input layer and trains the network to produce out[].
  Throws ArrayIndexOutOfBoundsException when the in[] length doesn't match the input layer length
  or the out[] length doesn't match the output layer length.

public boolean saveNeuro(String path, String name)
  Saves the size of the network, the weights and the sigmoid parameters (eida, theta, elast, momentum)
  in the file specified by path and name. Returns true if successful.

public boolean loadNeuro(String path, String name)
  Loads and sets up the network with the data in the specified file. Returns true if successful.

}
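
As a reminder of the length requirements, here is a small hedged sketch (the layer sizes and tuning values are again only examples): a vector whose length doesn't match the input layer makes propagate() throw.

    BPN net = new BPN(3, 6, 1, 0.5, 0.0, 1.5, 0.99);   // example sizes and tuning values

    net.propagate(new double[] { BPN.FIRE, BPN.DOWN, BPN.FIRE }); // OK: 3 values for 3 inputs
    double answer = net.outA[0];                                  // single output activation

    try {
        net.propagate(new double[] { BPN.FIRE, BPN.DOWN });       // only 2 values: wrong length
    } catch (ArrayIndexOutOfBoundsException e) {
        System.out.println("vector length must match the input layer");
    }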