Visualising Supervised Learning Neural Networks

Understanding Neural Networks Pseudocode

Step 1. Particle Spring System

LOAD DATA

PARTICLE CLASS

POPULATION CLASS

COMPUTE POPULATION

LIVE POPULATION

RENDER POPULATION

We import the images, convert them from binary, and store them in an array whose length equals the number of particles desired. The Particle class contains the position (PVector), velocity (PVector), the digit it represents (int) and the associated colour (color) of the particle. Each particle is initialised with a random position within a small area at the centre of our screen and a random velocity with components ranging from -1 to +1.
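The particle described above can be sketched in plain Java. The original is a Processing sketch using PVector and color; this standalone version uses simple double fields instead, and the constructor arguments are assumptions for illustration.

```java
import java.util.Random;

// Plain-Java sketch of the Particle class described above.
// The original uses Processing's PVector and color types.
class Particle {
    double x, y;    // position, started near the screen centre
    double vx, vy;  // velocity, each component in [-1, +1]
    int digit;      // the digit this particle represents
    int colour;     // packed RGB colour associated with that digit

    Particle(double centreX, double centreY, int digit, int colour, Random rng) {
        // random position within a small area around the centre
        this.x = centreX + (rng.nextDouble() - 0.5) * 10;
        this.y = centreY + (rng.nextDouble() - 0.5) * 10;
        // random velocity ranging from -1 to +1
        this.vx = rng.nextDouble() * 2 - 1;
        this.vy = rng.nextDouble() * 2 - 1;
        this.digit = digit;
        this.colour = colour;
    }

    void live() {   // adds velocity to position, as described in the text
        x += vx;
        y += vy;
    }
}
```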

The particles randomly emerging from the centre of the screen

The Population class contains only an array of Particle objects. This step consists of three main functions. First is center(), which forces all the particles towards the centre of the screen. Next is react(), which calculates the spring force exerted on a particle based on the distance between the two images involved. That distance is computed in a function called getLength(), which iterates through all 196 pixels of images A and B and sums the difference between Ai and Bi. Finally, avoid() prevents the particles from colliding with each other; it works in a similar but opposite way to center(). To further improve the visualisation, the repelling strength of a particle is also based on getLength(): the greater the distance, the more strongly it is repelled from that particle.
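The getLength() distance described above can be sketched as a pixel-by-pixel sum over the 196 (14x14) grey values. The use of an absolute difference per pixel is an assumption; the text only says it "calculates the difference between Ai and Bi".

```java
// Sketch of getLength(): the distance between two 196-pixel images,
// computed as the sum of per-pixel differences.
class ImageDistance {
    static double getLength(int[] a, int[] b) {
        double total = 0;
        for (int i = 0; i < 196; i++) {
            total += Math.abs(a[i] - b[i]);  // |Ai - Bi| per pixel
        }
        return total;
    }
}
```

Particles carrying similar-looking digits thus end up with short rest lengths between them, which is what pulls same-digit clusters together.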

Hooke’s Spring Law
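The spring force follows Hooke's law. As a hedged reconstruction consistent with the text (which bases spring behaviour on getLength()), the rest length of the spring between two particles can be taken from the image distance:

```latex
F = -k\,(d - L)
```

where d is the current on-screen distance between the two particles, L is the rest length derived from getLength(), and k is the spring stiffness.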

live() is a function within Population() which simply adds velocity to position. Note that it is important for compute and live to run in two separate for loops. render() also resides in Population(); it contains a for loop calling display() inside Particle(), which draws a circle at the position of each particle. displaySprings() draws the connectivity between particles of the same digit when their distance is smaller than x.
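The reason compute and live must sit in two separate loops is that every particle's forces should be computed against the same snapshot of positions before any particle moves. A minimal sketch (toy force constant, hypothetical names):

```java
// Why compute() and live() run in separate loops: interleaving them
// would mix pre-move and post-move positions within one frame.
class Population {
    static class P {
        double x, vx;
        void compute(P[] all) {
            // toy attractive force towards every other particle
            for (P o : all) if (o != this) vx += (o.x - x) * 0.01;
        }
        void live() { x += vx; }  // adds velocity to position
    }

    P[] particles;

    void update() {
        for (P p : particles) p.compute(particles); // all forces first...
        for (P p : particles) p.live();             // ...then all moves
    }
}
```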

Step 2. Neural Network

The Network

LOAD DATA

We convert the images from binary and store them in two separate arrays: testing_set, which contains 2,000 test samples, and trainning_set, which contains 8,000 images. These are loaded in setup() using the loadData() function, which fills them with objects of the class Card(). To minimise computational cost we create a look-up table of 200 values using the sigmoid() function.
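The 200-value sigmoid look-up table can be sketched as below. The input range [-10, +10) is an assumption (a common choice, since the sigmoid saturates beyond it); the original does not state the range it uses.

```java
// Sketch of a 200-entry sigmoid look-up table, assuming inputs
// are covered over [-10, +10) and clamped outside that range.
class SigmoidLUT {
    static final int SIZE = 200;
    static final double MIN = -10, MAX = 10;
    static final double[] table = new double[SIZE];

    static {
        for (int i = 0; i < SIZE; i++) {
            double x = MIN + (MAX - MIN) * i / SIZE;
            table[i] = 1.0 / (1.0 + Math.exp(-x));  // true sigmoid, precomputed
        }
    }

    static double sigmoid(double x) {
        if (x <= MIN) return table[0];          // clamp low
        if (x >= MAX) return table[SIZE - 1];   // clamp high
        return table[(int) ((x - MIN) / (MAX - MIN) * SIZE)];
    }
}
```

Each lookup then replaces a call to Math.exp(), trading a little precision (here, steps of 0.1 in the input) for speed.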

NEURON CLASS

Given the feed-forward nature of this algorithm, the Neuron class must be initialised in two ways: with no constructor arguments for our input layer, and with the previous layer for the hidden and output layers. The class therefore stores the previous layer (m_inputs[196]), the corresponding weights (m_weights[196]), the output (m_output) and the overall error (m_error).
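A plain-Java sketch of that two-constructor Neuron follows, with field names taken from the text. The random weight initialisation and the feedForward() method are assumptions filled in from the standard feed-forward recipe.

```java
// Sketch of the Neuron class with its two initialisations: input-layer
// neurons hold only a value, while hidden/output neurons connect to the
// whole previous layer.
class Neuron {
    Neuron[] m_inputs;   // previous layer (null for the input layer)
    double[] m_weights;  // one weight per input neuron
    double m_output;     // activation of this neuron
    double m_error;      // error used when adjusting weights

    Neuron() { }         // input layer: m_output is set directly from a pixel

    Neuron(Neuron[] previousLayer) {
        m_inputs = previousLayer;
        m_weights = new double[previousLayer.length];
        java.util.Random rng = new java.util.Random();
        for (int i = 0; i < m_weights.length; i++)
            m_weights[i] = rng.nextDouble() * 2 - 1;  // random start weights
    }

    void feedForward() {
        double sum = 0;
        for (int i = 0; i < m_inputs.length; i++)
            sum += m_inputs[i].m_output * m_weights[i];
        m_output = 1.0 / (1.0 + Math.exp(-sum));      // sigmoid activation
    }
}
```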

NETWORK CLASS

The network contains all our layers: m_input_layer[196], m_hidden_layer[49] and m_output_layer[10], whose size equals the number of answers, from 0 to 9.
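The 196-49-10 layout can be sketched as follows; Neuron is reduced to a stub here so the layer structure stands alone.

```java
// Sketch of the 196-49-10 layer layout described above:
// one input per 14x14 pixel, 49 hidden neurons, one output per digit.
class Network {
    static class Neuron { double m_output; }  // stub for illustration

    Neuron[] m_input_layer  = new Neuron[196];
    Neuron[] m_hidden_layer = new Neuron[49];
    Neuron[] m_output_layer = new Neuron[10];  // one per digit, 0 to 9

    Network() {
        for (int i = 0; i < 196; i++) m_input_layer[i]  = new Neuron();
        for (int i = 0; i < 49;  i++) m_hidden_layer[i] = new Neuron();
        for (int i = 0; i < 10;  i++) m_output_layer[i] = new Neuron();
    }
}
```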

Sigmoid Function

LEARN OR TEST

learn() resides in Network(); it feeds a randomly selected image from the set through our layers. Then, based on the performance of each neuron (setError()), we adjust the weights, making the change more or less drastic using the sigmoid function. test() follows the same procedure as learn(), but the network does not learn from its mistakes.
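The article does not spell out the exact update rule, but the description (weights adjusted from each neuron's error, scaled through the sigmoid) matches the standard delta rule; as a hedged reconstruction:

```latex
\Delta w_{ij} = \eta \, e_j \, \sigma(z_j)\bigl(1 - \sigma(z_j)\bigr) \, x_i
```

where eta is the learning rate, e_j the neuron's error from setError(), z_j its weighted input, sigma the sigmoid, and x_i the i-th input to the neuron. The factor sigma(z)(1 - sigma(z)) is the sigmoid's derivative, which is what makes the change "more or less drastic".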

The Visualisation

The modules listed below are non-sequential and should be interpreted as single units.

PERFORMANCE

This plots the performance of training and testing of our network over time. The red line represents the ratio of failures to successes during training; the black line counts training successes and failures, adding or subtracting one point accordingly. The blue line behaves just as the red one but monitors testing.

SIGMOID

We draw the sigmoid curve, and plot every neuron inside the hidden layer along it.

WEIGHTS

Here we display all the weights of either the hidden or output layer. We also allow for the activity of each to be displayed.

CONNECTIVITY

We draw a 1:1 representation of the network as well as all its connections. Whilst testing we also highlight neuron activation.

