Method description

It can be assumed that the input data is represented by a set of n-dimensional vectors. The output is a two-dimensional array of output nodes; in the learning process this array of output nodes becomes the data map. Every node (neuron) is represented by an n-dimensional vector of weights.

Figure 36.1. The structure of SOM
Such networks are called Self-Organizing Maps (SOMs). The dedicated learning algorithm was proposed by Teuvo Kohonen.

For each learning vector x(t):

  1. the neuron whose weight vector is nearest to the input learning vector is located. This neuron is called the winner (c):

     c = arg min_i || x(t) - w_i(t) ||

  2. All the neurons which are in a neighborhood relation with the winner are assigned to it. The set of all these neurons is called the neighborhood.

  3. The winner's vector of weights is updated as follows:

     w_c(t+1) = w_c(t) + α(t) · (x(t) - w_c(t))

     where α(t) is the learning rate.

  4. Next the vectors of weights of the neurons from the winner's neighborhood are updated according to the formula:

     w_i(t+1) = w_i(t) + α(t) · h(i, c) · (x(t) - w_i(t))

     where h(i, c) is a function which calculates the modification of the learning rate for the neighborhood: a closer neighbor should learn more than a more distant one.

    Figure 36.2. The winner node in an SOM.
Each learning vector is used once in each iteration. In subsequent iterations the neighborhood should be shrunk and the learning rate decreased.
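The steps above can be sketched in code. This is a minimal illustration, not AdvancedMiner's implementation: the Gaussian neighborhood function h and the exponential decay schedules for the learning rate and neighborhood radius are common choices assumed here, not mandated by the description; the function and parameter names (`train_som`, `alpha0`, `sigma0`) are likewise hypothetical.

```python
import numpy as np

def train_som(data, rows, cols, n_iter=100, alpha0=0.5, sigma0=None, seed=0):
    """Train a rows x cols Self-Organizing Map on data (an m x n array).

    A sketch of Kohonen's algorithm: for each learning vector, find the
    winner, then pull the winner and its neighborhood toward the vector,
    with the learning rate and neighborhood shrinking over iterations.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    weights = rng.random((rows, cols, n))   # one n-dim weight vector per node
    # grid coordinates of every node, used to measure neighborhood distance
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0      # initial neighborhood radius

    for t in range(n_iter):
        alpha = alpha0 * np.exp(-t / n_iter)   # decreasing learning rate
        sigma = sigma0 * np.exp(-t / n_iter)   # shrinking neighborhood
        for x in data:                          # each vector used once per iteration
            # step 1: the winner is the node nearest to the input vector
            dists = np.linalg.norm(weights - x, axis=2)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # steps 2-4: h(i, c) is 1 at the winner and falls off with grid
            # distance, so closer neighbors learn more than distant ones
            grid_dist2 = np.sum((grid - np.array(winner)) ** 2, axis=2)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
            weights += alpha * h[..., None] * (x - weights)
    return weights
```

With identical input vectors the whole map is drawn toward them, which makes the effect of the neighborhood update easy to observe on a toy data set.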

AdvancedMiner implements three LVQ algorithms: LVQ, LVQ21 and LVQ3: