# Hebbian Learning - Understanding Simultaneous Firing

I'm beginning to write a neural network simulator in Java and thinking of Hebbian Learning but I'm stuck at one thing:

What causes two neurons to fire at the same time when only one of them is an input neuron and the other is not? Does the other one fire by itself?

The Hebbian learning rule says "neurons that fire together wire together", but since I'm activating only the input neuron, what makes the other interior neuron fire? Do I need to make them fire randomly at times instead of just using the weight matrix $$W_{ij}$$?

I'm a fan of unsupervised learning and want to start with a simple case: I give one of 3-5 neurons a "1" and expect a "0" from another one, like a "NOT" operator.

You basically have 2 options:

1. Manually fire both neurons that you want to pair together - do this as many times as needed to pair them. After learning, firing only one neuron should be sufficient to make the second neuron fire as desired.
2. Assign a starting weight to the connection between the neurons such that firing one will trigger the other. This basically shortcuts the learning process above.

Normally, the learning process in neural networks is based on multiple simultaneous inputs - not just one. The network learns the statistical likelihood of input neurons firing together, and is eventually able to activate the associated neurons even when only some of them fire. So a test case of a single neuron firing can be used after learning is complete (the first option) or when you restore a previously saved weight matrix of a trained network (the second option).
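The first option above can be sketched in Java roughly as follows. This is a minimal illustration, not a definitive implementation: the class name, learning rate, threshold, and epoch count are all assumptions of mine, and the update is the plain Hebbian rule Δw = η·pre·post.

```java
// Minimal sketch of option 1: repeatedly clamp two neurons to fire
// together, strengthening their connection with the Hebbian rule
// dW = eta * pre * post. All names and numbers here are illustrative.
public class HebbPairing {
    public static void main(String[] args) {
        double eta = 0.5;        // learning rate (assumed value)
        double w = 0.1;          // initial pre -> post weight (assumed)
        double threshold = 0.5;  // post fires if weighted input exceeds this

        // Pairing phase: force both neurons to be active at the same time.
        for (int epoch = 0; epoch < 5; epoch++) {
            double pre = 1.0;    // clamped pre-synaptic activity
            double post = 1.0;   // clamped post-synaptic activity
            w += eta * pre * post;  // Hebbian update: both active, so w grows
        }

        // Test phase: fire only the pre-synaptic neuron.
        double input = 1.0 * w;
        System.out.printf("weight = %.1f, post fires: %b%n",
                w, input > threshold);
    }
}
```

Option 2 would simply skip the loop and set `w` above the threshold directly, which is why it amounts to restoring a trained weight matrix.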

Probably you already know most of the stuff I will talk about, but I want to make it clear anyway.

First, from a non-scientific perspective of Hebbian learning:

I think when there are just 2 neurons and you want to "wire them together", it wouldn't really be learning anything. In the end you would just have a post-synaptic neuron that fires whenever the pre-synaptic one fires.

But when there are at least 2 pre-synaptic neurons it begins to make sense. (And you can avoid resorting to random firing.)

Long Term Potentiation (LTP) is usually discussed when Hebbian learning is in progress. To make a clear example, let's assume 3 pre-synaptic neurons, one of them (S1) with a strong connection (weight) to the post-synaptic neuron, while the other two (W1 and W2) have weak connections (weights).

To make it more concrete I will give (neuroscientifically non-valid) meanings to these neurons. Let's say the post-synaptic neuron is a neuron that recognizes motorbikes. S1 fires when you see a bike, W1 fires when there is a motor sound, and W2 (let's make it arbitrary) fires when you smell cherries.

In the beginning you have no idea what a motorbike sounds like. But when you see one, S1 fires, and because you hear it, W1 also fires. Although W1's contribution is very weak, S1 alone may produce enough input for the post-synaptic neuron to fire. Since W1 was firing while the post-synaptic neuron was firing, that connection is strengthened. And if you have enough such co-occurrences, after some point the post-synaptic neuron can fire even without the presence of S1. Since you didn't smell cherries when you saw the motorbike, W2's connection remained the same.

So the take-home message is that Hebbian learning is meaningful when there are multiple pre-synaptic inputs, and the strengthening affects only the synapses that were active.

I am quoting from Gazzaniga's "Cognitive Neuroscience" book:

three rules for associative LTP have been drawn:

1. Cooperativity. More than one input must be active at the same time.
2. Associativity. Weak inputs are potentiated when co-occurring with stronger inputs.
3. Specificity. Only the stimulated synapse shows potentiation.
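The motorbike example and the three rules above can be sketched in a few lines of Java. Everything here is an illustrative assumption (class name, weights, threshold, learning rate); it is only meant to show how associativity strengthens the co-active weak input (W1) while specificity leaves the inactive one (W2) untouched.

```java
// Sketch of the S1/W1/W2 motorbike example. The post-synaptic neuron
// fires when the weighted input sum crosses a threshold; when it fires,
// only the active synapses are potentiated (specificity), so a weak
// input that co-occurs with a strong one gets strengthened (associativity).
public class LtpDemo {
    static final double THRESHOLD = 0.8; // firing threshold (assumed)
    static final double ETA = 0.1;       // potentiation step (assumed)

    // One simulation step; returns whether the post-synaptic neuron fired.
    static boolean step(double[] w, double[] x, boolean learn) {
        double sum = 0;
        for (int i = 0; i < w.length; i++) sum += w[i] * x[i];
        boolean fired = sum >= THRESHOLD;
        if (learn && fired) {
            // LTP: only active synapses (x[i] > 0) are strengthened.
            for (int i = 0; i < w.length; i++) w[i] += ETA * x[i];
        }
        return fired;
    }

    public static void main(String[] args) {
        // w[0] = S1 (sight, strong), w[1] = W1 (sound, weak), w[2] = W2 (smell, weak)
        double[] w = {1.0, 0.1, 0.1};

        // Training: sight and sound co-occur (cooperativity); smell does not.
        for (int t = 0; t < 10; t++) step(w, new double[]{1, 1, 0}, true);

        // Test: sound alone now drives the neuron; smell alone still does not.
        System.out.println("sound alone fires: " + step(w, new double[]{0, 1, 0}, false));
        System.out.println("smell alone fires: " + step(w, new double[]{0, 0, 1}, false));
    }
}
```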

From the perspective of machine learning:

For unsupervised learning in neural networks you may want to look at:

• Boltzmann Machines
• Stochastic Maximum Likelihood Learning