Error-Correcting Perceptron Learning
- Uses a McCulloch-Pitts neuron
- Unity increment (learning-rate parameter equal to 1)
- Update rule:
  w(n+1) = w(n) if w^T(n)x(n) > 0 and x(n) belongs to class c1
  w(n+1) = w(n) if w^T(n)x(n) ≤ 0 and x(n) belongs to class c2
  w(n+1) = w(n) − η(n)x(n) if w^T(n)x(n) > 0 and x(n) belongs to class c2
  w(n+1) = w(n) + η(n)x(n) if w^T(n)x(n) ≤ 0 and x(n) belongs to class c1
- Initialisation. Set w(0) = 0. Perform the following computations for time step n = 1, 2, ...
- Activation. At time step n, activate the perceptron by applying the continuous-valued input vector x(n) and the desired response d(n).
- Computation of Actual Response. Compute the actual response of the perceptron:
y(n) = sgn[w^T(n)x(n)]
where sgn(·) is the signum function.
- Adaptation of Weight Vector. Update the weight vector of the perceptron:
w(n+1) = w(n) + η[d(n) − y(n)]x(n)
where
d(n) = +1 if x(n) belongs to class c1, and −1 if x(n) belongs to class c2
- Continuation. Increment time step n by one and go back to step 2 (Activation).
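The steps above can be sketched in Python as follows. This is a minimal illustration, not a definitive implementation: the function and variable names are my own, and a bias is handled by appending a constant input to each pattern (an assumption, since the notes do not mention the bias term).

```python
import numpy as np

def train_perceptron(X, d, eta=1.0, max_epochs=100):
    """Error-correcting perceptron learning with a signum activation.

    X: (N, m) array of input vectors; d: (N,) array of desired
    responses in {+1, -1}. Returns the learned weight vector.
    Names and the bias-handling convention are illustrative choices.
    """
    # Absorb the bias into the weights by appending a constant input of 1.
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(X.shape[1])                  # Initialisation: w(0) = 0
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):
            y = 1.0 if w @ x > 0 else -1.0    # y(n) = sgn[w^T(n) x(n)]
            if y != target:
                # Adaptation: w(n+1) = w(n) + eta*[d(n) - y(n)]*x(n);
                # on a misclassification, d(n) - y(n) is +2 or -2.
                w += eta * (target - y) * x
                errors += 1
        if errors == 0:                       # every pattern classified correctly
            break
    return w
```

For example, on a small linearly separable set such as `X = [[2,1],[1,2],[-1,-1],[-2,0]]` with desired responses `[+1,+1,-1,-1]`, the loop corrects the weights until no misclassifications remain in a full pass.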
- Guarantees convergence provided
- Patterns are linearly separable
- Non-overlapping classes
- Linear separation boundary
- Learning rate not too high
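A hedged sketch of why linear separability is essential: the XOR patterns cannot be split by any linear boundary, so the error-correction rule keeps making updates no matter how many epochs run. (The data, labels, and epoch count here are illustrative.)

```python
import numpy as np

# XOR patterns with a constant bias input appended. No weight vector w
# satisfies sgn(w^T x) = d for all four patterns, so the algorithm
# cannot reach an error-free pass.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
d = np.array([-1., 1., 1., -1.])   # XOR labels in {+1, -1}

w = np.zeros(3)
for epoch in range(1000):
    errors = 0
    for x, target in zip(X, d):
        y = 1.0 if w @ x > 0 else -1.0
        if y != target:
            w += (target - y) * x   # unity increment (eta = 1)
            errors += 1
    if errors == 0:
        break

# errors is still nonzero after the final epoch: no convergence.
```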
- Two conflicting requirements
- Averaging of past inputs to provide stable weight estimates
- Fast adaptation with respect to real changes in the underlying distribution of the process responsible for x
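A generic illustration of this trade-off, using a simple running-average estimator rather than the perceptron rule itself (all names, constants, and the drifting signal are illustrative assumptions): a small learning rate averages out noise but reacts slowly to a real change, while a large one tracks the change quickly at the cost of noisy estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" quantity that changes abruptly at n = 200, observed in noise.
true_val = np.concatenate([np.zeros(200), np.ones(200)])
x = true_val + 0.5 * rng.standard_normal(400)

def track(eta):
    """Running estimate w(n+1) = w(n) + eta * (x(n) - w(n))."""
    w, est = 0.0, []
    for xn in x:
        w += eta * (xn - w)
        est.append(w)
    return np.array(est)

slow = track(0.02)   # heavy averaging: stable but slow to adapt
fast = track(0.5)    # light averaging: fast to adapt but noisy

# slow is smoother before the change; fast follows the jump sooner.
```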
