MLP

(figure: MLP architecture)

Universal Approximation Theorem

A feedforward MLP with a single hidden layer of finitely many units (and a suitable non-linear activation) can in theory approximate any continuous function on a compact domain to arbitrary accuracy

  • In practice such a network may not be trainable with backpropagation: the theorem only guarantees that suitable weights exist, not that gradient descent will find them, and the required hidden layer may be impractically wide
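
A minimal NumPy sketch of the idea (all sizes and scales here are illustrative assumptions, not from the notes): fix random hidden weights and fit only the linear output layer by least squares, so no backpropagation is involved at all. With enough hidden units, a single hidden layer represents a smooth target well.

```python
# Sketch: a single hidden layer approximating sin(x).
# Hidden weights are random; only the output layer is solved for
# (linear least squares) -- no backpropagation needed to show capacity.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                              # target function

H = 50                                     # hidden width (assumption)
W1 = rng.normal(scale=2.0, size=(1, H))    # random hidden weights
b1 = rng.normal(scale=2.0, size=H)         # random hidden biases
hidden = np.tanh(x @ W1 + b1)              # hidden activations, (200, H)

# Solve for output weights directly with linear least squares.
w2, *_ = np.linalg.lstsq(hidden, y, rcond=None)
y_hat = hidden @ w2

print("max |error|:", np.abs(y_hat - y).max())  # small for large enough H
```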

(figures: activation function; MLP architecture diagram)

Weight Matrix

  • Compute a layer's output with a single matrix multiplication of the weight matrix against the input vector
  • The TLU (threshold logic unit) is a hard limiter: it outputs 1 when the weighted sum exceeds the threshold and 0 otherwise (figure: TLU)
  • $o_1$ through $o_4$ must all be 1 to overcome the $-3.5$ output bias and force the output to 1: with unit weights, all four active gives a sum of $4 > 3.5$, while three or fewer gives at most $3 < 3.5$, so the output unit computes an AND of the hidden units (figure: MLP non-linear decision)
  • This construction can generate a non-linear decision boundary, as the sketch below shows: each hidden TLU defines a half-plane, and ANDing them carves out a region no single linear boundary could
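
A minimal sketch of the construction, with illustrative weights (an assumption; the notes do not specify them): four hidden TLUs each test a half-plane bounding the unit square, and the output TLU with bias $-3.5$ fires only when all four hidden outputs are 1.

```python
# Four TLU hidden units each test a half-plane; the output TLU with
# bias -3.5 ANDs them. The chosen weights carve out the unit square
# as the "inside" (output 1) region -- illustrative values only.
import numpy as np

def tlu(z):
    """Hard limiter: 1 where z > 0, else 0."""
    return (z > 0).astype(float)

# Hidden layer: rows of W1 define the half-planes x>0, x<1, y>0, y<1.
W1 = np.array([[ 1.0,  0.0],    # x > 0
               [-1.0,  0.0],    # x < 1
               [ 0.0,  1.0],    # y > 0
               [ 0.0, -1.0]])   # y < 1
b1 = np.array([0.0, 1.0, 0.0, 1.0])

# Output layer: all four o_i must be 1 to overcome the -3.5 bias.
w2 = np.ones(4)
b2 = -3.5

def mlp(points):
    o = tlu(points @ W1.T + b1)   # hidden TLU outputs o_1..o_4
    return tlu(o @ w2 + b2)       # AND of the four hidden units

print(mlp(np.array([[0.5, 0.5],     # inside the square  -> 1.0
                    [2.0, 0.5]])))  # outside the square -> 0.0
```

The resulting decision region is bounded by four distinct lines, a boundary no single perceptron (one hyperplane) could produce.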