The Most Basic Toy Problems for Neural Networks

The most basic one:

  • A + B = ?
  • Solution: a single neuron with 2 weights, 1 bias, and identity activation
  • Training result: both weights converge to 1 and the bias converges to 0
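The bullets above can be sketched in a few lines of plain Python. This is a minimal illustration (names and hyperparameters are my own choices, not from the post): one neuron with identity activation, trained by stochastic gradient descent on random (A, B) pairs.

```python
import random

# A single neuron: output = w1*a + w2*b + bias (identity activation).
# Trained on the toy task A + B, the weights should approach 1 and the
# bias should approach 0, as described above.
random.seed(0)
w1, w2, bias = random.random(), random.random(), random.random()
lr = 0.05  # learning rate (arbitrary choice for this sketch)

for _ in range(2000):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    target = a + b
    pred = w1 * a + w2 * b + bias   # dot product plus bias, no activation limit
    err = pred - target
    # gradient step on squared error
    w1 -= lr * err * a
    w2 -= lr * err * b
    bias -= lr * err

print(round(w1, 2), round(w2, 2), round(bias, 2))  # weights near 1, bias near 0
```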

Other basic toy problems:

  • AND, OR: a single neuron can solve each, with 1 separation line
  • XOR: a single layer of 2 neurons can draw the 2 separation lines needed, but a single-value output is required, so use 2 layers: 2 neurons, then 1 neuron.
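To make the XOR case concrete, here is a hand-wired 2-2-1 network (weights chosen by hand for illustration, not learned). Each hidden neuron contributes one separation line; the output neuron combines them:

```python
# Hand-set 2-2-1 network for XOR. The two hidden neurons each draw one
# separation line (roughly "a OR b" and "NOT (a AND b)"); the output
# neuron ANDs them together.
def step(x):
    """Hard threshold activation, used here for clarity."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # line 1: fires when a OR b
    h2 = step(-a - b + 1.5)     # line 2: fires unless both a AND b
    return step(h1 + h2 - 1.5)  # output layer: h1 AND h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints the XOR truth table
```

A single neuron cannot do this: no one line separates {(0,1), (1,0)} from {(0,0), (1,1)}.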

Theory notes:

  • 1 neuron makes 1 separation line
  • 1 layer makes 1 separation polyline
  • N layers make N separation polylines
  • Separation is done by the weights and bias via a dot product
  • Separation is NOT done by the activation function; the activation function limits the output, condensing the value flow after every step (every layer).
    • Identity activation: No limit
    • ReLU activation: Limits the lower bound only; fast to compute
    • Sigmoid activation: Limits both the lower and upper bounds; slower to compute
  • Summarisation: the learning process of the neural network
  • Generalisation: how the network adapts to similar unlearnt samples
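The three activations mentioned above can be compared directly. A small sketch (function names are mine) showing the bounds each one imposes:

```python
import math

# The three activations from the notes and the limits they impose:
def identity(x):
    return x                        # no limit at all

def relu(x):
    return max(0.0, x)              # lower bound at 0, unbounded above

def sigmoid(x):
    return 1 / (1 + math.exp(-x))   # bounded in (0, 1) on both sides

for x in (-5.0, 0.0, 5.0):
    print(x, identity(x), relu(x), round(sigmoid(x), 3))
```

Note how only the weights and bias decide where the separation line sits; the activation just squashes what comes out of the dot product.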
