A Good and Rather Complete Notation for ML in Neuralnet
July 4, 2023
[Old note, see Cap AI/ML for updates]
Notations from ‘a’ to ‘z’ may be used with ML in neuralnet; a code sketch showing them in use follows each list:
a: Some constant
b: [B]ias
c: [C]oncatenation in middle of network
d: [D]ot product
e: [E]rror aka loss, at loss node
f: Activation [f]unction
g: [G]radient of the loss, not the gradient of f
h: [H]idden layer output
i: Iterator variable
j: Iterator variable
k: Iterator variable
l: -- Not used, easily confused with the digit 1 --
m: Number of layers
n: Number of neurons in output layer
o: -- Not used, easily confused with the digit 0 --
p: Probability value P
q: Probability value Q
r: Learning [r]ate
s: [S]ubtraction, u - y (output minus true), aka delta
t: Derivative of activation function (looks like an inverted ‘f’, which makes it easy to remember)
u: O[u]tput of output layer
v: Intermediate gradient value during backpropagation
w: Weights
x: Input, 1 sample or 1 batch
y: Expected, aka label or true output, 1 sample or 1 batch
z: Latent vector
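
As a quick illustration, here is a minimal sketch of how the single-letter names might read in code. It assumes plain NumPy; the sigmoid activation, the 4-5-3 layer shape, the MSE-style delta and gradient, and the random data are illustrative choices, not part of the note.

```python
import numpy as np

def f(a):                          # f: activation function (sigmoid here)
    return 1.0 / (1.0 + np.exp(-a))

def t(a):                          # t: derivative of the activation function
    fa = f(a)
    return fa * (1.0 - fa)

rng = np.random.default_rng(0)
m = 2                              # m: number of layers
n = 3                              # n: number of neurons in the output layer
r = 0.1                            # r: learning rate
x = rng.normal(size=4)             # x: input, one sample
y = np.zeros(n); y[1] = 1.0        # y: expected (true) output

w = [rng.normal(size=(4, 5)), rng.normal(size=(5, n))]   # w: weights
b = [np.zeros(5), np.zeros(n)]                           # b: biases

h = [x]                            # h: hidden layer outputs, kept per layer
for i in range(m):                 # i: iterator variable
    d = h[-1] @ w[i] + b[i]        # d: dot product, plus bias
    h.append(f(d))
u = h[-1]                          # u: output of the output layer
s = u - y                          # s: subtraction u - y, aka delta
v = s * t(d)                       # v: intermediate gradient value
g = np.outer(h[-2], v)             # g: gradient of the (MSE) loss w.r.t. w[-1]
w[-1] = w[-1] - r * g              # one plain gradient-descent step
```

Beyond the single letters, a few multi-letter names are used: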
fe: Loss function
te: Gradient of loss function
ge: Gradient value returned by ‘te’; a separate value for each output node
inp: Input values, whole training set
exp: Expected results, whole training set
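
A matching sketch for the multi-letter names, again assuming an illustrative MSE loss; the shapes and random values exist only for the example.

```python
import numpy as np

def fe(u, y):                        # fe: loss function (MSE chosen here)
    return 0.5 * np.sum((u - y) ** 2)

def te(u, y):                        # te: gradient of the loss function
    return u - y                     #     (derivative of fe w.r.t. u)

rng = np.random.default_rng(0)
n = 3
u = rng.normal(size=n)               # u: output of the output layer
y = np.zeros(n); y[1] = 1.0          # y: expected output, one sample
e = fe(u, y)                         # e: error aka loss, at the loss node
ge = te(u, y)                        # ge: gradient value returned by te,
                                     #     a separate value per output node
inp = rng.normal(size=(100, 4))      # inp: input values, whole training set
exp = rng.normal(size=(100, n))      # exp: expected results, whole training set
```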