Tracking evolution of parameters [Post #5, Day 4]

So today I'm focusing on plotting the evolution of parameters within my MLP neural network for my N to 2N mapping application. I want to see how the weights and biases evolve with each epoch (and, at a finer grain, with each sample iteration inside an epoch), and how the loss evolves alongside them.

One of my questions yesterday was: what is the point of plotting validation loss with each epoch? I understand it now. Within each epoch, we also run the model on the validation data and compute the loss (the training and validation subsets are still kept separate), which lets us track both losses epoch by epoch. I learned that if the training loss is decreasing but the validation loss is increasing, my model might be overfitting. I don't know why this is the case at this point, but I'll take it as a standard rule for now until I think about it more. If both losses decrease together, my model is learning well. If both losses stay high, my model might be underfitting. Claude also mentioned that this comparison helps me understand whether my model is generalizing to new data or just memorizing the training data.
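To make this concrete for myself, here's a minimal sketch of the idea in PyTorch. This isn't my actual training code — the toy dataset, layer sizes, learning rate, and epoch count are all placeholders — but it shows how I can record both losses and snapshot the weights each epoch, then plot them afterwards:

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

torch.manual_seed(0)

# Toy data for the N -> 2N mapping: random inputs and their doubles.
X = torch.rand(200, 1)
y = 2 * X

# Keep the training and validation subsets separate.
perm = torch.randperm(len(X))
X_train, y_train = X[perm[:160]], y[perm[:160]]
X_val, y_val = X[perm[160:]], y[perm[160:]]

# A small MLP; the layer sizes here are arbitrary.
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

train_losses, val_losses, weight_history = [], [], []

for epoch in range(100):
    # Training step (full batch here for simplicity).
    model.train()
    optimizer.zero_grad()
    train_loss = loss_fn(model(X_train), y_train)
    train_loss.backward()
    optimizer.step()

    # Validation pass: compute the loss on held-out data,
    # without any gradient updates.
    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val)

    train_losses.append(train_loss.item())
    val_losses.append(val_loss.item())
    # Snapshot the first layer's weights to track their evolution.
    weight_history.append(model[0].weight.detach().clone().flatten())

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Training vs. validation loss per epoch: diverging curves
# (train down, validation up) would suggest overfitting.
ax1.plot(train_losses, label="training loss")
ax1.plot(val_losses, label="validation loss")
ax1.set_xlabel("epoch")
ax1.set_ylabel("MSE loss")
ax1.legend()

# Each line is one weight of the first layer across the epochs.
ax2.plot(torch.stack(weight_history).numpy())
ax2.set_xlabel("epoch")
ax2.set_ylabel("first-layer weight value")

plt.tight_layout()
plt.show()
```

For the finer-grained, per-sample view, the same snapshotting would just move inside a mini-batch loop so the history records every iteration instead of every epoch.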


I also read a bit about the structure and function of the human brain today. This is very interesting to me, especially how artificial neural networks take inspiration from specific structures and processes in the brain. I want to develop a deep understanding of this connection.
