Data Science & Deep Learning: Algorithmic Bias in Neural Networks: From Parameters to Function Space

Daniel Soudry (EE, Technion)
Monday, 4.11.2019, 12:30
Taub 301, Taub Building

On many common datasets, neural networks can achieve zero training loss yet generalize well to unseen data. Recent work suggests that this is because standard training algorithms (e.g., GD or SGD) carry an implicit regularization that biases them toward specific solutions, and these solutions tend to generalize well. I will review such "algorithmic biases", how they affect the functional capabilities of neural networks, and how this relates to generalization in simple models.
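One classical instance of such an algorithmic bias (a minimal sketch, not taken from the talk itself): on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum-norm interpolating solution, even though the loss alone does not distinguish among the infinitely many zero-loss solutions. The dimensions, learning rate, and iteration count below are illustrative choices:

```python
import numpy as np

# Underdetermined linear regression: more parameters than samples,
# so infinitely many weight vectors achieve zero training loss.
rng = np.random.default_rng(0)
n, d = 20, 100                     # 20 samples, 100 parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Plain gradient descent on squared loss, initialized at zero.
w = np.zeros(d)
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n

# The implicit bias: GD from zero converges to the minimum-norm
# interpolant, i.e. the pseudoinverse solution.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(X @ w, y, atol=1e-4))       # zero training loss
print(np.allclose(w, w_min_norm, atol=1e-3))  # min-norm solution
```

Nothing in the loss function prefers the minimum-norm solution; the preference comes entirely from the optimization algorithm and its initialization, which is the sense in which the regularization is "implicit".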
