An Analysis of Regularization Methods in Deep Neural Networks

Date
2020-12-10
Authors
Badola, Akshay
Nair, Vineet Padmanabhan
Lal, Rajendra Prasad
Abstract
Regularization in Deep Neural Networks for classification has developed into a paradigm of its own, as it involves regularization in probability spaces. A major contribution to avoiding overfitting in Deep Learning has been Dropout [1]. Dropout, however, is rarely applied alone in classification tasks; it is usually used in conjunction with other techniques such as weight decay (equivalent to an l2-norm penalty) or batch normalization [2]. The use of these techniques is empirical in nature and often ad hoc, so it is difficult to estimate the contribution of each technique to the final outcome. Here we isolate each of the common regularization techniques, using a standard deep convolutional network, VGG11 [3], and a standard dataset, CIFAR10 [4]. We collect and analyze the results to identify the effect of each of these techniques.
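To make the central technique concrete, the following is a minimal NumPy sketch of inverted Dropout as described in [1]: during training, each unit is zeroed with probability p and the survivors are scaled by 1/(1-p), so no rescaling is needed at test time. This is an illustrative sketch, not the authors' experimental code; the function name and parameters are chosen here for clarity.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout (hypothetical helper, not the paper's code).

    During training, zero each unit with probability p and scale the
    survivors by 1/(1-p); at test time, return the input unchanged.
    """
    if not train or p == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p  # keep a unit with probability 1-p
    return x * mask / (1.0 - p)     # rescale so the expected value matches

# Example: with p=0.5, surviving activations of 1.0 become 2.0.
x = np.ones((4, 8))
y = dropout(x, p=0.5, rng=np.random.default_rng(0))
```

In a study like this one, such a layer would be toggled on or off between otherwise identical VGG11 training runs on CIFAR10 to isolate its individual effect.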
Keywords
Deep Neural Networks, Dropout, Regularization
Citation
2020 IEEE 17th India Council International Conference, INDICON 2020