An Analysis of Regularization Methods in Deep Neural Networks

dc.contributor.author Badola, Akshay
dc.contributor.author Nair, Vineet Padmanabhan
dc.contributor.author Lal, Rajendra Prasad
dc.date.accessioned 2022-03-27T05:51:07Z
dc.date.available 2022-03-27T05:51:07Z
dc.date.issued 2020-12-10
dc.description.abstract Regularization in deep neural networks for classification has developed into a paradigm of its own, as it involves regularization in probability spaces. A major contribution to avoiding overfitting in deep learning has been Dropout [1]. Dropout, however, is rarely applied alone for classification tasks; it is usually used in conjunction with several other techniques, such as weight normalization (equivalent to an l2-norm penalty) or batch normalization [2]. The use of these techniques is empirical and often ad hoc, so it is difficult to estimate the contribution of each technique to the final outcome. Here we isolate each of the common regularization techniques, using a standard deep convolutional network, VGG11 [3], and a standard dataset, CIFAR10 [4]. We collect and analyze the results to identify the effect of each technique.
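The abstract contrasts Dropout with l2-norm regularization. As a hedged illustration only (not the paper's code, which is not included in this record), a minimal sketch of the two mechanisms: inverted dropout zeroes each activation with probability p and rescales survivors by 1/(1-p) so the expected activation is unchanged, while an l2 penalty adds lam * sum(w^2) to the loss. The function names and parameters here are illustrative assumptions.

```python
import random

def inverted_dropout(activations, p, rng):
    # Zero each unit with probability p; scale survivors by 1/(1-p)
    # so the expected output equals the input and no rescaling is
    # needed at test time (the "inverted" dropout convention).
    scale = 1.0 / (1.0 - p)
    return [0.0 if rng.random() < p else a * scale for a in activations]

def l2_penalty(weights, lam):
    # l2 regularization: add lam * sum of squared weights to the loss.
    return lam * sum(w * w for w in weights)

rng = random.Random(0)
out = inverted_dropout([1.0, 1.0, 1.0, 1.0], p=0.5, rng=rng)
penalty = l2_penalty([1.0, 2.0], lam=0.1)
```

With p = 0.5 each surviving unit is doubled, so every output entry is either 0.0 or 2.0, keeping the expected activation at 1.0.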
dc.identifier.citation 2020 IEEE 17th India Council International Conference, INDICON 2020
dc.identifier.uri https://doi.org/10.1109/INDICON49873.2020.9342192
dc.identifier.uri https://ieeexplore.ieee.org/document/9342192/
dc.identifier.uri https://dspace.uohyd.ac.in/handle/1/8329
dc.subject Deep Neural Networks
dc.subject Dropout
dc.subject Regularization
dc.title An Analysis of Regularization Methods in Deep Neural Networks
dc.type Conference Proceeding. Conference Paper