An Improved Retraining Scheme for Convolutional Neural Network
Keywords:
Convolutional Neural Network, winner-takes-all approach, multilayer perceptron, neural network
Abstract
A feed-forward artificial neural network, or multilayer perceptron (MLP), learns input samples adaptively and solves non-linear problems for data that are noisy and imprecise. A variant of the MLP, the Convolutional Neural Network (CNN), adds weight sharing, local receptive fields, and subsampling, which make it superior at challenging pattern-recognition tasks. Although the CNN improves on the performance of the MLP, the complexity of its structure makes retraining inefficient whenever new categories, i.e., new output neurons in the winner-takes-all classifier stage, are added: the complete network must be retrained, and such retraining incurs additional cost and training time. In this paper, we propose a retraining scheme that overcomes this problem. The proposed scheme generalizes the feature-extraction layers, so the retraining process involves only the last two layers instead of the whole network. The design was evaluated on the AT&T and JAFFE databases. The results show that training an additional category is more than 70 times faster than retraining the whole network architecture.
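The core idea of the scheme, freezing the generalized feature-extraction layers and retraining only the last two (classifier) layers when a category is added, can be illustrated with a minimal PyTorch sketch. The architecture, layer sizes, 28x28 grayscale input, and the "features"/"classifier" module names below are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    # A toy CNN with separate feature-extraction and classifier stages
    # (hypothetical layer sizes; the paper does not specify an architecture).
    class SmallCNN(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            # The last two layers; under the scheme, only these are retrained.
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 4 * 4, 64), nn.ReLU(),  # 16*4*4 for 28x28 input
                nn.Linear(64, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SmallCNN(num_classes=10)
    # ... assume the network has been fully trained on the original categories ...

    # Adding a new category: freeze the generalized feature-extraction layers.
    for p in model.features.parameters():
        p.requires_grad = False

    # Widen the output layer by one unit for the new class, keeping the
    # learned weights of the existing classes.
    old_out = model.classifier[-1]
    new_out = nn.Linear(old_out.in_features, old_out.out_features + 1)
    with torch.no_grad():
        new_out.weight[:old_out.out_features] = old_out.weight
        new_out.bias[:old_out.out_features] = old_out.bias
    model.classifier[-1] = new_out

    # Optimize only the trainable (classifier) parameters, then retrain on
    # samples of the new category; the feature layers stay fixed.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=0.01
    )

Because the frozen feature layers produce the same activations regardless of the class count, this kind of retraining touches only a small fraction of the weights, which is what makes adding a category so much cheaper than retraining the whole network.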
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.