An Improved Retraining Scheme for Convolutional Neural Network

Authors

  • A.R. Syafeeza
  • M. Khalil Hani
  • N.M. Saad
  • F. Salehuddin
  • N.A. Hamid

Keywords

Convolutional Neural Network, winner-takes-all approach, multilayer perceptron, neural network

Abstract

A feed-forward artificial neural network model, or multilayer perceptron (MLP), learns input samples adaptively and solves non-linear problems for data that are noisy and imprecise. A variant of the MLP, known as the Convolutional Neural Network (CNN), has additional features such as weight sharing, local receptive fields, and subsampling, which make the CNN superior in handling challenging pattern-recognition tasks. Although the CNN has improved on the performance of the MLP, the complexity of its structure makes retraining inefficient whenever new categories, represented by winner-takes-all neurons at the classifier stage, are added: the complete network must be retrained, which incurs additional cost and training time. In this paper, we propose a retraining scheme that overcomes this problem. The proposed scheme generalizes the feature extraction layers, so the retraining process involves only the last two layers instead of the whole network. The design was evaluated on the AT&T and JAFFE face databases. The results show that training an additional category is more than 70 times faster than retraining the whole network architecture.
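
As a rough illustration only (a sketch, not the authors' implementation), the idea can be expressed in PyTorch by freezing the feature-extraction layers and passing only the parameters of the last two layers to the optimizer when a new category neuron is added. The network shape, layer sizes, and names below are assumptions for a single-channel 28x28 input.

    # Hypothetical sketch of the retraining idea in the abstract: freeze the
    # generalized feature-extraction layers and retrain only the last two
    # layers after adding a new output category. Layer sizes are illustrative.
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            # Feature-extraction stage: convolution + subsampling layers
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
                nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            )
            # Classifier stage: the last two (retrainable) layers
            self.hidden = nn.Linear(16 * 4 * 4, 84)
            self.output = nn.Linear(84, num_classes)

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.output(torch.tanh(self.hidden(x)))

    def add_category_and_retrain(model):
        # Freeze the generalized feature-extraction layers
        for p in model.features.parameters():
            p.requires_grad = False
        # Extend the output layer with one new winner-takes-all neuron,
        # copying over the weights of the existing categories
        old = model.output
        new = nn.Linear(old.in_features, old.out_features + 1)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        model.output = new
        # Only the last two layers are handed to the optimizer
        params = list(model.hidden.parameters()) + list(model.output.parameters())
        return torch.optim.SGD(params, lr=0.01)

    model = SmallCNN(num_classes=40)             # e.g. 40 subjects in AT&T
    optimizer = add_category_and_retrain(model)  # retrain last two layers only

Because gradients are computed and applied for the last two layers alone, each retraining pass touches only a small fraction of the network's parameters, which is the source of the speed-up the paper reports.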


How to Cite

Syafeeza, A.R., Hani, M.K., Saad, N.M., Salehuddin, F., & Hamid, N.A. (2015). An Improved Retraining Scheme for Convolutional Neural Network. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 7(1), 5–9. Retrieved from https://jtec.utem.edu.my/jtec/article/view/486

