How deeply to fine-tune a convolutional neural network: A case study using a histopathology dataset

Research output: Contribution to journal › Article

4 Citations (Scopus)
3 Downloads (Pure)


Accurate classification of medical images is of great importance for correct disease diagnosis. Automating medical image classification is highly desirable because it can provide a second opinion, or even a better classification, when experienced medical staff are in short supply. Convolutional neural networks (CNNs) advanced the image classification domain by eliminating the need to manually select which features to use to classify images. Training a CNN from scratch, however, requires very large annotated datasets, which are scarce in the medical field. Transferring CNN weights learned on another large, non-medical dataset can help overcome this scarcity. Transfer learning consists of fine-tuning CNN layers to suit the new dataset. The main questions when using transfer learning are how deeply to fine-tune the network and what difference that makes in generalization. In this paper, all of the experiments were done on two histopathology datasets using three state-of-the-art architectures to systematically study the effect of block-wise fine-tuning of CNNs. Results show that fine-tuning the entire network is not always the best option, especially for shallow networks; fine-tuning only the top blocks instead can save both time and computational power and produce more robust classifiers.
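The block-wise fine-tuning idea described above can be sketched in code. This is a minimal illustration, not the paper's actual setup: it assumes a small PyTorch CNN organized into convolutional blocks (mimicking the block structure of architectures such as VGG), and a hypothetical helper `set_finetune_depth` that freezes all but the deepest `n_top_blocks` blocks before training.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One "block": convolution, non-linearity, spatial downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class SmallCNN(nn.Module):
    # Illustrative network: four blocks plus a classifier head.
    def __init__(self, num_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList([
            conv_block(3, 16),
            conv_block(16, 32),
            conv_block(32, 64),
            conv_block(64, 64),
        ])
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

def set_finetune_depth(model, n_top_blocks):
    # Freeze every block except the top `n_top_blocks`;
    # the classifier head always stays trainable.
    total = len(model.blocks)
    for i, block in enumerate(model.blocks):
        trainable = i >= total - n_top_blocks
        for p in block.parameters():
            p.requires_grad = trainable

model = SmallCNN()
set_finetune_depth(model, n_top_blocks=1)  # fine-tune only the deepest block

# Hand only the still-trainable parameters to the optimizer.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable_params, lr=1e-3)
```

Varying `n_top_blocks` from 1 up to the total number of blocks reproduces, in miniature, the study's sweep from shallow to full fine-tuning; frozen blocks receive no gradient updates, which is where the time and compute savings come from.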

Original language: English
Article number: 3359
Number of pages: 20
Journal: Applied Sciences (Switzerland)
Issue number: 10
Publication status: Published - 12 May 2020


  • Convolutional neural network
  • Deep learning
  • Fine-tuning
  • Image classification
  • Medical images
  • Transfer learning

