Authors
Houda Bichri, Adil Chergui, Mustapha Hain
Publication date
2023/1/1
Journal
Procedia Computer Science
Volume
220
Pages
48-54
Publisher
Elsevier
Description
Training a deep neural network is expensive: it is time-consuming, requires high computational power, and usually needs a large dataset, which is not always available. These problems can be avoided by reusing the weights of pre-trained models developed for standard computer vision benchmark datasets. In transfer learning, we exploit what has been learned in one task to improve generalization in another: the weights a network has learned on 'task A', where plenty of labeled training data is available, are transferred to a new 'task B' that has little data. Through transfer learning, the knowledge of an already trained model is transferred to a different but closely related problem; for example, if we trained a simple classifier to predict whether an image contains food, we could use the model's training knowledge to identify …
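As a minimal sketch of the idea the abstract describes (not the paper's own implementation), the following Python snippet uses PyTorch and torchvision to reuse ImageNet-pretrained weights ('task A'), freeze the backbone, and train only a new classification head for a small 'task B'; the two-class food/non-food setup is a hypothetical example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet ("task A" weights).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for "task B"
# (hypothetical 2-class food / non-food problem from the abstract's example).
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated on the small task-B dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Freezing the backbone keeps training cheap on limited data; unfreezing some later layers for fine-tuning is a common variant when slightly more task-B data is available.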
Total citations