TY - JOUR
T1 - Damage detection using in-domain and cross-domain transfer learning
AU - Bukhsh, Zaharah
AU - Jansen, Nils
AU - Saeed, Aaqib
PY - 2021/12
Y1 - 2021/12
N2 - We investigate the capabilities of transfer learning in the area of structural health monitoring. In particular, we are interested in damage detection for concrete structures. Typical image datasets for such problems are relatively small, calling for the transfer of learned representations from a related large-scale dataset. Past efforts at damage detection using images have mainly considered cross-domain transfer learning approaches using pre-trained IMAGENET models that are subsequently fine-tuned for the target task. However, there are rising concerns about the generalizability of IMAGENET representations for specific target domains, such as visual inspection and medical imaging. We therefore evaluate a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges. We perform comprehensive comparisons to study the impact of cross-domain and in-domain transfer, with various initialization strategies, using six publicly available visual inspection datasets. The pre-trained models are also evaluated for their ability to cope with the extremely low-data regime. We show that the combination of cross-domain and in-domain transfer consistently yields superior performance, especially with tiny datasets. We also provide visual explanations of the predictive models to enable algorithmic transparency and give experts insight into the intrinsic decision logic of typically black-box deep models.
KW - Cross-domain learning
KW - Damage detection
KW - In-domain learning
KW - Pre-trained models
KW - Transfer learning
KW - Visual inspection
UR - http://www.scopus.com/inward/record.url?scp=85110428256&partnerID=8YFLogxK
DO - 10.1007/s00521-021-06279-x
M3 - Article
SN - 0941-0643
VL - 33
SP - 16921
EP - 16936
JO - Neural Computing and Applications
JF - Neural Computing and Applications
IS - 24
ER -