TY - GEN
T1 - NeurIPS’22 Cross-Domain MetaDL Competition: Results and Lessons Learned
T2 - 36th Annual Conference on Neural Information Processing Systems, NeurIPS 2022
AU - Carrión-Ojeda, Dustin
AU - Alam, Mahbubul
AU - Escalera, Sergio
AU - Farahat, Ahmed
AU - Ghosh, Dipanjan
AU - Diaz, Teresa Gonzalez
AU - Gupta, Chetan
AU - Guyon, Isabelle
AU - Ky, Joël Roman
AU - Lee, Xian Yeow
AU - Liu, Xin
AU - Mohr, Felix
AU - Nguyen, Manh Hung
AU - Pintelas, Emmanuel
AU - Roth, Stefan
AU - Schaub-Meyer, Simone
AU - Sun, Haozhe
AU - Ullah, Ihsan
AU - Vanschoren, Joaquin
AU - Vidyaratne, Lasitha
AU - Wu, Jiamin
AU - Yin, Xiaotian
PY - 2023
Y1 - 2023
N2 - Deep neural networks have demonstrated the ability to outperform humans in multiple tasks, but they often require substantial amounts of data and computational resources, which may be limited in certain fields. Meta-learning seeks to overcome these challenges by leveraging past task experience to solve new tasks efficiently, achieving better performance with limited training data and modest computational resources. To further advance the ChaLearn MetaDL competition series, we organized the Cross-Domain MetaDL Challenge for NeurIPS’22, which aimed to solve “any-way” and “any-shot” tasks from 10 domains through cross-domain meta-learning. In this paper, authored collaboratively by the competition organizers, top-ranked participants, and external collaborators, we describe the technical aspects of the competition, the baseline methods, and the top-ranked approaches that have been open-sourced. We also provide a detailed analysis of the competition results. Lessons learned from this competition include the critical role of pre-trained backbones, the necessity of preventing overfitting, and the significance of combining data augmentation or domain adaptation techniques with additional optimizations to improve performance.
KW - Competition
KW - Cross-Domain Meta-Learning
KW - Few-Shot Learning
KW - Image Classification
UR - https://www.scopus.com/pages/publications/85179122533
M3 - Conference contribution
AN - SCOPUS:85179122533
T3 - Proceedings of Machine Learning Research
SP - 50
EP - 72
BT - Proceedings of the 36th Annual Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Ciccone, Marco
A2 - Stolovitzky, Gustavo
A2 - Albrecht, Jacob
PB - PMLR
Y2 - 28 November 2022 through 9 December 2022
ER -