Abstract
3D scene shape retrieval is a new but important research direction in content-based 3D shape retrieval. To promote this research area, we organized two Shape Retrieval Contest (SHREC) tracks on 2D scene sketch-based and image-based 3D scene model retrieval in 2018 and 2019, respectively. In 2018, we built the first benchmark for each track, containing 2D and 3D scene data for ten (10) categories; the two benchmarks share the same 3D scene target dataset. Four and five distinct 3D scene shape retrieval methods competed in these two contests, respectively. In 2019, to measure and compare the scalability of the participating and other promising Query-by-Sketch or Query-by-Image 3D scene shape retrieval methods, we built a much larger extended benchmark for each type of retrieval, comprising thirty (30) classes, and organized two extended tracks. Two and three different 3D scene shape retrieval methods contended in these two tracks, respectively. To solicit state-of-the-art approaches, we perform a comprehensive comparison of all the above methods and an additional new retrieval method by evaluating them on the two benchmarks. The benchmarks, evaluation results and tools are publicly available at our track websites (Yuan et al., 2019 [1]; Abdul-Rashid et al., 2019 [2]; Yuan et al., 2019 [3]; Abdul-Rashid et al., 2019 [4]), and code for the evaluated methods is also available: http://github.com/3DSceneRetrieval.
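The abstract does not spell out how the retrieval methods are scored on the benchmarks. As a hedged illustration (the function name, labels, and data below are hypothetical, not taken from the paper), SHREC-style tracks commonly rank each query's retrieved models and report per-query metrics such as Nearest Neighbor, First Tier, and Second Tier:

```python
# Hypothetical sketch of common SHREC-style retrieval metrics for one query.
# All names and example data are illustrative, not from the paper.

def retrieval_metrics(ranked_labels, query_label, class_size):
    """Score a ranked list of retrieved 3D-scene labels against a query.

    ranked_labels: class labels of retrieved models, best match first
    query_label:   ground-truth class of the query sketch/image
    class_size:    number of relevant models per class in the target dataset
    """
    relevant = [lbl == query_label for lbl in ranked_labels]
    nn = 1.0 if relevant[0] else 0.0                  # Nearest Neighbor: is the top result relevant?
    ft = sum(relevant[:class_size]) / class_size      # First Tier: recall within top class_size results
    st = sum(relevant[:2 * class_size]) / class_size  # Second Tier: recall within top 2*class_size results
    return nn, ft, st

# Example: 3 relevant models per class, top-6 retrieved labels
nn, ft, st = retrieval_metrics(
    ["bedroom", "kitchen", "bedroom", "beach", "bedroom", "kitchen"],
    "bedroom", class_size=3)
```

These per-query scores would then be averaged over all queries in the benchmark to compare methods.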
Original language | English
---|---
Article number | 103070
Number of pages | 21
Journal | Computer Vision and Image Understanding
Volume | 201
DOIs | |
Status | Published - Dec. 2020
Funding
This project is supported by the University of Southern Mississippi, USA Faculty Startup Funds Award to Dr. Bo Li, and the Texas State Research Enhancement Program, USA and NSF, USA CR1-1305302 Awards to Dr. Yijuan Lu. We gratefully acknowledge NVIDIA Corporation, USA for the donation of the Titan X/Xp GPUs used in this research, and the anonymous content creators from the Internet.