The need for quantitative image analysis in radiology is universal: computer-aided detection, segmentation for 3D volume visualization, image enhancement, pattern recognition, etc. All of these need effective, robust and preferably generic (not ad hoc) algorithms. How do we design such algorithms? A good source of inspiration is the functionality of the visual system, the most thoroughly investigated brain structure to date. In this talk we explain how we think the brain computes features in images, why the retina measures at a wide range of resolutions and how we can exploit this in multi-scale analysis, and how we can learn to understand and exploit the Gestalt laws. The visual system is strongly adaptive and self-learning. New optical recording techniques have given new insight into how the cells in the visual cortex function. We will go through these functionalities step by step.

What we discover is quite remarkable. We recognize huge banks of filters in the first stages of vision: many filters analyse each pixel of the incoming image at a range of scales, orientations and derivative orders, for each colour, and also as a function of time. Extensive feedback loops take care of optimal local settings.

We programmed these filters into the computer and were able to build many interesting applications for (bio-)medical imaging: detection of catheters at substantially reduced levels of fluoroscopy X-ray radiation dose, automatic detection of polyps in the colon, quantitative analysis of ischemic heart-ventricle deformation, breast cancer CAD, pulmonary emboli CAD, and analysis of the in-vivo microscopy images now so abundant in modern life-sciences research.
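The multi-scale derivative filter bank described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, scale choices and maximum derivative order are assumptions, and Gaussian derivative filtering (as in scale-space theory) stands in for the biological front-end.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_bank(image, scales=(1.0, 2.0, 4.0), max_order=2):
    """Apply Gaussian derivative filters at several scales.

    Returns a dict mapping (sigma, dy_order, dx_order) to the filtered
    image, covering all derivative orders with dy_order + dx_order
    up to max_order -- one response per pixel, per scale, per derivative.
    """
    responses = {}
    for sigma in scales:
        for oy in range(max_order + 1):
            for ox in range(max_order + 1 - oy):
                # order=(oy, ox) differentiates along y and x respectively
                responses[(sigma, oy, ox)] = gaussian_filter(
                    image, sigma=sigma, order=(oy, ox))
    return responses

# Filtering an impulse image returns the kernels themselves.
img = np.zeros((32, 32))
img[16, 16] = 1.0
bank = filter_bank(img)  # 6 derivative combinations x 3 scales = 18 responses
```

Orientation selectivity, colour channels and temporal derivatives would add further loops over the same structure; the key design point is that every pixel is measured by the whole bank simultaneously.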
|Title of host publication||Proceedings of the European Conference on Radiology (ECR 2010), 4-8 March 2010, Vienna, Austria|
|Publication status||Published - 2011|