This thesis presents techniques for modeling and control of X-ray image processing tasks, aiming at the fluent execution of a multitude of diagnostic and interventional X-ray imaging applications on a multi-core computing platform. A general trend in medical imaging systems is to execute image processing on general-purpose programmable platforms instead of dedicated hardware solutions. This even holds for the newest functionality, such as image analysis, which performs its computations in a more stochastic manner rather than as stream-based processing. The trend towards more dynamic image processing and new analysis functions in X-ray imaging, combined with the desire for more programmable and flexible future systems, forms the starting point of our research.

The first contribution of this thesis is the modeling and optimization of the processing performance of stream-based medical imaging tasks, in particular image quality enhancement for interventional X-ray. We have defined rules for specifying and dividing image processing tasks for parallel processing, in order to optimize the related memory communication. Similarly, for the computing architecture, we have specified the detailed timing requirements for data storage and communication, incorporating memory-access times. In the initial situation, the computing system was fully loaded, both in terms of memory and computation, and the latency was strongly time-varying and could not be predicted. Our modeling has yielded a good understanding of the actual execution and its critical factors. The model eventually led to a task-splitting approach that enables sharper system optimization with respect to essential parameters such as latency. However, building the model is time-consuming, and the same holds for modeling the architecture.
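The strip-wise task splitting for parallel stream processing can be illustrated with a minimal sketch. The strip count, the placeholder enhancement filter, and the one-row halo (so that strip borders see their neighbouring rows) are illustrative assumptions for this example, not the pipeline from the thesis.

```python
# Sketch: split one frame row-wise into strips, enhance the strips in
# parallel, and stitch the results back together. All parameters are
# illustrative; the thesis pipeline and filters are not reproduced here.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def enhance(strip):
    # Placeholder per-strip enhancement: a simple 1-2-1 vertical smoothing.
    out = strip.astype(np.float32)
    out[1:-1] = 0.25 * strip[:-2] + 0.5 * strip[1:-1] + 0.25 * strip[2:]
    return out

def process_frame(frame, n_strips=4, halo=1):
    """Split `frame` row-wise with a `halo`-row overlap so strip borders
    see their neighbours, run the strips in parallel, and stitch the
    per-strip results back into a full frame."""
    h = frame.shape[0]
    bounds = np.linspace(0, h, n_strips + 1, dtype=int)
    # Each job: (extended start, extended end, kept start, kept end).
    jobs = [(max(a - halo, 0), min(b + halo, h), a, b)
            for a, b in zip(bounds[:-1], bounds[1:])]
    with ThreadPoolExecutor(max_workers=n_strips) as pool:
        parts = pool.map(
            lambda j: enhance(frame[j[0]:j[1]])[j[2] - j[0]:j[3] - j[0]],
            jobs)
    return np.vstack(list(parts))
```

Because each strip is extended with the halo rows, the stitched parallel result matches a serial run of the same filter over the whole frame, while the per-strip latency drops with the number of cores.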
Nevertheless, the experiment has been highly valuable: system optimization is associated with a considerable system cost, and the resulting alternative execution architectures offer ways to reduce cost or to add attractive functionality to the same system, functionality that is currently executed on a separate platform.

The second contribution of this thesis is the modeling and prediction of the computation time of feature-based medical imaging tasks, where the task complexity is non-deterministic and the resource demands fluctuate with the image content. We have focused in particular on applications for advanced diagnosis with image analysis and on motion-compensated subtraction imaging. We have demonstrated that the computation time of tasks with purely random resource usage can be successfully predicted with zero-order Markov chains. First-order Markov chains are used when temporal correlation between the computation-time statistics exists only for short periods of time. For structural correlations between image frames, scenario-based methods, extracted from the flow graph of tasks, have to be added to the prediction model. Alternatively, when the computational complexity depends on external (spatial) factors, the prediction model is based on spatial (look-ahead) prediction. Experimental results have shown that it is possible to predict the computation time of feature-based medical imaging applications, even if the flow graph dynamically switches between groups of tasks. For two different application scenarios, we have found an average prediction accuracy between 95% and 97%, with sporadic excursions of the prediction error up to 20–30%.

The third contribution of this thesis is the design of a control system for the fluent execution of a set of applications on a multi-core platform, where some of the applications have a fluctuating resource demand and others have strict low-latency requirements.
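The first-order Markov-chain prediction described for the second contribution can be sketched as follows. The bin edges, the number of states, the Laplace-smoothed transition counts, and the expected-value prediction rule are assumptions made for this illustration, not the predictor from the thesis.

```python
# Sketch of first-order Markov-chain prediction of per-frame computation
# time. Quantization and prediction rule are illustrative assumptions.
import numpy as np

class MarkovTimePredictor:
    """Predict the next frame's computation time from the current one.

    Observed times are quantized into `n_states` equal-width bins; a
    transition-count matrix is updated online, and the prediction is the
    expected bin centre given the current state (first-order chain)."""

    def __init__(self, t_min, t_max, n_states=8):
        self.edges = np.linspace(t_min, t_max, n_states + 1)
        self.centres = 0.5 * (self.edges[:-1] + self.edges[1:])
        self.counts = np.ones((n_states, n_states))  # Laplace smoothing
        self.prev = None

    def _state(self, t):
        return int(np.clip(np.searchsorted(self.edges, t) - 1,
                           0, len(self.centres) - 1))

    def observe(self, t):
        # Update the transition counts with the measured time of a frame.
        s = self._state(t)
        if self.prev is not None:
            self.counts[self.prev, s] += 1
        self.prev = s

    def predict(self):
        # Expected computation time of the next frame, conditioned on
        # the state of the most recent frame.
        p = self.counts[self.prev] / self.counts[self.prev].sum()
        return float(p @ self.centres)
```

A zero-order predictor is the degenerate case that ignores the previous state and uses the unconditional distribution of observed times.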
As a possible solution, we have implemented scalability options for three application scenarios, using task scaling, task skipping and task delaying. A Quality-of-Service (QoS) control system then maintains constant throughput and latency by dynamically switching between application quality modes. A global resource manager maintains the overall resource usage of the system, while local application controllers are responsible for resource estimation and quality control of each individual application. The global resource manager is based on a modified version of the Lagrangian relaxation algorithm, which searches for suitable combinations of quality levels for a set of concurrently running applications. The research has been validated by executing three medical imaging applications in parallel, two of which are latency-critical. The solution has high industrial relevance, because it can help reduce system costs for applications that are already released on the market. The work has resulted in a solution in which interventional and diagnostic signal processing are executed on a single computing device with nearly the same quality as the two separate computers performing those tasks in the original setting. Furthermore, the techniques can be employed for different classes of systems, with different cost-performance points.
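The Lagrangian-relaxation search for quality-level combinations can be sketched as below. The per-application lists of (resource cost, utility) levels and the bisection on the multiplier are assumptions for this illustration, not the modified algorithm of the thesis; like any Lagrangian relaxation of a multiple-choice knapsack, it may leave a small duality gap rather than guarantee the optimum.

```python
# Sketch of Lagrangian-relaxation quality-level selection for a global
# resource manager. Level data and bisection scheme are illustrative.

def select_levels(apps, budget, iters=50):
    """apps: one list of (cost, utility) quality levels per application.
    Returns one chosen level index per application such that the total
    cost fits the budget (for a feasible multiplier)."""
    def best(lmbda):
        # The relaxed problem decomposes per application: each picks the
        # level maximizing utility - lambda * cost independently.
        choice = [max(range(len(levels)),
                      key=lambda i: levels[i][1] - lmbda * levels[i][0])
                  for levels in apps]
        cost = sum(apps[a][i][0] for a, i in enumerate(choice))
        return choice, cost

    lo, hi = 0.0, 1e6  # lambda = 0 favours quality; large lambda, low cost
    choice, _ = best(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        c, total = best(mid)
        if total <= budget:
            choice, hi = c, mid  # feasible: try a smaller cost penalty
        else:
            lo = mid
    return choice
```

Because the relaxed problem decomposes per application, each bisection step costs only the sum of the per-application level counts, which keeps the search cheap enough to rerun whenever the resource estimates of the local controllers change.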
Qualification: Doctor of Philosophy
Date of award: 16 Dec 2010
Place of publication: Eindhoven
Status: Published - 2010