Modeling and control of image processing for interventional X-ray

A.H.R. Albers

Research output: Thesis › PhD Thesis 1 (Research TU/e / Graduation TU/e)

4862 Downloads (Pure)

Abstract

This thesis presents techniques for the modeling and control of X-ray image processing tasks, aiming at the fluent execution of a multitude of diagnostic and interventional X-ray imaging applications on a multi-core computing platform. A general trend in medical imaging systems is to execute image processing on general-purpose programmable platforms instead of dedicated hardware solutions. This even holds for the newest functionality, such as image analysis, which performs its computations in a more stochastic manner rather than as stream-based processing. The trend towards more dynamic image processing and new analysis functions in X-ray imaging, combined with the desire for more programmable and flexible future systems, forms the starting point of our research.

The first contribution of this thesis is the modeling and optimization of the processing performance of stream-based medical imaging tasks, in particular image quality enhancement for interventional X-ray. We have defined rules for specifying and dividing image processing tasks for parallel processing, in order to optimize the related memory communication. Similarly, for the computing architecture, we have specified the detailed timing requirements for data storage and communication, incorporating memory-access times. In the initial situation, the computing system was fully loaded, both in memory and in computation, and the latency was strongly time-varying and could not be predicted. Our modeling has yielded a good understanding of the actual execution and its critical factors, and has eventually led to a task-splitting approach that enables a sharper system optimization with respect to essential parameters such as latency. However, the method for building the model is time-consuming, and the same holds for modeling the architecture.
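The task-splitting idea can be sketched as follows. This is an illustrative Python sketch, not the implementation used in the thesis: `enhance` is a hypothetical stand-in for an X-ray enhancement kernel, and the frame is a plain list of pixel rows.

```python
from concurrent.futures import ThreadPoolExecutor

def enhance(band):
    """Stand-in for an X-ray enhancement kernel (simple contrast stretch)."""
    flat = [p for row in band for p in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0
    return [[(p - lo) / scale for p in row] for row in band]

def split_bands(frame, n):
    """Split a frame (a list of rows) into n near-equal horizontal bands."""
    step = -(-len(frame) // n)  # ceiling division
    return [frame[i:i + step] for i in range(0, len(frame), step)]

def process_split(frame, n_splits):
    """Process the bands in parallel. Fewer rows per task shortens the
    per-task critical path (lower latency), but every extra band adds
    scheduling and memory-communication overhead -- the trade-off that
    the performance model captures."""
    bands = split_bands(frame, n_splits)
    with ThreadPoolExecutor(max_workers=n_splits) as pool:
        out = pool.map(enhance, bands)
    return [row for band in out for row in band]

frame = [[float((x + y) % 256) for x in range(64)] for y in range(64)]
result = process_split(frame, n_splits=4)
```

Varying `n_splits` against the measured memory-access times is exactly the kind of parameter sweep such a model makes predictable.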
Nevertheless, the experiment has been highly valuable: system optimization is associated with a considerable system cost, and the resulting alternative execution architectures have provided ways to reduce cost, or to add attractive functionality, currently executed on a separate additional platform, to the same system.

The second contribution of this thesis is the modeling and prediction of the computation time of feature-based medical imaging tasks, where the task complexity is non-deterministic and the resource demands fluctuate with the image content. We have focused in particular on applications for advanced diagnosis with image analysis and on motion-compensated subtraction imaging. We have demonstrated that the computation time of tasks with purely random resource usage can be successfully predicted with zero-order Markov chains, while first-order Markov chains are used if temporal correlation between the computation-time statistics exists only for short periods of time. For structural correlations between image frames, scenario-based methods have to be added to the prediction model, which is extracted from the flow graph of tasks. Alternatively, when the computational complexity depends on external (spatial) factors, the prediction model is based on spatial (look-ahead) prediction. Experimental results have shown that it is possible to predict the computation time of feature-based medical imaging applications, even if the flow graph dynamically switches between groups of tasks. We have found an average prediction accuracy between 95% and 97% for two different application scenarios, with sporadic excursions of the prediction error up to 20–30%.

The third contribution of this thesis is the design of a control system for the fluent execution of a set of applications on a multi-core platform, where some applications have a fluctuating resource demand and others have strict low-latency requirements.
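A first-order Markov-chain predictor of the kind described can be sketched as follows. This is an illustrative sketch, not the thesis's model: the state names and the quantized trace are invented for the example, and real use would quantize measured per-frame computation times into such states.

```python
from collections import defaultdict

class MarkovPredictor:
    """First-order Markov predictor over quantized computation-time states.

    It learns transition counts online from the observed state sequence and
    predicts the most frequently seen successor of the current state.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, state):
        # Record the transition prev -> state, then advance.
        if self.prev is not None:
            self.counts[self.prev][state] += 1
        self.prev = state

    def predict_next(self):
        # Most likely next state given the current one; None if unseen.
        if self.prev is None or not self.counts[self.prev]:
            return None
        successors = self.counts[self.prev]
        return max(successors, key=successors.get)

# Feed a quantized trace of per-frame computation times.
trace = ["low", "low", "high", "low", "low", "high", "low", "low"]
p = MarkovPredictor()
for s in trace:
    p.observe(s)
print(p.predict_next())  # → 'low'
```

A zero-order predictor would simply drop the conditioning on `self.prev` and return the globally most frequent state.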
As a solution, we have implemented options for scalability in the applications of three application scenarios, using task scaling, task skipping and task delaying. A Quality-of-Service (QoS) control system then maintains constant throughput and latency by dynamically switching between application quality modes. A global resource manager maintains the overall resource usage of the system, while local application controllers are responsible for resource estimation and quality control of each individual application. The global resource manager is based on a modified version of the Lagrangian relaxation algorithm, which searches for suitable combinations of quality levels for a set of concurrently running applications. The research has been validated by executing three medical imaging applications in parallel, of which two are latency-critical. The solution has high industrial relevance, because it can reduce system costs for applications that have already been released on the market. The work has resulted in a combination of interventional and diagnostic signal processing executed on a single computing device with nearly the same quality as the two separate computers used for those tasks in the original setting. Furthermore, the techniques can be employed for different classes of systems, with different cost-performance points.
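The quality-level selection by Lagrangian relaxation can be sketched as follows. This is an illustrative sketch of the standard relaxation idea, not the modified algorithm of the thesis; the (quality, cost) numbers and the bisection bounds are invented for the example.

```python
def select_levels(apps, budget, iters=50):
    """Pick one (quality, cost) level per application under a shared
    resource budget via Lagrangian relaxation: for a given multiplier
    lam, every application independently maximizes quality - lam * cost,
    and lam is tuned by bisection until the total cost fits the budget.
    """
    def best(levels, lam):
        return max(levels, key=lambda ql: ql[0] - lam * ql[1])

    # Safe fallback: the cheapest level of every application.
    choice = [min(levels, key=lambda ql: ql[1]) for levels in apps]
    lo, hi = 0.0, 100.0
    for _ in range(iters):
        lam = (lo + hi) / 2
        picks = [best(levels, lam) for levels in apps]
        cost = sum(c for _, c in picks)
        if cost <= budget:
            choice = picks
            hi = lam  # budget met: try a smaller cost penalty
        else:
            lo = lam  # over budget: penalize cost harder
    return choice

# (quality, cost) levels per application -- illustrative numbers only.
apps = [[(1, 10), (2, 25), (3, 45)],
        [(1, 15), (2, 30)],
        [(2, 20), (4, 50)]]
picks = select_levels(apps, budget=80)
```

In the role of the global resource manager, such a search runs over the quality modes reported by the local application controllers, which supply the per-mode resource estimates.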
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution:
  • Department of Electrical Engineering
Supervisors/Advisors:
  • de With, Peter H.N., Promotor
Award date: 16 Dec 2010
Place of Publication: Eindhoven
Publisher: Technische Universiteit Eindhoven
Print ISBNs: 978-90-386-2398-6
DOIs: 10.6100/IR692938
Publication status: Published - 2010


Cite this

Albers, A. H. R. (2010). Modeling and control of image processing for interventional X-ray. Eindhoven: Technische Universiteit Eindhoven. https://doi.org/10.6100/IR692938
Albers, A.H.R. Modeling and control of image processing for interventional X-ray. Eindhoven: Technische Universiteit Eindhoven, 2010. 201 p.