Techniques for the determination of the optimal performance of high-resolution computerised tomography

Abstract: Techniques to determine the optimal performance of high-resolution computerised tomography (CT) have been studied, that is, the optimal data-collection parameter settings for the imaging task at hand and its detectability limits.

Generally, CT is an X-ray based non-destructive testing method that was developed for medical purposes in the seventies and introduced for industrial applications in the latter half of the eighties. CT produces maps of the X-ray linear attenuation coefficient of an object's interior, presented either as cross-section images (two-dimensional CT) or as volume information (three-dimensional CT). The linear attenuation coefficient quantifies the absorption and scattering of X-rays per unit path length as the beam propagates through an object. It depends on the X-ray photon energy as well as on the object's density and atomic composition.

Most industrial CT systems are equipped with conventional X-ray tubes that produce photons with an energy distribution, that is, a spectrum. Consequently, the effective linear attenuation coefficient of an object, as shown by CT, depends on the full energy spectrum, on how it changes as the beam propagates through the object, and on how it interacts with the detector. To emphasise contrasts in the final CT image caused by density or compositional variations in the object investigated, the energy spectrum has to be chosen and shaped with filters so that the differences in effective linear attenuation coefficient increase. However, finding optimal CT parameter settings empirically is tedious, particularly for industrial CT, because of the wide range of materials, applications, and imaging tasks of interest.

The main result of this work is a simulation model of the data-collection process that makes it possible to determine the operator parameter settings that maximise the detectability for an arbitrary imaging task and to predict its detectability limits.
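The dependence of the effective attenuation coefficient on the full spectrum can be illustrated with a short sketch (not taken from the dissertation; the spectrum weights and attenuation values below are invented for demonstration). It applies the Beer-Lambert law per energy bin and shows the beam-hardening effect: the effective coefficient of a polyenergetic beam decreases with penetration depth as low-energy photons are preferentially absorbed.

```python
import numpy as np

# Hypothetical four-bin spectrum; values are illustrative only.
energies = np.array([40.0, 60.0, 80.0, 100.0])   # photon energy, keV
weights  = np.array([0.2, 0.4, 0.3, 0.1])        # relative photon fluence
mu       = np.array([0.35, 0.21, 0.17, 0.15])    # mu(E), cm^-1, falls with E

def effective_mu(thickness_cm):
    """Effective linear attenuation coefficient over a given path length.

    Beer-Lambert per energy bin: I(t) = sum_E w(E) * exp(-mu(E) * t),
    then mu_eff = -ln(I / I0) / t.
    """
    transmitted = np.sum(weights * np.exp(-mu * thickness_cm))
    return -np.log(transmitted / weights.sum()) / thickness_cm

# Beam hardening: the spectrum hardens with depth, so mu_eff drops.
print(effective_mu(1.0))    # thin object: closer to the fluence-weighted mean
print(effective_mu(10.0))   # thick object: dominated by the hardest photons
```

This is why the abstract notes that the spectrum must be shaped with filters: pre-hardening the beam changes the effective coefficients, and hence the contrast, seen in the reconstructed image.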
The simulation model can also be used to correct for beam-hardening artefacts in CT images.

There are several important partial results: (1) a definition of the quality of the CT data in relation to the imaging task, including a model of the X-ray paths and how it is used to predict the optimal performance; (2) a model and method to determine how the information about the imaged object transfers from the detector entrance screen through the detector chain to CT projection data and further on to the final CT image, without detailed knowledge of each stage in the detector chain; (3) a model and method for determining the total unsharpness of the CT system, in terms of the modulation transfer function as a function of spatial frequency; and (4) the commonly used contrast-detail curve, together with the limiting perception factor for the detection of small details, is developed here into the more useful object-detail-detectability curve.

It has been shown that a polyenergetic model is needed to model the data-collection process for CT. Such a model consists of the complete X-ray energy spectra produced by the X-ray source used and a detector response model describing how the X-rays impart energy to the detector entrance screen. Absolute X-ray spectra were measured using a Compton spectrometer. The detector response was determined using Monte Carlo photon transport simulations. It is further shown that X-ray source leakage radiation increases image noise when the generated X-ray spectrum contains photons with energies above the K-edges of the enclosure wall material.
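Partial result (3) characterises total unsharpness via the modulation transfer function (MTF). A common way to obtain an MTF, sketched below under the assumption of a measured line spread function (here a synthetic Gaussian stands in for data; this is an illustration, not the dissertation's specific method), is to take the normalised magnitude of its Fourier transform.

```python
import numpy as np

dx = 0.01                                  # sampling step, mm
x = np.arange(-5.0, 5.0, dx)               # spatial axis, mm
sigma = 0.2                                # mm, stand-in system unsharpness
lsf = np.exp(-x**2 / (2 * sigma**2))       # synthetic line spread function

# MTF = normalised magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                              # normalise so that MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=dx)      # spatial frequency, cycles/mm

# The MTF falls off with spatial frequency; the frequency at which it
# drops below some threshold (10 % here) is one common figure for the
# system's limiting resolution.
limiting = freqs[np.argmax(mtf < 0.1)]
print(limiting)
```

A sharper system has a narrower LSF and therefore an MTF that extends to higher spatial frequencies, which is what makes the MTF a convenient single descriptor of the combined unsharpness of focal spot, geometry, and detector chain.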
