Understanding and Managing Noise Sources in X-ray Imaging
A guide to the science behind X-ray noise.
As a radiographer, your goal from day to day, from patient to patient, is to complete an imaging exam that provides sufficient information for an accurate clinical diagnosis – at the lowest possible dose. You have a very defined goal that you need to achieve with an imaging system that is inherently “noisy”.
In this blog, I will explain some of the noise sources in X-ray imaging, and the science behind how we manage them to produce a clear visual for medical diagnosis. As you will see, there are two main challenges. One is that the X-ray process is governed by fundamental laws of nature that we cannot alter and whose characteristics introduce unavoidable “noise”. The second is that the multiple stages in the X-ray image capture process also generate noise – but these are amenable to optimization through careful detector design.
Fundamental noise characteristics of X-rays
The generation and detection of X-rays is – by the very nature of the physical interactions that are taking place – a “random” process. There is uncertainty in the output caused by the laws of nature that control what is going on. From the thermal emission of the electrons from the X-ray tube’s filament (cathode), to the generation of the X-ray photons as these electrons are accelerated and collide with the anode, each step in the generation process is “statistical” in nature.
This means that for a given exposure level, the number of X-rays impinging on the patient is different at different locations on the patient’s body. These incident X-rays pass through the patient’s anatomy. Some are absorbed by the patient while others pass through and are absorbed by the imaging detector – another statistically controlled process with its own inherent noise characteristics. Once the X-rays have passed through the patient, the image “information” is contained in the spatial distribution of the X-ray fluence.
The patient’s anatomy has created variations in the X-ray intensity that the imaging system uses to create the image. But overlaying this image “signal” is the inherent statistical “noise” associated with the X-ray production process.
In an exam where only a small amount of radiation has been used to create the image (low exposure), the statistical noise (sometimes known as “salt and pepper” noise) is large relative to the signal variations generated by the patient’s anatomy. Its distracting visual appearance can reduce the visibility of subtle, clinically important features and lower the radiologist’s diagnostic confidence.
In contrast, when a large amount of radiation is used, the visibility of the statistical noise is very low, perhaps even imperceptible. While this can result in a visually pleasing image, it may mean that an unnecessarily high exposure level was used, resulting in overexposure to the patient.
This is the reason that most radiologists like to see “some noise” in an image. It tells them that the proper level of radiation, and hence patient exposure, was used. Radiologists are accepting of more noise for an easy-to-diagnose injury, such as a break in a major bone, versus a subtle pathology where the disease features may be difficult to discern. Always, there is a balancing act of using sufficient radiation to achieve a confident clinical diagnosis at the lowest dose possible. You likely know this guiding principle of radiation safety: “as low as reasonably achievable (ALARA)”.
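The balancing act above comes from Poisson counting statistics: the relative noise falls only as the square root of the exposure. A minimal sketch in Python (the photon counts are illustrative, not clinical values):

```python
import numpy as np

# Poisson ("quantum") noise: for a mean of N photons per pixel, the
# fluctuation is sqrt(N), so the *relative* noise falls as 1/sqrt(N).
def relative_noise(mean_photons: float) -> float:
    return 1.0 / np.sqrt(mean_photons)

rng = np.random.default_rng(seed=0)

# Simulate a uniform ("flat field") exposure at two dose levels.
low_dose = rng.poisson(lam=100, size=100_000)      # low exposure
high_dose = rng.poisson(lam=10_000, size=100_000)  # 100x the exposure

print(low_dose.std() / low_dose.mean())    # ~0.10, as 1/sqrt(100) predicts
print(high_dose.std() / high_dose.mean())  # ~0.01: 100x dose, only 10x less noise
```

Note the diminishing returns: one hundred times the dose buys only a tenfold reduction in relative noise, which is why simply raising exposure is a poor way to chase a noise-free image.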
Up to this point, I have been discussing noise associated with the statistical nature of X-ray production and their subsequent absorption by the patient. These processes are controlled by fundamental laws of nature and, for any given X-ray acquisition, they determine the fundamental limit on image quality. Now I’ll explain the role of the detector system.
The detector’s role in managing noise in medical imaging
The design of a detector determines how close the final displayed image comes to that fundamental limit on image quality described earlier – a limit set, once again, by the laws of nature. It is interesting to note that the fundamental theories describing how image information is transferred through the different stages of a detector were developed by Kodak/Carestream scientists in the late 1980s (1).
The digital detector in an X-ray imaging system takes the differences in X-ray intensity that are the result of variations in patient X-ray absorption. It turns them into an electrical signal that can be used to display a visual representation of the patient anatomy.
In Carestream systems, the first step in this process is to convert the X-ray energy into light energy. This is accomplished by a material known as a “phosphor”, which absorbs the X-ray photons and generates light whose intensity is related to the number of X-rays absorbed at any given position. The intensity of this light is then measured by a two-dimensional array of pixels that convert the light to an electrical signal. The magnitude of this electrical signal is proportional to the light intensity hitting the pixel, which in turn reflects the X-ray intensity at that location in space, as determined by the X-ray absorption characteristics of the patient. The electrical charge in each pixel is measured, “digitized”, and stored in a computer, where it controls the brightness of the corresponding pixel on the display.
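This conversion chain can be pictured as a series of gain stages. The sketch below is a deliberately simplified, noise-free model with made-up gain values; in a real detector each stage is statistical and contributes its own noise, as discussed next:

```python
def detector_chain(absorbed_xrays: float) -> float:
    """Toy noise-free signal chain with made-up gain values: absorbed X-rays
    -> light photons -> light collected by the pixel -> electrons -> digits."""
    light = absorbed_xrays * 500.0   # light photons per absorbed X-ray
    collected = light * 0.6          # fraction of light reaching the pixel
    electrons = collected * 0.8      # photodiode conversion efficiency
    return electrons * 0.01          # ADC gain: electrons -> digital value

# The chain is linear, so the digital value tracks the local X-ray intensity:
print(detector_chain(100))  # some digital value
print(detector_chain(200))  # twice the X-rays -> twice the signal
```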
Like the X-ray generation steps, the physical processes at work in each of these transitions are statistical in nature and as such introduce additional amounts of noise into the image. The goal of developing an “optimized” X-ray detector is to try to minimize the amount of additional noise each transition injects into the process. Proper detector design that takes into account the details of the physical processes involved can have a significant impact on the quality of the final image.
The first step is to ensure that the detector absorbs as many of the incident X-rays as possible. This is achieved by using a phosphor containing heavy elements that absorb X-rays with high efficiency. Carestream detectors use either a cesium iodide or a gadolinium oxysulfide phosphor. Both materials have high X-ray absorption and are efficient at converting the X-ray energy into light energy. The thicker the layer of these materials, the more X-rays it will absorb – which is beneficial.
However, there is a trade-off between the absorption gained from a thicker phosphor and the sharpness (or spatial resolution) achieved in the final image. The thicker the phosphor, the more opportunity there is for the light generated by the absorbed X-rays to spread laterally, reducing the perceived sharpness of the resulting image.
Cesium iodide has an advantageous property: it can be grown in a layer of thin needle-like structures that reduce the lateral spreading of the light compared to a comparable thickness of gadolinium oxysulfide. So for a given X-ray absorption, or phosphor thickness, an image from a detector using a cesium iodide phosphor will appear “sharper” than one using a gadolinium oxysulfide phosphor. There are a number of aspects of this process that must be kept in mind when optimizing the design of this crucial component of an X-ray detector.
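The absorption side of this trade-off follows the Beer-Lambert law: the absorbed fraction is 1 − e^(−μt) for attenuation coefficient μ and thickness t. A small illustration (the coefficient is hypothetical, and the comment about blur is a qualitative stand-in for the lateral light spread that grows with thickness):

```python
import math

def absorbed_fraction(mu_per_mm: float, thickness_mm: float) -> float:
    """Beer-Lambert law: fraction of incident X-rays absorbed by the layer."""
    return 1.0 - math.exp(-mu_per_mm * thickness_mm)

MU = 5.0  # hypothetical linear attenuation coefficient (1/mm) for a phosphor

for t in (0.1, 0.2, 0.4, 0.8):
    # Absorption rises with thickness, but so does lateral light spread,
    # which degrades sharpness in an unstructured phosphor.
    print(f"thickness {t} mm -> absorbed fraction {absorbed_fraction(MU, t):.2f}")
```

Each doubling of thickness buys progressively less absorption (the exponential saturates toward 1) while the blur penalty keeps growing, which is why structured phosphors such as needle-grown cesium iodide are so valuable.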
Once the light has been generated, it travels through the phosphor and is absorbed by the underlying pixels. The transport of the light through the phosphor and its absorption by the pixel is yet another statistical process which introduces additional uncertainty, or noise, into the image information. The final stage is the readout and measurement of the electrical charge generated by the incident light. Yet another opportunity to add noise into the image.
The X-ray process and the multiple processes in X-ray image capture each generate noise that must be managed.
Scientists characterize the amount of “extra” noise that the detector introduces into the image, over and above the noise inherent in the incident X-ray fluence, using the Detective Quantum Efficiency (DQE) of the detector. This is the ratio of the squared signal-to-noise ratio in the final image to the squared signal-to-noise ratio present in the incident X-ray fluence. A detector always adds some amount of noise, so the DQE is always less than 1. (DQE is typically expressed as a percentage, so a detector’s DQE will always be <100%.)
Care should be taken when comparing different system DQEs. It is important that this metric is measured consistently and in accordance with internationally recognized standards (IEC 62220-1). When measured and reported accurately, it gives important information on how efficiently a detector uses the incident X-rays in creating a high quality image.
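In formula terms, DQE = SNR_out² / SNR_in², where an ideal (Poisson-limited) beam of N photons has SNR_in = √N. A small sketch with illustrative numbers:

```python
import numpy as np

def dqe(snr_out: float, snr_in: float) -> float:
    """Detective quantum efficiency: DQE = SNR_out^2 / SNR_in^2."""
    return (snr_out / snr_in) ** 2

# An ideal (Poisson-limited) beam of N photons has SNR_in = sqrt(N).
n_photons = 10_000
snr_in = np.sqrt(n_photons)  # = 100

# Suppose a detector (hypothetical number) delivers SNR_out = 80.
snr_out = 80.0
print(f"DQE = {dqe(snr_out, snr_in):.0%}")  # DQE = 64%, always below 100%
```

Equivalently, this detector produces the same image noise an ideal detector would achieve with only 64% of the dose, which is why DQE is the key figure of merit for dose efficiency.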
Scattered radiation: another noise source in X-ray imaging
Yet another noise source with the potential to degrade image quality in diagnostic radiology is scattered radiation: X-rays that are redirected within the patient’s body as the incident beam interacts with tissue. These scattered X-rays can create a significant amount of signal, and additional noise, in the detector – and this “signal” carries no useful information. It manifests as a haze or glare in the final image that can reduce contrast and detail, creating the potential for reduced visibility of vasculature, infiltrates and other pathology. (2)
Scatter increases when imaging thicker areas of the body – such as the chest. Traditional methods of reducing scatter are collimation, anti-scatter grids, and/or utilizing an air-gap.
Collimation – the use of beam-limiting devices – decreases the amount of unnecessary radiation to the patient and the scatter it produces. Many modern imaging systems come equipped with automatic collimation, in which the system senses the size of the receptor being used and collimates to its outer edges. This may be acceptable when imaging a hand on an 18 x 24 cm cassette, which the anatomy will fill entirely. With a larger cassette, however, this leaves areas of unattenuated beam that promote scatter production and decrease contrast in the image.
The recommendation is always to collimate close to the skin line of the patient, whenever possible. When you’re using digital imaging equipment, it will also help the image processing software better identify the correct region of interest for optimal image processing. Radiographers need to be aware of the effects that scatter radiation will have on image quality, and collimate appropriately to reduce these effects.
The most common method of reducing scatter is the anti-scatter grid. A physical anti-scatter grid is like a Venetian blind in the open position: parallel strips made of lead alternating with strips composed of a radiolucent material. The technologist situates the grid between the patient and the detector. The primary beam, its path parallel to the radiolucent strips, passes freely between them, while scattered radiation is largely blocked by the angled lead strips before it can reach the detector and compromise the image. This helps preserve clarity and diagnostic value.
Grids can be highly effective in reducing scatter. However, they typically require a higher dose of radiation exposure, as the X-ray beam is attenuated by the lead strips. Also, grids are heavy and bulky. This can lead to misalignment during positioning, which can reduce the grid’s efficacy.
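Both effects – the contrast gain and the dose penalty (the “Bucky factor”) – can be illustrated with a toy two-component model. All transmission values and the scatter-to-primary ratio below are made up for illustration only:

```python
def contrast_factor(primary: float, scatter: float) -> float:
    """Fraction of the detected signal that carries anatomical contrast."""
    return primary / (primary + scatter)

def through_grid(primary, scatter, t_primary=0.7, t_scatter=0.15):
    """Apply illustrative grid transmissions to primary and scattered rays."""
    return primary * t_primary, scatter * t_scatter

# Hypothetical bedside chest exam: scatter is 3x the primary signal.
P, S = 1.0, 3.0
Pg, Sg = through_grid(P, S)

print(f"no grid:   contrast factor {contrast_factor(P, S):.2f}")    # 0.25
print(f"with grid: contrast factor {contrast_factor(Pg, Sg):.2f}")  # 0.61
# Bucky factor: how much the exposure must rise to restore detector signal.
print(f"Bucky factor {(P + S) / (Pg + Sg):.1f}")                    # 3.5
```

In this toy model the grid more than doubles the contrast factor, but the exposure must rise several-fold to return the detector signal to its no-grid level – exactly the dose trade-off described above.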
Also, from the perspective of the radiographic technologist, using grids for portable exams involves a variety of time-consuming workflow implications. These include attaching and detaching the add-on grids to the X-ray cassettes; the stringent requirements to properly position and align the X-ray source relative to the cassette behind the patient to avoid grid cutoff; the increased probability that repeated exposures will be required due to grid-cutoff artifact; and more. Because they are inconvenient to handle, radiologic technologists might be discouraged from using grids altogether in some situations.
Managing scattered radiation – without a grid
In 2017, Carestream introduced software that reduces the damaging effects of scatter radiation in an image – helping to improve the contrast of the image when a physical anti-scatter grid is not used. Carestream’s software, called SmartGrid, uses an advanced algorithm that estimates low-frequency scatter distributed throughout an image and reduces it.
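The general idea of software scatter estimation can be sketched as follows. To be clear, this is a generic low-pass-subtraction toy, not Carestream’s actual SmartGrid algorithm; the blur size and scatter fraction are made-up parameters:

```python
import numpy as np

def low_pass(image: np.ndarray, size: int = 31) -> np.ndarray:
    """Separable box blur: keeps only the smooth, low-frequency content."""
    kernel = np.ones(size) / size
    rows = np.apply_along_axis(np.convolve, 1, image, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def suppress_scatter(image: np.ndarray, scatter_fraction: float = 0.4) -> np.ndarray:
    """Toy scatter reduction (NOT the actual SmartGrid algorithm): treat a
    fraction of the smooth background as scatter haze and subtract it."""
    return np.clip(image - scatter_fraction * low_pass(image), 0.0, None)
```

Because scatter varies slowly across the image while anatomical detail varies quickly, subtracting part of the low-frequency background removes haze while leaving fine structure largely intact.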
SmartGrid is available for use with all of Carestream’s modalities, and also with mobile systems from other manufacturers upgraded to digital with Carestream’s DRX Retrofit Kits. This makes it a highly effective solution to the issue of compromised image quality in mobile chest X-ray exams.
SmartGrid processing provides image quality comparable to images acquired with an anti-scatter grid, while lowering radiation dose in bedside chest imaging. The benefits of grid-like image quality without the use of an anti-scatter grid can lead to improved workflow and ease of imaging for radiographers, producing a win-win for a busy hospital.
We plan to extend scatter suppression to all body parts in a future release of our ImageView software.
Processing software is the diagnostic differentiator
Now that we’ve captured the image in our detector, let’s take a look at the processing software. Candidly, the capture process in detectors is somewhat standard. The real differentiator for diagnostic confidence is the image processing software. If your processing software is not optimal – and it does not process the image well for human visualization – it will impede the clinical diagnosis. The high quality of our image processing software is a strong differentiator for Carestream.
The visual appearance of noise varies spatially in the image, corresponding to regions of relatively higher and lower exposure. To accommodate this variation, our EVP Plus software applies greater noise suppression to regions of lower exposure.
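The principle – smooth harder where the exposure is lower, because that is where the relative noise is highest – can be sketched with a toy filter. This illustrates the idea only; it is not the EVP Plus algorithm:

```python
import numpy as np

def local_mean(image: np.ndarray, i: int, j: int, half: int = 2) -> float:
    """Mean over a (2*half+1)^2 window, clipped at the image borders."""
    return image[max(i - half, 0):i + half + 1,
                 max(j - half, 0):j + half + 1].mean()

def adaptive_smooth(image: np.ndarray, exposure: np.ndarray) -> np.ndarray:
    """Toy exposure-adaptive noise suppression (NOT the EVP Plus algorithm):
    pixels in below-median-exposure regions are replaced by a local average;
    pixels in well-exposed regions pass through unchanged."""
    out = image.astype(float).copy()
    threshold = np.median(exposure)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if exposure[i, j] < threshold:  # noisier region: smooth harder
                out[i, j] = local_mean(image, i, j)
    return out
```

A production algorithm would vary the suppression smoothly with the estimated local noise rather than using a hard threshold, but the spatially adaptive idea is the same.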
Our EVP Plus software is built on our Eclipse image processing engine. Eclipse uses powerful, proprietary algorithms to provide automated and robust image processing that delivers superb image quality and consistent presentation. It is the foundation that enables numerous advanced image processing features.
For example, Dual Energy and Digital Tomosynthesis are our latest innovations, providing enhanced image quality, improved visualization of obscured pathology, and improved dose efficiency.
Dual Energy uses patented differential filtration to subtract rapidly acquired low- and high-energy acquisitions for the generation of bone and soft tissue images. What does that mean? It means that a radiologist can actually ‘subtract’ images that are captured to see what lies underneath. For example, overlying rib structures can be subtracted to give increased visualization of abnormalities of the lung – without added dose.
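The textbook form of dual-energy subtraction is a weighted difference of log images, with the weight chosen so one material’s contribution cancels. The sketch below uses made-up attenuation coefficients and is a generic illustration, not Carestream’s patented method:

```python
import numpy as np

def dual_energy_subtract(low_kv, high_kv, w):
    """Weighted log subtraction: log(high) - w*log(low). Choosing w to match
    one material's attenuation ratio cancels that material from the result."""
    return np.log(high_kv) - w * np.log(low_kv)

# Made-up attenuation coefficients for bone and soft tissue at two energies.
mu_bone = {"low": 1.0, "high": 0.5}
mu_soft = {"low": 0.4, "high": 0.3}

def intensity(t_bone, t_soft, e):
    """Transmitted intensity through the two materials (Beer-Lambert, I0 = 1)."""
    return np.exp(-(mu_bone[e] * t_bone + mu_soft[e] * t_soft))

w = mu_bone["high"] / mu_bone["low"]  # weight that cancels bone

# Same soft tissue thickness, very different bone thickness...
a = dual_energy_subtract(intensity(2.0, 1.0, "low"), intensity(2.0, 1.0, "high"), w)
b = dual_energy_subtract(intensity(5.0, 1.0, "low"), intensity(5.0, 1.0, "high"), w)
print(np.isclose(a, b))  # True: bone no longer affects the soft-tissue image
```

Choosing the weight from the soft-tissue coefficients instead yields the complementary bone image.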
Digital Tomosynthesis is a technique where a finite number of projection images are acquired at varying orientations of the X-ray tube, patient and detector. These projection images are then reconstructed into a series of coronal slices through the object that are parallel to the detector. And how does that improve diagnosis? It removes the clutter associated with overlying and underlying anatomy.
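The simplest classical reconstruction, “shift-and-add”, conveys the idea: shift each projection so a chosen depth plane lines up, then average, so that plane comes into focus while other depths smear out. (Commercial systems use more sophisticated reconstruction; this toy is for intuition only.)

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Align each projection for a chosen depth plane, then average."""
    aligned = [np.roll(p, s, axis=1) for p, s in zip(projections, shifts)]
    return np.mean(aligned, axis=0)

# Three views of two point-like structures at different depths. The structure
# at our plane of interest shifts by 1 pixel per view (parallax); the other
# structure, at a different depth, shows no parallax in this toy setup.
proj = [np.zeros((1, 10)) for _ in range(3)]
for k in range(3):
    proj[k][0, 3 + k] = 1.0  # in-plane structure, moving with parallax
    proj[k][0, 7] = 1.0      # out-of-plane structure

recon = shift_and_add(proj, shifts=[0, -1, -2])
print(recon[0, 3])  # 1.0: all three views align -> in focus
print(recon[0, 7])  # only 1/3: the out-of-plane structure smears out
```

Repeating the reconstruction with different shift sets brings each depth plane into focus in turn, producing the stack of coronal slices.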
I hope this guide has helped you understand better the science – and the art – of managing noise sources in diagnostic imaging.
John Yorkston, PhD, is a Senior Researcher in Clinical Applications Research at Carestream Health. He has 30 years of experience in medical imaging research.
(1) Detective Quantum Efficiency of Imaging Systems with Amplifying and Scattering Mechanisms – National Center for Biotechnology Information
(2) Perry Sprawls, PhD, Scattered Radiation and Contrast