Verifying Film Performance at the End-User Stage

Discussion and Methods

The practice of using a sensitometer and densitometer to monitor film "speed" or "Mid-Density," "Contrast" or "Density Difference," and gross fog is well known to those involved with quality control procedures for general radiographic and especially mammography film processing. While monitoring these processed film density numbers ensures a degree of film processing consistency, how do you know if "maximum film performance" is being achieved? Being able to reproduce certain densities in a processor monitoring program doesn't necessarily mean that the film is performing optimally; it just means the processing conditions are stable.
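As an aside, the three monitoring indices named above are typically read off a processed 21-step sensitometric strip. The sketch below shows one common way they are derived; the step densities, the target values (1.00 above base-plus-fog for Mid-Density, steps nearest 2.20 and 0.45 for Density Difference), and the function names are illustrative conventions, not any particular QC program's specification.

```python
# Illustrative derivation of processor-monitoring indices from a
# sensitometric strip. Densities and target conventions are examples only.

def closest_step(densities, target):
    """Index of the step whose density is closest to the target density."""
    return min(range(len(densities)), key=lambda i: abs(densities[i] - target))

def qc_indices(densities, base_plus_fog):
    """Return (Mid-Density, Density Difference, Base+Fog) for one strip."""
    # Mid-Density: step closest to 1.00 above base-plus-fog.
    md = densities[closest_step(densities, base_plus_fog + 1.00)]
    # Density Difference: step nearest 2.20 minus step nearest 0.45.
    dd = (densities[closest_step(densities, 2.20)]
          - densities[closest_step(densities, 0.45)])
    return md, dd, base_plus_fog

# Invented step densities for a short strip, lowest exposure first.
densities = [0.20, 0.30, 0.45, 0.70, 1.00, 1.20, 1.60, 2.00, 2.20, 2.60, 3.00]

md, dd, bf = qc_indices(densities, base_plus_fog=0.20)
print(f"MD={md:.2f}  DD={dd:.2f}  B+F={bf:.2f}")
```

Plotting these three numbers daily against control limits is what "processor monitoring" amounts to; as the text notes, stable numbers demonstrate consistency, not optimal film performance.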

This document discusses testing methods which are NOT commonly performed today. It is offered to illustrate the challenges involved when and if you wish to "verify" the performance of a given manufacturer's film or screen-film system with respect to common criteria such as average gradient and "speed," among others.

One film performance parameter that may be used in diagnostic imaging environments is a measurement of film contrast called average gradient. Average gradient numbers derived by the film manufacturer(s) are often published by those manufacturers. Owners of sensitometers and densitometers designed for processor control frequently use these devices to derive the average gradient of the films they use in their environments. The natural tendency is to compare their average gradient number with a number published by the film manufacturer for that film. The accuracy of such user-generated average gradient numbers can be suspect, for many reasons.
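To make the arithmetic behind this number concrete: average gradient is conventionally the slope of the straight line joining the two points on the characteristic curve lying 0.25 and 2.00 above base-plus-fog density. A minimal sketch, using invented curve readings rather than any manufacturer's published data:

```python
# Average gradient from a characteristic (H&D) curve: the slope between the
# points 0.25 and 2.00 above base-plus-fog. Sample data are illustrative.

def interp_log_e(curve, target_density):
    """Linearly interpolate the log exposure at which the curve reaches
    target_density. `curve` is a list of (log_e, density) pairs sorted
    by increasing log exposure."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if y0 <= target_density <= y1:
            frac = (target_density - y0) / (y1 - y0)
            return x0 + frac * (x1 - x0)
    raise ValueError("target density outside measured curve")

def average_gradient(curve, base_plus_fog, low=0.25, high=2.00):
    """Slope between the points `low` and `high` above base-plus-fog."""
    d_low, d_high = base_plus_fog + low, base_plus_fog + high
    log_e1 = interp_log_e(curve, d_low)
    log_e2 = interp_log_e(curve, d_high)
    return (d_high - d_low) / (log_e2 - log_e1)

# Invented readings: (relative log exposure, density)
curve = [(0.00, 0.20), (0.30, 0.25), (0.60, 0.45), (0.90, 0.95),
         (1.20, 1.70), (1.50, 2.40), (1.80, 2.90), (2.10, 3.10)]

print(round(average_gradient(curve, base_plus_fog=0.20), 2))
```

The calculation itself is simple; as the text explains, the suspect part is the input data, since an uncalibrated sensitometer/densitometer pair shifts both the exposure axis and the density readings feeding this slope.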

Commercially available sensitometers and densitometers designed for processor control should NOT be used to generate absolute "speed" and average gradient numbers. Most film manufacturers and the manufacturers of sensitometers and densitometers for processor control purposes are in agreement with this caution.

It is possible, however, for you to obtain better film performance information using commercially available sensitometers and densitometers IF certain test conditions (described in this document) are incorporated and followed. The test procedure utilizes a standard sensitometer and densitometer, so the exposure delivered to the film is simulated screen light. This simulated light, however, does not always match the actual spectral emission of the intensifying screen(s) exposed to x-rays.

While the data yielded by this modified test method should be more meaningful than data from testing without these additional controls, the accuracy of the data can still be challenged. However, the methods suggested here would probably suffice for most users who seek on-site confirmation that their film is performing "optimally" or within specified tolerances.

One method to allow more precise on-site "verification" of optimal film performance could require:

  1. A manufacturer-generated "reference" characteristic curve for a given film, produced with a selected commercially available sensitometer whose individual exposure characteristics or calibration parameters are known or adjusted to certain settings. This curve would be generated under "ideal" processing conditions (i.e., using the film manufacturer's chemicals, properly mixed, with those solutions at the correct temperature for a specific film processor with a known developer immersion time). The chemicals from which the curve would be generated would be "fresh" and not "seasoned" unless very specific seasoning protocols were included. The "reference" characteristic curve would have "plus-and-minus" tolerances for "speed" and average gradient included in the form of two additional characteristic curves, drawn to either side of the "reference" curve.


  2. The end-user, equipped with the same brand of sensitometer and densitometer, both selected or calibrated to deliver the same performance characteristics as the manufacturer's unit, would expose and process their film of the same brand and type at their location(s), using "fresh" and not "seasoned" chemicals. Chemical brand, developer starter solution brand, mixing method, and solution temperatures may be different, depending upon the end-user's situation. The same differences might occur with film processor brand, model of processor, immersion time, and the condition of the processor. Such variety in chemicals and equipment is unavoidable in most cases, but these variables among others are exactly why such a "verification curve" is useful. It can indicate whether, despite many variables, your processing environment is performing "optimally" (i.e., within tolerances indicated by the manufacturer). Notice that "matching" a manufacturer's published average gradient number or "speed" number is not the goal here, since even more variables outside of this test procedure (normal film emulsion variability, film aging, film storage effects, etc.) can affect your generated data too. Rather, this proposed comparison method can yield better information versus testing with a "non-standardized" sensitometer/densitometer and using "seasoned" chemistry.
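The comparison in the two steps above can be sketched as a simple step-by-step check of the user's densities against the manufacturer's plus-and-minus tolerance curves. All three density series below are invented placeholders, not real published data, and the function name is hypothetical:

```python
# Hypothetical "verification curve" check: user strip densities compared
# against a reference curve bracketed by plus/minus tolerance curves.

def within_tolerance(user, lower, upper):
    """Return the step numbers (1-based) at which the user's density
    falls outside the manufacturer's tolerance band."""
    return [i + 1 for i, (u, lo, hi) in enumerate(zip(user, lower, upper))
            if not (lo <= u <= hi)]

# Densities at the same sensitometric steps for all series (illustrative).
reference = [0.22, 0.50, 1.10, 1.95, 2.70]
lower     = [0.18, 0.42, 0.98, 1.80, 2.55]   # minus-tolerance curve
upper     = [0.26, 0.58, 1.22, 2.10, 2.85]   # plus-tolerance curve
user      = [0.23, 0.55, 1.15, 2.15, 2.78]   # end-user processed strip

out = within_tolerance(user, lower, upper)
print("all steps within tolerance" if not out
      else f"steps outside tolerance band: {out}")
```

Note that this checks band membership, not agreement with any single published average-gradient or "speed" number, which matches the stated goal of the procedure.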

Obvious difficulties in performing the above test procedure correctly include obtaining "matched" sensitometers and densitometers for manufacturer and end-user and performing the test with fresh chemicals. Another question concerns the tolerance limits for the user-generated characteristic curve.

This test method could be refined further and the user-generated data would be another step closer to "accurate" if both manufacturer and end-user test film were not just from the same emulsion batch, but from the same box. A "control" box or quantity of film at the manufacturer's site might be selected and set aside (under controlled storage conditions) for exposures as needed.

Samples could be exposed at the manufacturer's site, some exposed but unprocessed strips could be sent overnight to the end-user, and both parties would process the films at a specific time to minimize differences from delayed processing (latent image changes). This would remove the requirement for both parties to acquire similarly calibrated sensitometers, but densitometer calibration would remain a variable, albeit perhaps a lesser one in most cases. User-processed films could be returned to the manufacturer for densitometer readings and comment, or the manufacturer might send its processed film strips to the end-user and let the end-user read both strips with the user's densitometer. If necessary, the manufacturer's "verification" curves would be drawn taking into account the latent image keeping changes from delayed processing, and the "reference" average gradient and "speed" expectations would reflect this too.

On a side note, the practice of mailing pre-exposed sensitometric strips to end-users is not new. In the early days of processor monitoring (circa the 1970s), one major x-ray film manufacturer offered and mailed pre-exposed sensitometric strips to customers for processor monitoring purposes. It was later discovered that these boxes of pre-exposed sensitometric strips, when kept for long periods of time (i.e., weeks to months) before processing, were less accurate or reliable indicators of processing consistency than freshly exposed and processed strips, and this "service" was dropped.

If an end-user desires to compare the visual image contrast of different screen-film combinations (usually for clinical image selection or comparison), the recommended method is to do a "split-film" test. Place a half sheet of each of two different films into the same cassette, and with a single x-ray exposure, radiograph an appropriate test object, such as an aluminum step wedge. This test can also be performed to compare the "speed" and "contrast" differences between different emulsion batches of the same film.
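Reading out such a split-film test can be sketched as follows: with both film halves receiving the identical exposure, a "speed" shift shows up as which wedge step each film needs to reach a chosen reference density, and "contrast" as the density difference across the same pair of wedge steps. The step readings and the choice of reference density (1.0) below are invented for illustration:

```python
# Illustrative readout of a split-film step-wedge test. Both films saw the
# same single exposure through the same aluminum step wedge.

def step_reaching(densities, target):
    """First wedge step (1-based) whose density meets or exceeds target;
    a lower step number at the same target suggests a faster film."""
    for i, d in enumerate(densities):
        if d >= target:
            return i + 1
    return None

def density_difference(densities, low_step, high_step):
    """'Contrast' as the density difference between two wedge steps (1-based)."""
    return densities[high_step - 1] - densities[low_step - 1]

# Invented densities per wedge step, thickest step first omitted for brevity.
film_a = [0.25, 0.60, 1.05, 1.60, 2.20, 2.75]
film_b = [0.22, 0.45, 0.80, 1.25, 1.80, 2.40]

print("step reaching D=1.0:", step_reaching(film_a, 1.0), "vs",
      step_reaching(film_b, 1.0))
print("density difference (steps 2-5):",
      round(density_difference(film_a, 2, 5), 2), "vs",
      round(density_difference(film_b, 2, 5), 2))
```

Because both halves share one cassette, one screen pair, one exposure, and one trip through the processor, the comparison is visual and relative; it sidesteps the sensitometer-calibration problems discussed earlier, which is why it is the recommended method for this purpose.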