Objective Functions
Dose-Based Objective Functions
A simple example of an objective function is a criterion stated in terms of the sum of the squares of the differences between the desired dose and the computed dose at each point within each volume of interest. That is,

S = (1/N_T) Σ_i (D_T,i − D_T,0)² + Σ_n p_n (1/N_n) Σ_j H(D_n,j − D_n,0) (D_n,j − D_n,0)²   (Eq. 9.1)
This type of objective function is called the quadratic or variance objective function. The optimization process attempts to minimize the treatment plan score S. D_T,0 in expression (Eq. 9.1) is the desired dose to the target volume, and D_n,0 is the tolerance dose of the nth normal structure. D_T,i is the computed dose in the ith voxel of the target, and D_n,j is the computed dose at the jth voxel of the nth normal structure. For normal organs, the function H(D_n,j − D_n,0) is defined as follows:

H(D_n,j − D_n,0) = 1 if D_n,j > D_n,0, and 0 otherwise.
In other words, so long as the dose in a normal tissue voxel does not exceed the tolerance limit, the voxel does not contribute to the score function. The quantity pn is the “relative penalty” for exceeding the tolerance dose.
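As a concrete illustration, the quadratic objective with tolerance-gated normal-tissue terms can be sketched in a few lines of Python. This is an illustrative sketch, not a clinical implementation; the function name, the per-voxel normalization (mean rather than sum), and the example quantities are assumptions.

```python
import numpy as np

def quadratic_score(d_target, d0_target, organs):
    """Variance-type objective in the spirit of Eq. 9.1 (illustrative).

    d_target : computed doses in the target voxels
    d0_target: desired (prescribed) target dose
    organs   : list of (doses, tolerance_dose, penalty) per normal structure
    """
    # Target term: every voxel is penalized for deviating from the desired dose.
    s = np.mean((d_target - d0_target) ** 2)
    for doses, d0, p in organs:
        excess = doses - d0
        # H(D_nj - D_n0): a voxel contributes only if it exceeds tolerance.
        h = (excess > 0).astype(float)
        s += p * np.mean(h * excess ** 2)
    return s
```

Voxels of a normal structure below the tolerance dose add nothing to the score, exactly as the step function H prescribes.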
Dose–Volume-Based Objective Functions
Purely dose-based criteria, such as the one previously described, are not sufficient. In general, the response of the tumor and normal tissues is a function not only of radiation dose but also, to varying degrees depending on the tissue type, of the volume subjected to each level of dose. Currently, dose–volume-based objective functions are the most widely used clinically. They are expressed in terms of limits on the volume of each structure that may be allowed to receive a certain dose or higher.
A practical scheme to incorporate dose–volume-based objectives has been suggested by Bortfeld et al. (12). It is explained in Figure 9.11 using a simple schematic example of one organ at risk. The dose–volume constraint is specified as V(>D1) ≤ V1; that is, the volume receiving a dose greater than D1 may not exceed V1. In the current dose distribution, D2 is the dose level at which the volume receiving a higher dose just equals V1.
That is, only the points with dose values between D1 and D2 contribute to the score. Therefore, they are the only ones penalized.
For the target volumes, two types of dose–volume criteria may be specified to limit both the hot and the cold spots. For instance, for a desired target dose of 80 Gy, we may specify V(>85 Gy) ≤ 5% and V(≥79 Gy) ≥ 95%. In other words, the volume of the target receiving a dose greater than 85 Gy should be no more than 5%, and the volume of the target receiving 79 Gy or higher should be at least 95%. Dose-based criteria can be considered a subset of dose–volume criteria in which the volume is set to an extreme value (0% or 100%, as appropriate). Dose–volume criteria provide more flexibility for the optimization process and greater control over dose distributions. The reason is that dose-based optimization penalizes all the points above the dose limit, whereas dose–volume-based optimization penalizes only the subset of points within the lower end of the range of dose values above the dose limit. For the example of Figure 9.11, the dose–volume-based optimization process attempts to bring only the points between D1 and D2 into compliance with the constraint. In contrast, the dose-based optimization process attempts to constrain all of the points above D1. Furthermore, dose–volume criteria are highly "degenerate" functions of dose distributions (i.e., an infinite number of dose distributions correspond to the same dose–volume constraint). The optimization system therefore has a large solution space to choose from, making it easier to find a better solution.
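Dose–volume criteria of this kind can be checked directly on a sampled dose array. A minimal sketch, assuming the hot/cold-spot limits quoted above (the function names and the ≥ convention for the cold-spot test are assumptions):

```python
import numpy as np

def volume_fraction_above(doses, dose_level):
    """Fraction of voxels receiving dose_level or higher, i.e. V(>=D)."""
    return float(np.mean(np.asarray(doses) >= dose_level))

def meets_target_criteria(doses):
    """Check V(>85 Gy) <= 5% (hot spot) and V(>=79 Gy) >= 95% (cold spot)."""
    hot_ok = float(np.mean(np.asarray(doses) > 85.0)) <= 0.05
    cold_ok = volume_fraction_above(doses, 79.0) >= 0.95
    return hot_ok and cold_ok
```

The same helper evaluated over a grid of dose levels yields the cumulative DVH used throughout this section.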
Limitations of Dose–Volume-Based Objective Functions
Dose–volume-based criteria have been demonstrated to have limitations. To illustrate one such limitation, consider the example in Figure 9.12A of a normal structure for which a constraint has been specified that no more than 25% of the volume is to receive 50 Gy or higher. All three dose–volume histograms (DVHs) shown meet this criterion. However, the DVH represented by the solid curve clearly causes the least damage. One can argue that we can overcome this limitation by specifying multiple dose–volume constraints or even the entire DVH. However, as illustrated in Figure 9.12B, this would be too limiting. Multiple
DVHs, in fact an infinite number of them, could lead to an equivalent injury to a particular organ, but each DVH may produce a different effect on other organs and the tumor. When this happens, DVHs usually cross each other, as shown in Figure 9.12B. Only one of them is optimum so far as the tumor and other organs are concerned.
To overcome the limitations of dose–volume-based criteria, they may be supplemented with biologic (or dose–response-based) criteria, for instance, in terms of such indices as tumor control probability (TCP), normal tissue complication probabilities (NTCPs), and equivalent uniform dose (EUD). Dose–response-based objective functions are the subject of ongoing investigations (94,144).
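Of the biologic indices mentioned, EUD is the simplest to compute. One common formulation is the generalized EUD (gEUD) of Niemierko, a power-law average of the voxel doses; the sketch below assumes that formulation, and the parameter values in the comments are illustrative, not clinical recommendations.

```python
import numpy as np

def geud(doses, a):
    """Generalized equivalent uniform dose: (mean(d_i ** a)) ** (1/a).

    a = 1   -> reduces to the mean dose;
    a >> 1  -> weighted toward the maximum dose (serial-type organs);
    a < 0   -> weighted toward the minimum dose (tumors).
    """
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))
```

For a perfectly uniform dose distribution, gEUD equals that dose for any a, which is the defining property of an "equivalent uniform dose."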
Objective Function Parameters
The desired IMRT dose distributions are specified in terms of parameters of the objective function. In Equation 9.1, for instance, the parameters of the objective function are the desired dose limits DT,0 and Dn,0 for target and normal structures and the relative importance (or penalty) factors pn for deviating from the desired dose limits. Most often, the objective functions are specified in terms of one or more "soft" dose–volume constraints for each volume of interest, each with its own penalty factor. That is, if the computed dose deviates from the desired value, the plan is not rejected, but it is assessed a penalty. The optimization software computes a "subscore" corresponding to each constraint. The subscore value depends on the deviation of the dose distribution from the desired dose distribution and on the penalty factor. The overall score of an IMRT plan is an accumulation of the subscores of the individual volumes of interest. The IMRT optimization system uses the plan score to arrive at the optimum plan according to the specified objective function. The optimized solution involves trade-offs that balance the specified normal tissue objectives against each other and against the tumor objectives. An IMRT treatment-planning system should provide parameters that allow the treatment planner to adjust the trade-off for each critical structure in a straightforward manner. An example of this is shown in Figure 9.13, where a head and neck target volume nearly abuts the parotid gland (9). Plans C and F use parameters that emphasize parotid-gland sparing and tumor coverage, respectively. This is an excellent example of the flexibility of moving the steep dose gradient into and out of the target volume.
The plan considered best by the computer may not be the best (or even good enough) in the judgment of the treatment planner. Parameters are then adjusted by trial and error to obtain a satisfactory plan. A confounding factor is that a change in a parameter of one volume of interest affects not only its own subscore and DVH but also the subscores and DVHs of other structures in a complicated manner. For a complex IMRT problem, in which there may be several dozen parameters, their adjustment is an extremely difficult task. The trial-and-error approach used currently is time-consuming and leads to suboptimal results. Future research based on artificial intelligence techniques may provide a systematic means of determining optimum parameter values.
Treatment Plan Evaluation
IMRT dose distributions tend to be highly conformal but complex and unconventional. Traditional methods of evaluation and reporting may be too limited for such dose distributions. In principle, the target dose distributions for IMRT should be more homogeneous than for 3DCRT. In practice, the opposite is the case, due in part to the competing demands of sparing of normal tissues and in part to the inadequacy of objective functions. Dose distributions in normal structures as well are, in general, more nonuniform than for 3DCRT.
In the current practice of radiotherapy, treatment plans are evaluated using dose and dose-volume parameters including such quantities as dose to a point in the volume of interest, minimum dose, maximum dose, minimum dose to a specified fractional volume, or the volume of the structure receiving a specified dose or higher. MUs are set to deliver the prescribed
dose to a specified point or to an isodose line (or surface) just enclosing the target volume. For some sites and techniques (e.g., stereotactic radiosurgery of brain tumors), an index of conformality (the ratio of volume occupied by the prescription isodose surface and the volume of the target) is used for plan evaluation. Cumulative dose and dose-volume data are reported as a part of the patient's chart and used for correlation with outcome.
Because of the unconventional nature of IMRT dose distributions, especially the high degree of dose heterogeneity and fluctuations in dose as a function of position in volumes of interest, indices such as dose to a point, minimum dose, or maximum dose may not correlate well with dose response. Instead, dose to a specified fractional volume is more appropriate. For instance, dose to 98% or 99% and to 1% or 2% of the target volume may be more meaningful than minimum and maximum dose, respectively.
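Dose to a specified fractional volume, such as D98 or D2, can be read off a sorted dose array. A minimal sketch (the function name and the descending-sort convention are assumptions; interpolation between voxels is omitted):

```python
import numpy as np

def dose_to_fractional_volume(doses, fraction):
    """Dose received by at least `fraction` of the volume.

    E.g. fraction=0.98 gives D98, the near-minimum dose;
    fraction=0.02 gives D2, the near-maximum dose.
    """
    d = np.sort(np.asarray(doses, dtype=float))[::-1]  # descending
    # Cumulative volume fraction at sorted index i is (i + 1) / N;
    # take the first voxel at which it reaches the requested fraction.
    idx = max(int(np.ceil(fraction * d.size)) - 1, 0)
    return float(d[idx])
```

Because these near-minimum and near-maximum quantities average over a small fractional volume, they are less sensitive to single-voxel fluctuations than the true minimum and maximum dose.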
Limitations of dose and dose-volume plan evaluation parameters have been articulated in the literature (49). These limitations become more significant for the complex dose distributions of IMRT. It has been argued that biophysical dose-response indices, which summarize complex dose distributions using a single clinically relevant index in each volume of interest, may be more appropriate. Currently, indices such as TCP, NTCP, and biologic EUD often are computed and recorded, but rarely are used for routine plan evaluation. This is because of the unreliability of published dose-response data and the weaknesses of the models used to compute these indices, which are, in turn, the result of various sources of uncertainty both in the quantification of response and in the doses delivered to the structures. In a recent report, Levegrun et al. (73), from their analysis of patients with prostate cancer treated at Memorial Sloan-Kettering Cancer Center, concluded that the biopsy-based response did not correlate with minimum tumor dose, EUD, or TCP. Instead, they found the mean dose to be a very good predictor of response. They attributed this observation to large treatment margins for the PTV, substantial target motion, and relatively homogeneous dose distributions. There are other similar examples in the literature in which lack of dose response or correlation with mean dose may be attributable to uncertainties in the extent of the disease, target position, normal structure positions, and dose delivered. Such examples highlight the importance of continued efforts to reduce sources of uncertainty.
Generation of Leaf Sequences
Fixed Intensity-Modulated Fields
For the IMRT mode using multiple fixed fields, the plan optimization process produces nonuniform intensity distributions (Fig. 9.14) for each set of fields. In principle, such intensity distributions can be delivered using custom-fabricated compensators made of lead alloys to attenuate the appropriate amount of radiation along each ray of the beam. Such devices would have to be produced using computerized milling machines. In addition, to use them it would be necessary for the operator (radiation therapist) to enter the treatment room to insert the device for each field. This process would be highly labor-intensive and impractical considering that a large number of beams often may be needed for optimum intensity-modulated treatments.
The most efficient means of delivering fixed-field IMRT is the standard MLC in dynamic mode using such methods as “sliding-window” technique or the step-and-shoot technique. In either
case, leaf position sequences as a function of MUs need to be generated. The MLC leaves are made of approximately 5- or 6-cm thick tungsten and are typically 0.5 or 1 cm wide (projected to isocenter). MLCs with leaves of a width as small as 1 mm have been introduced. Smaller leaf width may be of greater value for IMRT than for standard 3DCRT. For the former, the leaf width affects the dose delivered to the entire slice, whereas for the latter, it affects only the shape of the boundary. A smaller leaf width undoubtedly would produce more conformal dose distributions, but the electromechanical complexity and cost of the device would increase. Because of the smearing caused by finite-sized radiation sources, lateral secondary electron transport, and the use of multiple fields and because of motion and positioning uncertainties, an acceptable leaf width may not need to be very small. The minimum desirable leaf width would depend on numerous factors including shapes and locations of volumes of interest, dose gradients desired, and number and orientations of beams. Although the issue of leaf width has been debated for quite some time, there are no definitive studies to guide the choice of the most suitable width.
MLCs transmit only 0.5% to 2% of incident radiation (except through small interleaf gaps and the rounded ends of some MLCs). However, as discussed later in this chapter, because intensity-modulated treatments require a substantially larger number of MUs than do conventional uniform-field treatments, the cumulative effective transmission may be considerably larger.
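The scaling of the effective transmission with MU count can be illustrated with assumed numbers (the 3x MU ratio below is purely hypothetical; only the 0.5% to 2% transmission range comes from the text):

```python
# Assumed scenario: an IMRT plan delivers 3x the MUs of a conventional
# plan through an MLC that transmits 1.5% of the incident radiation.
mu_ratio = 3.0          # IMRT MUs / conventional MUs (hypothetical)
transmission = 0.015    # fractional MLC transmission (within 0.5%-2%)

# Leakage dose relative to the conventional prescription scales with both.
effective_transmission = mu_ratio * transmission  # about 4.5%
```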
Leaf Sequence Generation—Sliding-Window Technique
In the sliding-window method, the gap formed by each pair of opposing leaves is swept across the target volume under computer control while the radiation is on. The gap opening and its speed are optimally adjusted. Because the dose rate of the treatment machine might fluctuate slightly, the motion is indexed to MUs rather than to time. The basic principle is that as the gap slides across a point, the radiation received by the point is proportional to the number of MUs delivered from the time the tip of the leading leaf passes the point and exposes it until the tip of the trailing leaf moves in to block it again. (The point also receives additional radiation transmitted through or scattered from the leaves, which must be accounted for; see the later discussion in this chapter.) The setting of the gap opening and its speed for each pair at any instant are determined by a technique first introduced by Convery and Rosenbloom (33) and refined and studied further by Bortfeld et al. (10), Spirou and Chui (117) and Spirou et al. (118), Stein et al. (119), Svensson et al. (122), and others (95,99,131). The known maximum leaf speed is exploited to maximize the gap between the opposing pair of leaves and, therefore, to minimize the treatment time. The number of leaves participating in the delivery of a beam depends on the projected size of the target volume. The data describing leaf trajectories, produced by the leaf sequence-generation process, are in the form of a table of leaf positions versus the corresponding MUs (depicted graphically in Fig. 9.15).
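If the leaf-speed limit is ignored, the trajectories for one leaf pair have a simple closed-form solution: positive fluence gradients are assigned to the trailing leaf and negative gradients to the leading leaf, so that their difference reproduces the fluence everywhere. The sketch below assumes that simplification (the function name is an invention, and the refinement for finite leaf speed described in the references is omitted):

```python
import numpy as np

def sliding_window_trajectories(fluence):
    """MU trajectories for one leaf pair from a 1-D fluence profile.

    Returns (mu_lead, mu_trail): the cumulative MU at which the leading
    leaf tip uncovers, and the trailing leaf tip re-covers, each sample
    point. Exposure at a point = mu_trail - mu_lead = fluence there.
    """
    f = np.asarray(fluence, dtype=float)
    inc = np.diff(np.concatenate(([0.0], f)))   # fluence increments
    mu_trail = np.cumsum(np.maximum(inc, 0.0))  # absorbs positive gradients
    mu_lead = mu_trail - f                      # absorbs negative gradients
    return mu_lead, mu_trail
```

Both trajectories are nondecreasing (each leaf moves in one direction only), and the total beam-on time equals the sum of the positive fluence gradients.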
Leaf Sequence Generation—Step-and-Shoot and Multisegment Techniques
With the step-and-shoot technique (as well as for multisegment technique) the fixed-gantry radiation beam is composed of multiple static MLC segments, with each segment having its own aperture shape and weight or monitor (MU) settings. The leaf sequence-generation algorithms take the optimized intensity pattern as the input and decompose it into multiple segments, each to be shaped as an aperture formed by the MLC. Fluence intensity throughout each MLC segment is relatively uniform. The summation of all static segments yields the required intensity-modulated dose distributions. Ideally, the segments are sorted to minimize the MLC leaf travel time between the segments. Note that such sorting is neither necessary nor possible for the sliding-window technique.
The first step of the leaf sequence-generation process is the discretization of the continuous intensity distribution into a limited number of intensity levels. These intensity levels then are converted into leaf sequences using one of several methods described in the literature. Bortfeld et al. (10), for example, have proposed a method in which each row of intensity is handled separately, similar to the sliding-window algorithm. The advantage is that the total number of MUs is small, but at the cost of a possibly large number of segments. Xia and Verhey (145) proposed the so-called areal algorithm. Instead of dividing the intensities into levels of equal steps, they divided them into levels in powers of 2 to reduce the number of steps and to gain efficiency. Wu et al. (144) proposed a technique called K-means clustering in which the intensity levels are grouped together based on their values and the user-specified error tolerance levels. The intensity levels are not equally spaced and can be arbitrary.
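The decomposition step can be illustrated with a deliberately simple level-peeling scheme for a single leaf-pair row. This is not one of the published algorithms named above (which are far more efficient); it is a sketch showing how a quantized intensity row is converted into static apertures, each delivering one intensity unit:

```python
import numpy as np

def decompose_row(levels):
    """Decompose one row of a discretized intensity map into static
    MLC segments by repeatedly peeling off one intensity level.

    levels : integer intensity levels for one leaf-pair row
    returns: list of (left, right) apertures over half-open index
             ranges [left, right), each delivering one intensity unit
    """
    remaining = np.asarray(levels, dtype=int).copy()
    segments = []
    while remaining.max() > 0:
        open_ = remaining > 0
        # Locate contiguous open runs; each run is one segment aperture.
        padded = np.concatenate(([False], open_, [False]))
        edges = np.flatnonzero(np.diff(padded.astype(int)))
        for left, right in zip(edges[::2], edges[1::2]):
            segments.append((int(left), int(right)))
            remaining[left:right] -= 1
    return segments
```

Summing one unit over every returned aperture reconstructs the input row exactly; minimizing the number of segments or MUs, as the areal and clustering methods do, requires a cleverer choice of apertures.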
Unlike for the sliding-window algorithm, the maximum leaf speed is not important for the step-and-shoot and multisegment techniques. Conversely, while the number of segments is not an issue for the sliding-window technique, it can affect step-and-shoot delivery efficiency significantly. For the former, the only penalty of a large number of segments is the size of computer storage, whereas for the latter it leads to inefficiency because the beam is off during the transitions between segments. Furthermore, for some linear accelerators, there is an overhead time associated with each segment.
Que (109) compared several step-and-shoot algorithms and found that the algorithm of Xia and Verhey (145) frequently, but not always, produces the smallest number of segments. Other investigators have reported methods to minimize the number of segments as well. The algorithm of Dai and Zhu (38) checks numerous candidates for each segment, and the candidate that would result in a residual intensity matrix with the least complexity is selected. If more than one candidate exists with the same complexity, the one with the largest size is chosen.
Langer et al. (70) reported a technique based on integer programming that can minimize the number of segments under the constraint that the MUs do not exceed a certain limit. It was found that the technique produces considerably fewer segments than the algorithms of Bortfeld et al. (12) and Xia and Verhey (145) for the same or fewer MUs.
Monitor Units of IMRT Beams
Based on methods similar to those previously described, software systems have been developed to convert intensity distributions to leaf trajectories. The input to this software is the intensity distribution for each field in terms of MUs or, to be more precise, “effective” MUs. Effective MUs are fractions of MUs transmitted through the intensity-modulation or compensation device. The intensity distribution-to-leaf trajectory conversion software not only produces trajectories but also computes actual MU settings for each beam as a natural byproduct of the conversion process. Trajectories of leaves and the MUs for each beam are transmitted to the computer-controlled radiation treatment machine for dosimetric verification and the delivery of treatment.
It is important to note that the relationship between the prescribed dose and MUs required for delivering each of the intensity-modulated beams is highly complex and not obvious. There is no practical way to calculate MUs by hand as is done for traditional treatments as an independent check of the predicted MU values. To ensure patient safety and to satisfy the requirements of the independent check, some systems have implemented independent software for a second MU calculation. Others have adopted the policy to measure the dose or dose distribution for each of the beams before the first treatment.
