Iterative Reconstruction

  1. Iterative reconstruction (IR) is an alternative approach to SPECT reconstruction that continues to gain popularity in Nuclear Medicine and has advantages over the more common approach, filtered backprojection (FBP). IR is currently used most heavily in PET; however, it is also applied in general nuclear medicine SPECT procedures
  2. What appears to be wrong with FBP technology?
    1. Radon's calculations assume that all 2D projections correctly represent the radiopharmaceutical distribution. The issue is that they do not, for the following reasons
      1. Statistical variability occurs with low-count images in each 2D projection
      2. Patient-specific attenuation is not taken into account, resulting in variation between the 2D projections
      3. The imaging physics behind these two variables is difficult to incorporate into FBP
  3. This reconstruction method employs several techniques to reconstruct 3D data
    1. Attenuation correction map can be employed in the IR process
    2. Estimation of radiopharmaceutical distribution occurs
      1. A uniform distribution of activity may be applied, or
      2. Backprojected measurements from the acquired matrix can be utilized
    3. Employs imaging physics
      1. When compared with FBP, the distribution of counts is estimated more accurately, hence quantitative analysis of SPECT data is more accurate
      2. There is NO star defect
      3. The goal of IR is to determine the best estimate of how many counts there are in each pixel/voxel

      4. http://www.sciencedirect.com/science/article/pii/S0895611100000598

      5. With each iteration the estimated counts get closer to the true counts. The key is to stop the iterations at the point where you believe you have the best resolution; too many iterations generate too much noise
    4. Iterative methods require intensive computer processing, which is available with current CPU technology. Years ago, iterative reconstruction was not practical because CPU processing was too slow
    5. The following example is a "simple explanation" of the Iterative reconstruction process. Note that the diagram is numbered to match the explanation below
    6. Iterative Reconstruction Process

      1. Data is acquired in a set of 2D matrices (camera stops)
      2. Prior to IR, the original matrix may be filtered (smoothed) to reduce statistical noise
      3. (1) The Initial Guess comes from
        1. The expected biodistribution of the radio-tracer
        2. This may be obtained by one of several methods: (a) assume a uniform distribution of activity, or (b) apply data from all the 2D backprojection images
      4. (2) A forward projection step then generates an Estimated 3D Matrix, which is influenced by the patient's attenuation map and additional data specific to the imaging physics
      5. (3) Then all 2D Estimated Projections are compared to the 2D Measured Projections. This identifies a variation between the two
      6. (4) A 3D Error Matrix is then used to update step 2
      7. (2) The Estimated 3D Matrix is modified, creating a new Estimated 3D Matrix
      8. The cycle then starts over again hence the term iterative
      9. The user defines the number of iterations necessary to complete image reconstruction
      10. Each iteration brings a "truer" count to each voxel
      11. In the end, the estimated 3D matrix should closely match the "true count" distribution in the 3D structure; a simple code sketch of this cycle follows this list
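
      A minimal sketch of one such cycle, written in Python/NumPy under the assumption that the probability (system) matrix A, the measured projections, and the voxel estimate are stored as arrays; the names and sizes are illustrative only, not taken from any vendor software:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_voxels = 16, 16                   # toy sizes for illustration
        A = rng.random((n_pixels, n_voxels))          # stand-in probability (system) matrix
        true_activity = rng.poisson(50, n_voxels).astype(float)
        measured = A @ true_activity                  # stand-in "acquired" projections

        def em_cycle(A, measured, estimate):
            estimated_proj = A @ estimate                            # (2) forward projection
            ratio = measured / np.maximum(estimated_proj, 1e-12)     # (3) compare estimated vs. measured
            error = A.T @ ratio                                      # (4) backproject the comparison
            return estimate * error / np.maximum(A.sum(axis=0), 1e-12)  # modify the 3D estimate

        estimate = np.ones(n_voxels)                  # (1) initial guess: uniform activity
        for _ in range(10):                           # user-chosen number of iterations
            estimate = em_cycle(A, measured, estimate)
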
    7. Here is the more involved IR process, with a more comprehensive description of Maximum-Likelihood Expectation Maximization (MLEM)

      The PDM in Iterative Reconstruction

    8. There are two basic levels in the IR process
      1. The initial step is the development of the Probability Density Matrix (PDM)
        1. The Object - From the above diagram, an object contains radioactivity distributed in 3D space. Technically, that activity (counts) is broken down into a 3D voxel format
        2. The Image - each pixel from the 2D acquisition contains a statistical sampling of activity, a probability value of the radioactivity distribution
        3. This probability function is a result of attenuation maps and/or the spatial resolution (imaging physics) of the camera; a toy sketch of such a matrix follows this list
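
        As a toy illustration of what a PDM holds (not how a clinical system actually builds one), the Python sketch below fills a matrix whose entry (row i, column j) is the probability that a count from voxel j is detected in projection pixel i, for a 2 x 2 object viewed from two directions with an assumed attenuation factor:

          import numpy as np

          # 2 x 2 object, voxels numbered 0..3 in row-major order:
          #   [0 1]
          #   [2 3]
          mu = 0.15           # assumed linear attenuation coefficient per voxel width
          att = np.exp(-mu)   # extra attenuation suffered by the deeper voxel along each ray

          # Rows = detector pixels (two bins seen from the top, two from the left);
          # columns = voxels; each entry is the probability a count reaches that pixel
          PDM = np.array([
              [1.0, 0.0, att, 0.0],   # top view, bin over voxels 0 and 2
              [0.0, 1.0, 0.0, att],   # top view, bin over voxels 1 and 3
              [1.0, att, 0.0, 0.0],   # left view, bin over voxels 0 and 1
              [0.0, 0.0, 1.0, att],   # left view, bin over voxels 2 and 3
          ])

          activity = np.array([10.0, 0.0, 0.0, 5.0])  # "true" counts per voxel
          projections = PDM @ activity                # what the camera would measure
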
      2. The next step involves direct application of the iterative process
        1. Initially the acquired data is reconstructed by applying the PDM to each slice (2D image) in the SPECT rotation
        2. It is then backprojected, with specific mathematical computation applied to each pixel
        3. At the end of the first iteration the acquired data is reprocessed and the cycle starts all over again
        4. The iterative process continues until the estimated voxel values closely represent the activity distribution in the acquired data

        Iterative Process

      3. Key points to observe in the diagram as it relates to the IR process
        1. Initially the camera acquires a set of 2D images, usually in a 64 or 128 matrix. IR reconstruction is then initiated
        2. To better understand the process, only one 2D slice will be discussed, with a matrix size of 4 x 4
        3. Observe two changes in the matrix during each phase of the iterative process. This can be seen in (1) a change in the gray scale and (2) correlated numerical values corresponding to the gray scale in each pixel. The darker the gray, the greater the number of counts
        4. Finally, one should note the matrix labeled "expected results." This represents the "true" counts in the matrix, which is what the iterative process is trying to attain
      4. Iterative Process
        1. The acquired data is first sent through the probability density matrix (PDM) creating a modified matrix that is defined as Estimated Projection Set (EPS1)
        2. EPS1 is then compared with the original (measured) matrix by subtraction or division, creating an Error Projection Set (ErPS)
        3. The ErPS is then backprojected while an algebraic form of the PDM is applied, creating the Error Reconstruction Set (ErCon)
        4. ErCon is then multiplied by a scale factor between 0 and 1 (a fraction). This reduces oscillation in the iterative process
        5. The final step is to multiply or add the scaled ErCon values to EPS1, which creates the new EPS2; this becomes the new estimated 3D matrix
        6. The cycle then starts all over again with the reapplication of the PDM
        7. This repetition continues until the desired results are achieved, where there is a close match between the acquired counts and the actual 3D volume of data (a sketch of one such damped update follows this list)
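
        A minimal sketch of the additive version of one such cycle, assuming the PDM, the measured projections, and the current estimate are NumPy arrays; the point illustrated is the 0 to 1 scale factor damping the correction (the divide/multiply variant follows the same pattern):

          import numpy as np

          def additive_cycle(PDM, measured, estimate, scale=0.5):
              eps = PDM @ estimate                        # Estimated Projection Set (EPS)
              erps = measured - eps                       # Error Projection Set (ErPS)
              ercon = PDM.T @ erps / np.maximum(PDM.sum(axis=0), 1e-12)  # Error Reconstruction Set (ErCon)
              # the 0-1 scale factor damps the correction so the estimate does not oscillate
              return estimate + scale * ercon             # new estimate for the next cycle
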
      5. Types of Iterative algorithm
        1. Expectation Maximization (EM) is the overall terminology used to define IR programs
          1. This example is the initial one explained in today's lecture
          2. The forward projection step is the expectation operation, and the comparison step is the maximization operation
          3. The backprojection step then creates the error matrix, which updates the estimated 3D matrix
        2. Maximum-Likelihood (MLEM) - a more involved IR algorithm
          1. This algorithm compares the estimated to the measured activity
          2. The process maximizes a statistical quantity, referred to as the likelihood, between the estimated and the measured data
          3. The closer the estimated-to-measured comparison is to 1, the better the estimate represents the true counts
          4. End result - to generate the estimate with the greatest probability that the measured activity truly represents the actual counts
          5. Evaluates and processes ALL projections in each iteration
          6. It may take 50 to 100 iterations to complete the reconstruction process (this is time consuming); the likelihood quantity being maximized is sketched below
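
          As a sketch, the "likelihood" being maximized can be taken as the Poisson log-likelihood of the measured counts given the current estimate; each full MLEM iteration over all projections raises this value. The helper below assumes a system matrix A, measured projections, and an estimate stored as NumPy arrays:

            import numpy as np

            def poisson_log_likelihood(A, measured, estimate):
                # expected counts in each projection pixel for the current estimate
                expected = np.maximum(A @ estimate, 1e-12)
                # Poisson log-likelihood (constant terms dropped); MLEM increases this
                # value with every iteration over the full projection set
                return float(np.sum(measured * np.log(expected) - expected))
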
        3. Ordered-Subset (OSEM)
          1. To reduce the processing time the projections are broken down into subsets and processed at that level
          2. As an example, if 128 projections are acquired they might be broken down into 8 subsets, each subset containing 16 projections
          3. Each estimated subset is then compared to its measured (acquired) counterpart (subset) which then updates it to a new estimated subset
          4. Each subset goes through multiple iterations
          5. In this example, the 8 subsets are handled individually, compared, and updated; the results are then combined into a new set of 2D images, creating the new estimated 3D matrix
          6. Usually no more than 10 iterations occur with each subset (a minimal OSEM sketch follows this list)
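
          A minimal OSEM sketch in NumPy, assuming a system matrix A and measured projections; for simplicity the projection pixels are dealt into subsets round-robin, whereas clinical software groups whole projection angles into each subset:

            import numpy as np

            def osem(A, measured, n_subsets=8, n_iterations=4):
                n_pixels, n_voxels = A.shape
                estimate = np.ones(n_voxels)                  # uniform initial guess
                # split the projection data into subsets (e.g. 128 views -> 8 subsets of 16)
                subsets = [np.arange(s, n_pixels, n_subsets) for s in range(n_subsets)]
                for _ in range(n_iterations):                 # usually no more than ~10 passes
                    for idx in subsets:                       # one sub-update per subset
                        A_s = A[idx]
                        expected = np.maximum(A_s @ estimate, 1e-12)
                        ratio = measured[idx] / expected      # compare within the subset only
                        estimate *= A_s.T @ ratio / np.maximum(A_s.sum(axis=0), 1e-12)
                return estimate
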
        4. A problem with noise
          1. With each iteration, noise is also generated. This is referred to as the "noise breakup phenomenon"
          2. Hence, there is a compromise between an image that shows the best detail and has the least amount of noise
          3. If the iterative process continued excessively, too much noise would be seen in the image, reducing image contrast
          4. Post-filtering with a smoothing filter may be applied to suppress the noise created in the IR process (a minimal post-filtering sketch follows this list)
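
          A minimal post-filtering sketch using SciPy's Gaussian smoothing; the volume here is a random stand-in and the sigma value is an assumption chosen to trade noise suppression against resolution:

            import numpy as np
            from scipy.ndimage import gaussian_filter

            # stand-in for a noisy reconstructed 3D volume
            reconstructed_volume = np.random.poisson(100, size=(64, 64, 64)).astype(float)

            # light 3D smoothing suppresses the noise that builds up with each iteration;
            # a larger sigma gives a smoother image but lower resolution
            smoothed_volume = gaussian_filter(reconstructed_volume, sigma=1.0)
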
      6. What needs to be appreciated is that with each iteration you get a closer estimate of the true count distribution in the 3D matrix. On the negative side, you generate more noise. Why more noise?

      7. FBP vs. Iterative

      8. Several improvements occur with the iterative process: loss of the star defect, improved resolution, better image contrast, and images that can be better quantified
      9. Increasing the Number of Iterations

      10. The above example shows how increasing the number of iterations improves image quality by better estimating the true count distribution in the SPECT procedure. When should the iterations be stopped? One possible stopping rule is sketched below
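
      One possible stopping rule (an illustration, not a vendor-specified criterion) is to quit when successive estimates stop changing by more than a chosen tolerance, before noise dominates the image:

        import numpy as np

        def iterate_until_stable(update, estimate, max_iterations=100, tolerance=0.01):
            # `update` is one reconstruction cycle, e.g. a single MLEM or OSEM pass
            for k in range(1, max_iterations + 1):
                new_estimate = update(estimate)
                change = np.linalg.norm(new_estimate - estimate) / max(np.linalg.norm(estimate), 1e-12)
                estimate = new_estimate
                if change < tolerance:    # little further change: extra passes mostly add noise
                    break
            return estimate, k
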
      11. Suggested reading - Imaging reconstruction - A Tutorial

