Sinogram and Imaging Formats

    Sinogram and Its Location
    Figure 1
  1. Sinogram - how acquisition data is formed
    1. From an annihilation event, a Line of Response (LOR) is generated, as noted in the example
    2. In our example there are 3 LORs: red, green, and blue
    3. When a coincidence event is recorded, the LOR defines the path along which it occurred. However, there is one minor problem: the specific location of the event within three-dimensional space cannot be directly determined; only a line can be drawn indicating the path along which the annihilation occurred
    4. Therefore, in actuality, each recorded LOR represents two photon events that occurred along a straight line. This LOR is placed into a series of voxels and stored as a sinogram
      1. When two events are detected in opposite directions, an LOR is created
      2. They must occur as two separate events that strike at opposite ends, within the timing window, and fall within the PHA window
      3. If the two blocks pick up the set of events within the same ring, at opposite ends, then this is called a direct-plane event
      4. However, if the blocks are in adjacent rings, then it is a cross-plane event
      5. The accepted event is stored in a memory location as a prompt event, building the prompt sinogram
      6. A prompt event can be either a true, random, or scatter event
    5. The presence of many LORs or prompts defines image density
    6. Let us take a closer look at how the coordinates of a sinogram are determined
      1. As previously stated, the specific location of the event cannot be determined
      2. What can be defined is the distance from the center (r) and the angle from the center, Φ
        1. Hence the LOR's location is displayed as (r,Φ), which has similarities to the (x,y) axes on a 2-D graph
        2. LORs are projected onto 2-D histograms, which appear within a 3-D volume (voxels)
    7. Refer to the three colored LORs, above
      1. Color is used to help you associate where the event occurred and where its location would appear on a sinogram
      2. The y-axis, Φ, is the angle of the event and ranges from 0 to 179 degrees
      3. The x-axis, r, defines the distance from the center
    8. Now let us determine the "location" of the three LORs, above, and plot the r and Φ coordinates (a short numerical sketch follows this list)
      1. The red LOR is left of center at about 45 degrees
      2. The green LOR almost goes through the center and has about a 5 degree angle
      3. The blue LOR is right of center and has about a 140 degree angle
      4. The actual location of the annihilation event cannot be determined even though the diagram indicates its exact location
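
      A minimal Python sketch (not taken from any scanner software; the hit coordinates are made up) of how a LOR defined by two detector hit positions maps to the sinogram coordinates (r, Φ) described above:

        import math

        def lor_to_sinogram(p1, p2):
            # p1, p2: (x, y) positions where the two photons strike the ring.
            # Returns (r, phi): signed distance from the center and the angle
            # of the LOR's normal, folded into the 0-179 degree sinogram range.
            (x1, y1), (x2, y2) = p1, p2
            phi = math.degrees(math.atan2(y2 - y1, x2 - x1)) + 90.0   # normal angle
            r = x1 * math.cos(math.radians(phi)) + y1 * math.sin(math.radians(phi))
            phi %= 360.0
            if phi >= 180.0:            # fold into 0-179 degrees, flipping r's sign
                phi -= 180.0
                r = -r
            return r, phi

        # Hypothetical hit positions (cm) on a ~40 cm radius ring
        print(lor_to_sinogram((-40.0, 5.0), (40.0, 15.0)))   # -> (~9.9 cm, ~97 degrees)
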
    9. Sinograms and planes have many angles of coincidence


      Figure 2

      1. A Blank Scan in PET (More on Blank Scan) is a QC procedure that evaluates all angles of coincidence and displays a series of sinograms, as seen above. Do you see anything wrong?
      2. The QC display of the blank scan shows all possible angles of coincidence (direct and cross-plane) via crisscrossing sets of lines (sinograms). If there is a PMT or block failure, then a lack of LORs (activity) is noted.
      3. In QC terms, the blank scan is the "uniformity flood" of a PET scanner
      4. A Normalization scan tunes all the detectors and generates a uniformity correction map. See Normalization Scan for more information
      5. Sinograms display hot spots
        Figure 3

      6. The image above shows LORs coming from a transaxial projection and what the corresponding sinogram would look like:
        1. The red LORs come from the smaller object with less activity
        2. The gray LORs come from the larger object with greater activity
        3. The image on the right displays the sinogram data from one bed position
          Interaction between Blocks 21 and 9 in a PET gantry
          Figure 4

    10. The blank scan is completed daily prior to patient scanning. A 68Ge rod source is placed into the center of the FOV and counts are acquired. Sinograms are generated at every possible angle (direct and cross-plane events). Once acquired, they are displayed for analysis. Any dysfunction within the detectors of any given block will display as a missing sinogram or a lack of "uniformity." In essence, the axial FOV is being tested for system uniformity. Comments on the above images:
      1. The left image shows the extreme angles of coincidence that are available in the axial FOV; everything within that "circle" is considered the FOV
      2. In the center of the FOV is a thick gray line that connects blocks 9 and 21
      3. The image on the left shows the sinogram relationship where the detectors within these two blocks communicate and display an equal distribution of sinograms
      4. Each detector within these blocks detects coincidence events, both direct and cross-plane, generating LORs that are stored and displayed in a sinogram matrix
      5. This concept is not an easy one
        Acceptable and Unacceptable Blank Scan Results
        Figure 5

      6. Here are two examples of the raw data sinograms that are used to analyze coincidence events from a Blank Scan. Which one above shows a failure in coincidence detection?
      7. The end result of a blank scan is raw data from many different sinograms displayed as a series of images. If there are 64 possible planes in the PET's FOV, then there will be 64 sets of sinogram images generated for display and analysis (here is an example of 35 of the 64). What is the difference between PET and gamma camera QC? Click here
      8. Failure to image sinograms means some type of failure with the block(s) and/or PMT(s) has occurred .... Time to call service dude
      9. Minor defect on left with large defect on right
        Figure 6

      10. The second example shows a fault with a detector (PMT) or crystal, followed by a defective block - notice the difference - ref
  2. Putting it all together - Annihilation event to Sinogram
  3. Process of recording an annihilation event
    Figure 7

    1. So let's follow an annihilation event that is recorded as a coincidence event
    2. PMTs at opposite ends pick up the luminescence created by the incoming photons
    3. The low-level discriminator will accept these pulses if they are high enough (based on the scintillation light generated by the photon's energy)
    4. The pulses then proceed to the timing circuitry, which generates a timing signal for each event
    5. The coincidence timing circuit will accept the two events if they fall within the appropriate time (5 - 12 nsec)
    6. The PHA evaluates the energy pulse, much like a gamma camera, and if the pulse falls within the energy window it is accepted
    7. An LOR is generated, and its digital signature then becomes part of the prompt sinogram
    8. A prompt event is either a true event, or it could be a random or scatter event
    9. The delayed sinogram uses a longer (delayed) timing window (the process is identified below). These delayed events are then subtracted from the prompt sinogram as an estimate of random/scatter events, leaving true events (a small sketch of the accept/reject logic follows this list)
    10. Other corrections include dead time and attenuation
    11. The image above shows the correction for random events
    12. The CRT display will generate sinograms within its 3-D format (does SPECT display the same concept?)
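
    A minimal Python sketch of the accept/reject chain described in steps 2-9; the window values below are assumptions chosen for illustration, not a specific scanner's settings:

      TIMING_WINDOW_NS = 10.0               # assumed coincidence timing window (5-12 ns range)
      ENERGY_WINDOW_KEV = (435.0, 650.0)    # assumed PHA window around 511 keV

      def classify_pair(t1_ns, e1_kev, t2_ns, e2_kev, delay_ns=0.0):
          # Returns 'prompt', 'delayed', or None for a pair of single events.
          # A non-zero delay_ns mimics the delayed window used to estimate randoms.
          lo, hi = ENERGY_WINDOW_KEV
          if not (lo <= e1_kev <= hi and lo <= e2_kev <= hi):
              return None                   # rejected by the PHA
          if abs(t1_ns - (t2_ns - delay_ns)) <= TIMING_WINDOW_NS:
              return "delayed" if delay_ns else "prompt"
          return None                       # outside the coincidence timing window

      print(classify_pair(100.0, 505.0, 104.0, 520.0))   # 'prompt'
      print(classify_pair(100.0, 505.0, 160.0, 520.0))   # None (not coincident)
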
  4. What is the difference between SPECT and PET projections/rays?
  5. Slices and Angles
    Figure 8
    1. A single SPECT projection shows information at an angle across all slices in a 3-D volume (image)
    2. A PET sinogram slice presents data acquired from a single slice across all angles within a given 3-D volume (image)
  6. Modes of acquisition
    1. Static
      1. Displays activity located in tissue at a specific time
      2. Temporal resolution is not a factor
      3. Each acquired frame of data represents a set of sinograms during a given acquisition
      4. Consider how FDG locks into tissue
      5. Radiopharmaceutical uptake can be quantified (SUV; a minimal worked example follows this list)
      6. Spatial resolution is usually very important
      7. Example - Brain scan usually acquired in one bed position, however some literature indicates that it can be done in two. Why might a two bed acquisition be a problem in a brain scan? Furthermore, at MCV there is an increase in the acquisition time. (What is the point of doing this?)
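
      Since static imaging supports SUV quantification, here is a minimal sketch of the body-weight SUV calculation with made-up numbers (it assumes the activity values are already decay-corrected to scan time):

        def suv_body_weight(tissue_kbq_per_ml, injected_mbq, weight_kg):
            # SUV = tissue concentration / (injected activity / body weight),
            # using the common approximation that 1 mL of tissue weighs ~1 g.
            injected_kbq = injected_mbq * 1000.0
            weight_g = weight_kg * 1000.0
            return tissue_kbq_per_ml / (injected_kbq / weight_g)

        print(suv_body_weight(5.0, 370.0, 70.0))   # ~0.95 for these made-up values
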
    2. Dynamic
      1. Identifies the change of activity, within the tissue, over time
      2. Temporal resolution becomes an important factor
      3. There is a series of frames, each of which has its own set of sinograms, followed by the next set, etc., identifying a dynamic process. This is similar to a flow study with a gamma camera
      4. Consider 15O distribution in a brain scan: it has a very short half-life and requires analysis of oxygen distribution over time. The radiopharmaceutical's distribution evaluates brain function
      5. An example of 11C raclopride uptake in the brain over 95 minutes is seen above, but the link no longer appears to work: http://neuroimage.usc.edu/images/Racloco.mpeg. This radiopharmaceutical attaches to D2 receptors, looking for upregulation and defining the increase or decrease of tracer uptake over time
      6. How might this be applied in cardiac imaging?
    3. Gating (Cardiac)
      1. Is another aspect of dynamic acquisition
      2. Like routine nuclear cardiology it looks at cardiac contraction
      3. The best format for gating is list mode. It allows the user to select a set of specific R-to-R waves, which should improve temporal resolution
      Whole Body Imaging
      Figure 9
    4. Whole body format - eyes to thighs or head to toe
      1. The scanner takes multiple axial acquisitions which are joined together to form a whole-body projection
      2. In this example, the axial FOV is 30 centimeters
      3. When enough counts are acquired the table moves the patient to the next bed position
      4. Technically, if a patient is 68 inches in length it would require about 6 bed positions to scan the entire body (2.54 x 68 = 173 cm / 30 cm = 5.8 bed positions); a sketch of this arithmetic, including overlap, follows this list
      5. Important - The FOV loses sensitivity as it collects data toward the periphery of the FOV, therefore each image within the acquisition overlaps the next by 3 to 5 cm
      6. However, in reality a true whole-body scan (head to toe) is completed by (1) starting at the top of the head and scanning to mid-thigh, (2) then the patient is rotated 180 degrees and scanned from the foot to the pelvis
      7. Each bed position takes between 2.5 and 5 minutes (LYSO vs. BGO) to acquire, depending on the amount of activity administered and the type of crystal
      8. Consider the concept of whole body: eyes to thighs or head to toe. Why do we have two types of whole body procedures? What might the criteria for (1) eyes to thighs and (2) head to toe acquisition?
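
      A minimal sketch of the bed-position arithmetic in items 4 and 5 (the 30 cm FOV and 3-5 cm overlap come from this lecture; the 4 cm overlap value and the rounding scheme are assumptions):

        import math

        def bed_positions(scan_length_cm, axial_fov_cm=30.0, overlap_cm=4.0):
            # Each additional bed position adds (FOV - overlap) of new coverage.
            if scan_length_cm <= axial_fov_cm:
                return 1
            step = axial_fov_cm - overlap_cm
            return 1 + math.ceil((scan_length_cm - axial_fov_cm) / step)

        # 68 inches = 172.7 cm: 5.8 beds ignoring overlap, about 7 once overlap is included
        print(bed_positions(68 * 2.54))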

      9. 2D vs. 3D Imaging
        Figure 10
  7. With and without "Collimation"
    1. 2-D Mode
      1. When thinking about how collimation might help improve a PET acquisition, it can be done in what is referred to as 2-D mode. Some imaging systems are equipped with tungsten "septa" located between each ring of blocks along the axial (z-axis) direction of the gantry (see above)
      2. Physically these (annular) septa are 7 - 10 cm in length and 1 - 5 mm in thickness
      3. They can be fixed, but usually retract into and out of the rings as needed
      4. Their key role is to prevent excessive cross-over of events between rings, which reduces scatter
      5. In the diagram above, A shows an event that occurs in a direct plane, while C represents a cross-plane event
      6. In B no event is detected because the annular septa absorb the photon's energy
      7. Adjacent rings also have coincident circuitry in order to account for a cross plane event. Usually this feature can cross over around 5 to 6 rings
      8. In 2-D imaging this type of acquisition improves image quality by decreasing scatter, but it also reduces system sensitivity
      9. Scatter within the prompt events may be as much as 30 - 40%; when annular septa are used, scatter may be reduced to 10 - 15%
    2. 3-D Mode
      1. Here the annular septa do not play their "extended" role, therefore photons can be recorded from any angle, which increases sensitivity, but at the price of increased scatter
      2. One may argue that excessive scatter will reduce image contrast and affect resolution
      3. 3D is usually done with brain imaging and with systems that have Time of Flight (TOF)
      4. Interesting point - I have read several articles that favor 3D for better lesion detection, "This study showed that, given a patient's size and scanner type, the fully 3D acquisition mode allowed better or equivalent detection performance than the 2D mode for an injected dose corresponding to the peak 3D NEC rate (noise equivalent count)." - ref

      2D and 3D - Michelograms
      Figure 11

    3. Michelograms displaying 2D and 3D acquisitions
      1. This gives us yet another concept of what the axial view can acquire based on its "dimensional" format, which depends on the number of rings composing the FOV
      2. In the strictest sense, Image A shows only direct coincidences, where the opposing block in the same ring acquires the coincidence
      3. Image B is in 2-D format and can communicate between 7 different rings
      4. Image C is in 3-D format; there are no annular septa, so it appears that all angles of coincidence are available - increasing counts
      5. Consider the display above and ask the following questions:
        1. Which acquisition mode gives you better resolution?
        2. ... increase in counts?
        3. ...reduces scatter or random?
        4. The answer does not appear to be an easy one

          Scatter Vs. Random Events
          Figure 12
  8. Where does scatter and random events come from and why are they recorded as a coincident events?
    1. Review of the diagram above shows how a scatter event can be recorded. From the initial annihilation, the lower-left photon travels in its true direction and is recorded. However, the opposing photon is deflected by Compton scatter, changing its trajectory and sending the photon in the wrong direction. Drawing the 180-degree LOR between the points where the two photons hit the crystals generates an LOR in the wrong location. For this to happen both events must fall within the timing window and be accepted by the PHA
    2. In a random event the diagram shows two different annihilation events that occur in different areas of the body. In the first annihilation event, one photon is recorded while the opposing one leaves the system (as indicated by the arrow). With the second annihilation, the same thing happens: only one photon is recorded while the other escapes detection. Hence a wrong LOR is defined if these two detected events fall within the timing window and are accepted by the PHA
    3. Effect of the amount of activity on the variation of counts
      Figure 13

    4. How does the amount of activity affect different types of detection events? The above graph represents how increased levels of activity affect singles, random, true, and scatter events
    5. In addition, the longer the timing window, the greater the occurrence of scatter and random events
  9. Other factors that affect acquisition
    1. Normalization
      1. What causes non-uniformity? The cause is a result of several factors (much like a regular gamma camera): crystal structure, electronic configuration, and PMT/HV. Think about it: there are some 10k crystal elements attached to several hundred PMTs which operate "independently." For this reason non-uniformity within an axial acquisition occurs and must be corrected! To generate a correction map, a PET scanner must acquire a normalization scan

      2. Figure 14

      3. Uniformity correction is initiated with the acquisition of a coincidence source placed at the center of the PET's FOV. Data is then collected in 2-D and/or 3-D formats, and normalization factors are calculated for both imaging modes. In the above example a cylinder contains 68Ge; however, many PET units have their own 68Ge rod sources housed within the scanner that "pop" out when a Blank Scan or Normalization scan is required
      4. Normalization of LORs
        Figure 15
      5. Normalization Procedure
        1. The normalization factor (Fi) is determined by first looking at all the LORs in a given plane and finding their average (Amean). Then each individual LOR in that plane (Ai) is assessed and a ratio is calculated for every LOR (Fi = Amean / Ai), which results in generating "normalized," homogeneous LORs throughout the entire plane
        2. Example of the number of planes - a BGO system with 32 rings has 63 2-D imaging planes - ref

        3. Normalization Factor
          Figure 16
        4. Application of the normalization factor - When acquiring an image the actual counts, Ci, are multiplied by Fi to determine the normalized counts for each LOR, which generates Cnorm,i (= Fi x Ci); a minimal numeric sketch follows this list
        5. Can you relate this to SPECT and its uniformity correction map?
      6. A normalization scan takes 6 or more hours to complete, depending on the strength of the 68Ge source. Why does it take so long to complete a normalization procedure? Relate this to a 120-million-count correction matrix vs. 63 data sets in PET
      7. This process must be done at least once a month and may be recommended weekly, depending on the manufacturer
      8. The normalization scan creates a detector identification map, which corrects each sinogram to its appropriate location
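
      A minimal numeric sketch of the Fi = Amean / Ai and Cnorm,i = Fi x Ci arithmetic described above (the LOR counts are made up, not real scanner data):

        # Counts per LOR from the normalization scan (Ai) for one plane
        norm_counts = [980.0, 1010.0, 1050.0, 960.0]
        a_mean = sum(norm_counts) / len(norm_counts)        # Amean = 1000.0
        factors = [a_mean / a for a in norm_counts]         # Fi per LOR

        # Counts per LOR from a patient acquisition (Ci)
        patient_counts = [250.0, 240.0, 270.0, 230.0]
        normalized = [f * c for f, c in zip(factors, patient_counts)]   # Cnorm,i
        print([round(f, 3) for f in factors])
        print([round(c, 1) for c in normalized])
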
    2. Photon Attenuation
      1. In an ideal environment the body habitus would have uniform density. If this were the case, then attenuation of the 511 keV gammas would be nothing more than a mathematical calculation to determine LOR attenuation - remember the Chang filter in SPECT?
      2. However, as you well know, this is not the case (except the brain); therefore, depending on the area under acquisition, multiple densities (bone, tissue, air) will affect the amount of photons being recorded
      3. Likewise, remember that photons coming from the center of the body will have greater attenuation. In comparison, more photons will be recorded when the annihilation occurs closer to the surface
        Compare the images and explain the differences
        Figure 17
        http://www.med-ed.virginia.edu/courses/rad/petct/Interpretation.html
      4. The above example shows the difference between the attenuated and non-attenuated images
      5. Therefore, attenuation (μ) along an LOR is independent of where on that line the photons originated, but completely dependent on the internal structures of the body habitus
      6. When density is uniform the following formula can be used:
        1. Attenuation when there is Uniformity
          Figure 18
        2. Where P is the probability of detecting the coincidence based on the attenuation
        3. The formula presents two different tissues with two different μ values, μa and μb. Since both a and b have the same density, the probability of coincidence reduces to P = e^(-μD), where D is the total path length through the body along the LOR (a small numeric sketch follows this list)
        4. Can you give an example where this mathematical calculation can be used in a PET scan?
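
        A minimal numeric sketch of the uniform-density case, assuming Figure 18 expresses the standard relation P = e^(-μD); the μ value (~0.095/cm for water at 511 keV) and the 20 cm path length are illustrative only:

          import math

          def coincidence_survival(mu_per_cm, path_cm):
              # P = exp(-mu * D): probability that both 511 keV photons escape a
              # uniform medium whose total path length along the LOR is D.
              return math.exp(-mu_per_cm * path_cm)

          p = coincidence_survival(0.095, 20.0)
          print(round(p, 3), round(1.0 / p, 1))   # survival ~0.15, correction factor ~6.7
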
      7. Application of attenuation correction with non-uniformity becomes significantly more difficult, since there are multiple density changes occurring internally. In addition, the specific location of each density type is unknown. A formula representing this type of attenuation:
          Formulation used for Non-uniformity
          Figure 19
        1. There are two parts to this problem - first is P, the probability of detecting a coincidence, which changes in different parts of the body where variations in density occur (example - chest/abdomen)
        2. The second component relates to G, a sum over (n) organs with varying linear attenuation and thickness (μiDi); a small numeric sketch follows this list
        3. To apply the above formula and mathematically determine what the different attenuation factors would be is an impossible task
        4. Hence the need for an external radiation source - a radioactive rod source or CT is the answer
        5. CT application is the method of choice used to determine attenuation coefficients. Not only is the process a lot faster than using a rod source, but it also allows for fusing anatomy and physiology
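
        A minimal extension of the same idea to the non-uniform case, assuming Figure 19 expresses P = e^(-Σ μiDi) over the n tissues crossed by the LOR; the μ values and thicknesses below are rough illustrations only:

          import math

          def coincidence_survival(segments):
              # segments: list of (mu_i per cm, D_i cm) for each tissue along the LOR.
              return math.exp(-sum(mu * d for mu, d in segments))

          # Illustrative chest LOR: soft tissue / lung / soft tissue
          p = coincidence_survival([(0.095, 8.0), (0.030, 12.0), (0.095, 8.0)])
          print(round(p, 3), round(1.0 / p, 1))   # survival and correction factor
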
          Application of Attenuation Correction Using CT
          Figure 20
      8. Application of CT for attenuation
        1. Daily QC is done on the CT unit, where linear attenuation is determined by scanning air. This is referred to as "cutting the air"
        2. In PET, a daily Blank scan is collected with a uniform source to confirm system uniformity with all LORs
        3. Next, a transmission scan is done over the area of interest with the patient in position
        4. This is followed by the PET acquisition
        5. Then a ratio (relationship) is established between the LORs and CT linear attenuation value
        6. If linear attenuation increases in an area being acquired, this means there is increased density, which translates to adding counts to the PET data
        7. Alternatively, if the attenuation drops (less density), less correction is applied to the related LORs
      9. Additional comments on PET/CT attenuation correction (detailed lecture on this topic, link here)
        1. Misalignment can be a problem. As an example 2 cm variation may cause up to a 30% variation with the accumulated activity within the structure of interest
        2. Segmentation can be applied, using known μ values generated from the acquired CT data. Tissue types are identified by their attenuation coefficients and assigned a known μ value for the corresponding tissue class. These known μ values include bone, lung, and soft tissue
        3. Scaling is a process that has to occur when CT attenuation is applied. CT x-rays vary in kVp output, and their energy is a lot lower than the 511 keV photons. Therefore, via computer processing, the CT values are "scaled up" to 511 keV to assure correct translation into the attenuation map
        4. Breathing in PET imaging causes distortion at the base of the lung. In CT, the patient usually holds his/her breath, since this reduces the amount of distortion in the CT image. However, when attenuation correction is applied, a cold "cup" defect can appear at the lung base because the breath-hold CT density does not match the moving lung base in the PET data
        5. Contrast and metal implants cause excess CT attenuation, which can add apparent activity and produce false positives on the PET scan
        6. More on this in another lecture
    3. Random coincidence
        Calculate the Rate of Random Events
        Figure 21
      1. The formula shows R - the rate of random coincidences
      2. The technologist can increase the level of random coincidences in a PET scan by opening up the % window, extending the timing window, and/or increasing the activity administered to the patient
      3. In turn, increased randoms may produce artifacts by reducing image contrast
      4. The rate of randoms can be determined with the formula above, R = 2τ x C1 x C2, where τ is the timing window in nanoseconds and C1 and C2 are the singles count rates of the two detectors in the coincidence circuit (a numeric sketch follows below)
      5. The difference between true and random counts
        1. Random events increase with the square of the dose administered
        2. True events increase linearly with the administered dose
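
        A minimal numeric sketch of R = 2τ x C1 x C2 and of why randoms grow with the square of the activity (the singles rates are made up):

          def randoms_rate(tau_ns, c1_cps, c2_cps):
              # Random coincidence rate (counts/sec) for one detector pair.
              return 2.0 * (tau_ns * 1e-9) * c1_cps * c2_cps

          # Doubling the administered activity roughly doubles each singles rate,
          # so randoms go up ~4x while trues only double.
          print(randoms_rate(6.0, 200_000, 200_000))   # baseline: 480 randoms/sec
          print(randoms_rate(6.0, 400_000, 400_000))   # ~1920 randoms/sec
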
      6. Two methods to reduce random photon detection
        1. The simple approach is to reduce the timing window

        2. Figure 22
        3. Or - using an LYSO crystal, set two timing windows, one at 6 ns and the other at 54 ns
        4. The 6 ns window contains both random and true prompts
        5. Because of the length of the 54 ns window, essentially only random counts are counted
        6. Determine the number of counts per nanosecond for each window
        7. Subtracting the random prompts from the (true + random) prompts results in the true prompts (a numeric sketch follows)
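
        A minimal numeric sketch of the counts-per-nanosecond bookkeeping in steps 4-7 (the 6 ns and 54 ns windows come from this lecture; the counts are made up):

          prompt_counts, prompt_window_ns = 120_000, 6.0     # trues + randoms
          delayed_counts, delayed_window_ns = 270_000, 54.0  # essentially randoms only

          randoms_per_ns = delayed_counts / delayed_window_ns
          randoms_in_prompt = randoms_per_ns * prompt_window_ns
          true_estimate = prompt_counts - randoms_in_prompt
          print(randoms_in_prompt, true_estimate)            # 30000.0 90000.0
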
    4. Scatter counts recorded as coincidence
      1. In routine nuclear medicine, when scatter occurs, most of it can be removed. That is the role of the LLD setting in the PHA
      2. In PET, when Compton scatter occurs, these highly energetic gammas may retain enough energy that they cannot be separated from true counts, depending on the applied % window. This results in scatter counts being accepted
      3. Just like random events, scatter reduces contrast and degrades image quality
      4. The following situations will increase scatter during a PET acquisition
        1. Increased patient density
        2. Annihilation events that occur deep within body
        3. Increasing the width of the % window
        4. Increasing the administered dose - scatter increases in a linear fashion
      5. Remember, scatter is detected because a deflected photon and a non-deflected gamma arrive within the timing window, causing the LOR to have an incorrect orientation relative to the annihilation event
      6. So how can you reduce scatter?
        1. Adjusting the PHA - reduce the % window
        2. Imaging in 2-D, instead of 3-D
        3. It has been suggested that acquiring counts outside the FOV and subtracting them from counts acquired inside the FOV will give you "true counts." The problem with this approach is the assumption that scatter is uniform throughout the FOV
    5. Dead Time (τ)
      1. This issue happens with any scintillating device.
      2. Consider: In order for a scintillation event to be recorded it must: be absorbed by the crystal, scintillate, produce light, be detected by the photocathode, be amplified by the PMT, and have its location identified
      3. Interruption of that process is the effect of dead time
      4. Dead time loss occurs when a second coincident event occurs at the same location, causing it not to be detected
      5. To reduce this problem, faster electronics and shorter crystal decay times have been developed
      6. Pulse pile up can occur when
        1. A PET system detects 2 photons (in a coincident fashion) in the same detector element; their summed pulse is seen as 1.022 MeV (coincidence summing), causing the PHA to reject it
        2. In another scenario, 2 scattered photons can be detected in coincidence with a summed total that falls within the PHA window, hence generating an incorrect LOR.
        3. Pulse pile up occurs when excessive levels of activity are administered to the patient. The end result is image distortion
      7. Dead time loss can be reduced with the application of buffers within the electronics, pulse pile-up rejection circuits, faster electronics, and crystals with shorter decay times
    6. Radial Elongation
      Figure 23
    7. Radial Elongation (parallax error/radial astigmatism)
      1. As the words imply, distortion occurs via the elongation of the actual data
      2. This becomes more prevalent in the peripheral regions of the FOV
      3. From the above diagram, a coincidence event is recorded at the edge of a detector block or element, where the photons arrive at a tangent to the opposing crystals
      4. The distance between the actual location and its distorted placement causes "blurring" of the image by essentially elongating the data. As noted in the diagram, 2 blocks are involved in this detection, where essentially there should be just one
      5. This elongation effect increases when
        1. The FOV is smaller
        2. When it occurs further away from the center
        3. In PET systems that have thicker crystals

Return to the Beginning of the Document
Next Lecture - Time of Flight
Return to the Table of Content


Routine Quality Control of Clinical Nuclear Medicine Instrumentation: A Brief Review, by P. Zanzonico, SNM 2008
A lesion detection observer study comparing 2-dimensional versus fully 3-dimensional whole-body PET imaging protocols, by C. Lartizien, et al., JNM 2004
Performance evaluation of a large axial field-of-view PET scanner: SET-2400W, by T. Fujiwara, et al., Annals of Nuclear Medicine 1997