Optical coherence tomography can image




















Toth; Sina Farsiu; Prithvi Mruthyunjaya.

The choroid is composed of three distinct layers: the innermost choriocapillaris, a middle layer of small vessels, and an outer layer of nonfenestrated large caliber vessels adjacent to the sclera.

Even spectral domain-optical coherence tomography (SD-OCT) machines, while excellent at providing retinal detail, are also limited in their ability to resolve choroidal detail for several reasons: (1) poor signal penetration through the retinal pigment epithelium (RPE) and choroid, (2) light-beam defocus at the level of the choroid relative to the retina, (3) inadequate contrast between structures of critical interest, and (4) nonideal lateral resolution of the scan.

Placing an OCT device closer to the eye results in an inverted image of the retina and places the outer choroid in closer proximity to the zero-delay line, producing improved choroidal visualization because the choroid then occupies a lower-frequency portion of the Fourier-transformed interferogram compared with standard OCT. However, there is currently no well-established metric to objectively assess how clearly the details of the outer choroid are imaged on SD-OCT, and no studies have quantitatively validated the subjective perception that inverted imaging improves visualization of the choroid-scleral junction (CSJ) and outer choroidal vessels (OCV).
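The depth-to-fringe-frequency relationship described above can be illustrated with a short numerical sketch. All wavenumbers and depths below are illustrative stand-ins, not parameters of any machine in this study; the point is only that a reflector nearer the zero-delay line produces a lower-frequency spectral fringe and therefore lands in an earlier bin of the Fourier-transformed interferogram.

```python
import numpy as np

# Each reflector at optical depth z (relative to the zero-delay line)
# contributes a spectral fringe cos(2*k*z): deeper structures oscillate
# faster in k and so map to higher-frequency FFT bins.
n_samples = 2048
k = np.linspace(7.0, 8.0, n_samples)   # wavenumber sweep, arbitrary units
shallow_z, deep_z = 50.0, 400.0        # hypothetical depths from zero delay

interferogram = np.cos(2 * k * shallow_z) + np.cos(2 * k * deep_z)
a_scan = np.abs(np.fft.rfft(interferogram))

# Expected fringe counts over the sweep: z / pi cycles (~16 and ~127 here),
# so the shallow reflector peaks in an early bin, the deep one much later.
shallow_bin = np.argmax(a_scan[:60])
deep_bin = np.argmax(a_scan[60:]) + 60
```

Moving a structure toward the zero-delay line (as inverted imaging does for the outer choroid) shifts its peak toward lower bins, where spectrometer roll-off is least severe.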

Additionally, there are only limited data on how image inversion affects measurements of choroidal thickness among and between various commercially available SD-OCT units. The goal of this study was three-fold: (1) to determine whether inverted imaging significantly alters choroidal thickness measurements compared with upright images; (2) to define outer choroidal contrast as a metric of choroidal imaging detail and to determine whether image inversion improves outer choroidal contrast; and (3) to determine whether inverted SD-OCT imaging improves subjective visualization of CSJ continuity and large OCV.

A prospective, comparative, consecutive case series was performed including 42 eyes of 21 healthy volunteer subjects without known retinal or choroidal disease. This research followed the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board. Informed consent was obtained from the subjects after explanation of the nature and possible consequences of the study. Each subject had an ophthalmologic examination, including funduscopy, prior to imaging.

Exclusion criteria for normal subjects included known or discovered retinal disease. We chose standard SD-OCT imaging parameters currently in clinical use to capture fovea-centered line scans. No research modifications were made to the SD-OCT capture protocols except to use the same settings on all eyes of all patients. For the Bioptigen (Bioptigen Inc.) and Spectralis images, we captured A-scans per B-scan with 40 averaged B-scans per image.

There were fewer averaged images on Cirrus because the capture software limited the summed high-resolution scan to a maximum of 20 averaged B-scans, in contrast to the standard number of images obtained on the other two machines. Manually inverted images were obtained by moving the device closer to the eye to bring the outer choroid into focus at the top of the image, thus placing the zero-delay line closer to the RPE rather than the inner retina Fig.

We obtained one additional image set using the Spectralis enhanced depth imaging (EDI) mode, which is a preset, software-driven algorithm that places the RPE near the zero-delay line while producing an upright enhanced choroidal image without the need to manually push the device closer to the eye (Fig. 1). Figure 1.

For a given eye, we obtained upright and inverted imaging on three machines plus the Spectralis EDI mode, for a total of seven imaging modalities per eye. On both the Bioptigen and Spectralis machines, obtaining inverted images did not alter the overall detail of the neurosensory retina (Figs.). On the Cirrus machine, inverting the images resulted in low-resolution, highly pixelated images due to a default software mechanism put in place to prevent the operator from accidentally obtaining an inverted image (Fig.).

Segmentation, Choroidal, and Retinal Thickness Measurements. The inner retinal surface was segmented by drawing a line at the internal limiting membrane—vitreous junction. We examined a single-point foveal retinal thickness measurement at the center of the fovea, the lowest point on the nerve fiber layer—vitreous boundary, to determine whether inversion creates distortion that could cause differences in observed choroidal thickness.

Figure 2. Choroidal and retinal segmentation. Average choroidal thickness and point choroidal thickness measurements are depicted. The red asterisk denotes the foveal center. ILM: internal limiting membrane. For choroidal thickness comparisons, average subfoveal choroidal thickness was calculated across a 4-mm choroidal segment centered at the fovea (Fig. 2). Axial pixel pitches for the Bioptigen, Cirrus, and Spectralis machines were 3.
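As a sketch of how such measurements convert from pixels to micrometers, the snippet below averages a segmented choroidal thickness profile over a 4-mm fovea-centered window. The pixel pitches and the segmentation output are hypothetical placeholders, not the actual machine parameters or study data.

```python
import numpy as np

# Hypothetical per-A-scan choroidal thickness (in pixels) from segmentation.
rng = np.random.default_rng(0)
thickness_px = 90 + rng.integers(-5, 6, size=512)  # synthetic B-scan profile
fovea_index = 256                                  # A-scan at the foveal center

axial_pitch_um = 3.2      # assumed axial pixel pitch, um/pixel
lateral_pitch_um = 11.0   # assumed lateral spacing between A-scans, um

# Average subfoveal choroidal thickness over a 4-mm segment (2 mm each side).
half_window = int(2000 / lateral_pitch_um)
segment = thickness_px[fovea_index - half_window : fovea_index + half_window + 1]
avg_thickness_um = segment.mean() * axial_pitch_um
point_thickness_um = thickness_px[fovea_index] * axial_pitch_um
```

Because each device has a different axial pixel pitch, comparisons of thickness between machines must be made in micrometers rather than pixels.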

A gray pixel value (GPV) histogram is a plot of the frequency of occurrence of each GPV, where the horizontal axis represents the grayscale value from 0 to 255. The shape of the histogram of a low-contrast region is very similar to the detector noise distribution (often modeled as Gaussian), and its width corresponds to the noise variance (green and red curves in Fig.).

The existence of high-contrast anatomical features results in a wider GPV histogram (blue curve in Fig.). The higher the contrast of the image, the closer the hyperreflective and hyporeflective GPVs are to 255 and zero, respectively. Thus, the histogram of an image with high-contrast choroidal vessels is wider than that of a low-contrast image (histograms in Figs.). We utilize the full width at half maximum (FWHM), which is the width of the histogram at half of its maximum peak, to quantify this phenomenon.
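The contrast metric just described can be sketched in a few lines: build the GPV histogram of a region of interest and measure its FWHM. The test images below are synthetic stand-ins (a unimodal "low-contrast" patch and a bimodal "high-contrast" one), not OCT data.

```python
import numpy as np

def histogram_fwhm(roi, bins=256):
    """Width of the GPV histogram at half of its maximum peak count."""
    counts, edges = np.histogram(roi.ravel(), bins=bins, range=(0, 255))
    half_max = counts.max() / 2.0
    above = np.nonzero(counts >= half_max)[0]
    # Distance between the outermost bins that reach half the peak count.
    return edges[above[-1] + 1] - edges[above[0]]

rng = np.random.default_rng(1)
# Low contrast: values clustered around one gray level (noise-like histogram).
low_contrast = rng.normal(120, 8, size=(64, 64)).clip(0, 255)
# High contrast: a mix of hyporeflective and hyperreflective pixels.
high_contrast = np.where(rng.random((64, 64)) < 0.5,
                         rng.normal(60, 8, (64, 64)),
                         rng.normal(190, 8, (64, 64))).clip(0, 255)
```

A wider FWHM thus indicates a region whose gray values span more of the available range, i.e., higher outer choroidal contrast.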

We then numerically compare the accuracy of this method to the more computationally intensive, yet more rigorous, direct simulation approach. In this section we derive an approximate relationship between the scattered field resulting from normally incident plane wave illumination and the scattered field resulting from a focussed illumination beam. This relationship is derived by first considering the analytic solutions for both scenarios when imaging a single point scatterer.

However, the derived relationship will later be applied to synthesize the focussed illumination scenario from the plane wave scenario, using the output of a full-wave 3D PSTD simulation. We begin by describing the problem for a monochromatic source. We assume a low numerical aperture (NA) objective and that depolarization due to focussing is negligible, allowing a scalar description to suffice in this scenario.

Consider first Fig. This expression is derived from Eq. In Fig. Here we assume that, in terms of the incident field arriving at the point scatterer, the only change compared to scenario 1 is a phase shift. Regarding the magnitude of the field arriving at the point scatterer, this is assumed to be unaltered relative to the normally incident plane wave case. It can be seen from Eq.

Importantly, these phase factors in Eq. While Eqs. To do this with confidence, we must carefully consider the validity of each assumption made when deriving Eq. In highly scattering samples the incident planar wavefront may become significantly perturbed. This alone does not prohibit the method from being applied to highly scattering samples as it would if we were relying on a Born-like approximation. Rather, we only require the phase change at the region of interest to be correlated with the expected phase ramp when tilting the angle of an incident plane wave.

It is thus plausible that the assumed phase change may still be acceptable in highly scattering samples. It was also assumed above that the magnitude of the incident field arriving at the point scatterer does not change between the normally incident and tilted plane wave cases.

Thus, this implicitly accounts for attenuation of the incident field whilst heading toward our layer of interest. Again, this is contrary to a Born-like approximation, which would assume free space propagation between the source and a region of interest within the sample. Next, in terms of the scattered field returning towards the detector from the point scatterer under consideration, it was assumed that the only difference between scenario 1 and scenario 2 is a phase change uniformly applied across the entire scattered wavefront.

This assumption also implicitly allows for the scattered field to be perturbed through shallower regions of the sample when propagating towards the detector. In other words, considering our scatterer as a single point source, the possibly perturbed scattered field at any location will be altered in tandem with the change experienced by the point source.

The validity of these underlying assumptions will be investigated further in the results section, however, we have outlined the reasons why the relationship expressed in Eq. We now continue by considering the final scenario shown in Fig. Here we can consider the incident beam to be a superposition of a spectrum of plane waves having all possible incident directions within the NA of the objective lens. Using the result from Eq. Now that we can synthesize the field arriving at the back focal plane of the lens in the focussed-illumination case, we can also compute how this field would be coupled into an optical fiber, thus describing the sample arm signal of the OCT interferometer system.
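The superposition step can be sketched numerically. Under the paper's central approximation, the response to each tilted plane wave within the NA is the normal-incidence field multiplied by the phase ramp exp(i·k·sinθ·x); summing those components synthesizes the focussed-illumination field. The free-space example below (all values illustrative, no scatterers) just shows that this superposition reproduces a focussed spot.

```python
import numpy as np

# Synthesize a focussed beam as a superposition of tilted plane waves
# within the objective NA, each obtained from the normal-incidence field
# via a transverse phase ramp (the paper's central approximation).
wavelength = 1.3                       # um, hypothetical source wavelength
k = 2 * np.pi / wavelength
x = np.linspace(-20, 20, 801)          # transverse coordinate, um
na = 0.2                               # low NA, as assumed in the derivation

normal_field = np.ones_like(x, dtype=complex)   # normally incident plane wave
angles = np.linspace(-np.arcsin(na), np.arcsin(na), 101)
synthesized = sum(normal_field * np.exp(1j * k * np.sin(t) * x)
                  for t in angles)

# In free space the components add in phase only at x = 0: a focussed spot.
intensity = np.abs(synthesized) ** 2
```

In the actual method the normal-incidence field is not unity but the output of a single PSTD run through the sample, so one simulation serves every tilt angle and scan position.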

The benefit of this approach is that we are only required to run the PSTD simulation once, using plane wave illumination, as opposed to the more rigorous approach which would require a separate PSTD simulation for each scan position, with the interaction of the sample with a focussed illumination beam being explicitly modelled.

We begin with a demonstration of the introduced method by synthesizing the point spread function (PSF) of a typical commercial scanning OCT system. Here we model a Thorlabs, Inc. In the PSTD simulation, the plane wave is introduced to the medium as a temporal pulse, and for this sample the simulation iterates through discrete time steps to fully propagate the pulsed wavefront through the sample, including its interaction with the refractive index inhomogeneities (scatterers) and the resulting scattered field.

A region of each of these scans was then used to populate the final B-scan. This synthesized image is compared to a reference image computed using a more rigorous approach which explicitly models a scanned and focussed beam, and the discrepancy between the reference and synthesized images is shown in Fig.

The reference B-scan required a separate 3D PSTD simulation of a focussed beam propagating in the sample for each of the 47 scan positions, and thus required significantly more computational resources.

A detailed comparison of computational requirements for each approach is left for the discussion section. The error, although small, increases gradually as a function of depth. All images are presented on a linear scale. Error metric: Eq. More information on this numerical phantom may be obtained from [38]. See Fig. To investigate the validity of our field synthesis technique for such scattering samples, we first explore the phase change within the sample due to a change in the tilt angle of an incident plane wave.

Recall that the field synthesis technique described in Section 2 makes the assumption that this phase change is identical to that experienced by the plane wave in the absence of scatterers. The field was computed using the previously mentioned PSTD solver.

We note that for technical reasons related solely to the PSTD method, in these examples we have simulated a weakly focussed incident beam, rather than a pure plane wave.

Figure 4(a) shows that the incident weakly focussed beam has been significantly perturbed by propagation through the sample, resulting in a phase structure consistent with that of a speckle pattern. A phase ramp is visually identifiable despite the speckle-like phase structure. Figure 4(c) shows the phase of the synthesized field. This is calculated by applying the phase ramp predicted by the tilted plane wave to the field from Fig.

The two phase images in Fig. are in close agreement. To further analyze the agreement between the directly evaluated field (b) and the synthesized field (c) arriving at this layer within the sample, we plot histograms of the phase discrepancy over the region of interest for two differing tilt angles. In particular, Fig. We see that for the lesser tilt angle, the phase discrepancy has a standard deviation of 0. Importantly, in both cases the mean of the phase discrepancy is zero.
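The phase-discrepancy statistics quoted above can be computed as in the sketch below. The two phase maps here are synthetic stand-ins for the directly evaluated and synthesized fields (a speckle-like phase plus small perturbations); the key detail is wrapping the difference into (-pi, pi] before taking the mean and standard deviation.

```python
import numpy as np

# Synthetic stand-ins for the two phase maps over the region of interest.
rng = np.random.default_rng(2)
direct_phase = rng.uniform(-np.pi, np.pi, size=(128, 128))      # speckle-like
synth_phase = direct_phase + rng.normal(0.0, 0.3, size=(128, 128))

# Wrap the pointwise difference into (-pi, pi] before computing statistics,
# so that e.g. phases of -3.1 and +3.1 rad count as a small discrepancy.
discrepancy = np.angle(np.exp(1j * (synth_phase - direct_phase)))
mean_err = discrepancy.mean()
std_err = discrepancy.std()
```

A zero-mean discrepancy, as reported for both tilt angles, indicates the synthesized phase ramp is unbiased even when individual pixels disagree.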

Figures 6(a) and 6(b) show the standard deviation of the phase discrepancy and the magnitude discrepancy, respectively, for a range of incident tilt angles of the source. The discrepancies are observed to increase both as a function of the incident angle of the weakly focussed source and as a function of depth within the sample.

The preceding analyses are for monochromatic plane waves. Next we consider how the directly evaluated and synthesized fields within the sample vary as a function of wavelength.

We also observe in Fig. This is important, as it is this rate of change of the phase in the arriving fields (and thus also the scattered fields) which encodes the depth information of the scatterers in Fourier-domain OCT. Example of the directly evaluated field (solid red) compared to the synthesized field (dotted blue). Also shown in (b) is the phase difference between the directly evaluated and synthesized fields (solid black). Using this isolated field, an OCT A-scan was generated and compared between the directly evaluated signal and the synthesized signal for different tilt angles of a weakly focussed beam.

The A-scans were attained by computing the back-scattered field coupled into a single-mode optical fiber and calculating the interference with a reference signal. The directly evaluated and synthesized fields (red and blue, respectively) show close agreement at the peak of the signal, yet some disagreement beyond the peak. For the larger angle of incidence of the illuminating weakly focussed beam shown in Fig.
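The final step, forming an A-scan from the spectral interference of the sample-arm field with a reference, can be sketched for a single point scatterer. All parameters below are illustrative, not those of the simulation; the scatterer contributes a round-trip phase exp(2ikz), and a Fourier transform of the (DC-subtracted) spectral intensity recovers its depth.

```python
import numpy as np

n_k = 1024
k = np.linspace(4.488, 4.987, n_k)     # wavenumber samples, rad/um (assumed)
z_scatterer = 120.0                    # depth of a single point scatterer, um

reference = np.ones(n_k, dtype=complex)          # reference-arm field
sample = 0.1 * np.exp(2j * k * z_scatterer)      # round-trip phase, weak return

# Detected spectral intensity; remove the DC level before transforming.
spectrum = np.abs(reference + sample) ** 2
a_scan = np.abs(np.fft.fft(spectrum - spectrum.mean()))[: n_k // 2]

# FFT bin m corresponds to depth m * pi / (n_k * dk).
dk = k[1] - k[0]
depths = np.arange(n_k // 2) * np.pi / (n_k * dk)
peak_depth = depths[np.argmax(a_scan)]
```

The A-scan peak lands at the scatterer depth to within one depth bin, which is the sense in which the directly evaluated and synthesized signals are compared at and beyond their peaks.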

This correction factor simply accounts for the directly evaluated incident field arriving at the point scatterer, rather than the assumed field that is used to generate the synthesized case.

This corrected case dotted black line is seen to be much closer to the directly evaluated case for all regions except at the tail of the signal. As this correction requires knowledge of the exact field arriving in the plane of the scatterer, such a correction could not be applied in practice.

However, what this does show is that if the field arriving at the scatterer is accurately accounted for, then the synthesized OCT signal is very close to the directly evaluated case. This suggests that the journey of the scattered field back through the sample towards the detector does little to further degrade the synthesized signal, and that it is primarily the discrepancy in the arriving field at the scatterer location that is the main downfall of the synthesized approach.

Red lines show the directly evaluated A-scan, blue lines show the A-scan formed from the synthesized field. Black dashed lines show a "corrected" synthesized A-scan using knowledge of the field arriving at the point scatterer.

Error metric summed over all depths for the A-scans shown in Fig. Each line shows the error as a function of the tilt angle of the illuminating wave. Blue: synthesized field. Finally, we evaluate the accuracy of a full synthesized OCT B-scan of the scattering object illustrated in Fig. Figure 10(b) shows a reference B-scan simulated by direct evaluation of the scanning-mode PSTD-based image formation model. OCT relies on light waves. It cannot be used with conditions that interfere with light passing through the eye.

These conditions include dense cataracts or significant bleeding in the vitreous. After this, a linear reduction in the thickness of this band was observed. The reproducibility of OCT measurements of total retinal thickness among the 10 observers was high (coefficient of variance, 0.

It was lower (coefficient of variance, 0. In most subjects the inner band was absent at the foveola and occupied half the total retinal thickness at the edge of the foveal pit (Table 1). At the extreme settings of polarization, the overall retinal thickness and the thickness of the outer band remained constant.

The reproducibility of these measurements was very low, with a coefficient of variance of 0. In scans of patients with RP, although inner and outer bands could be readily identified, the inner band was less intense. Fifty-four areas of bone spicule hyperpigmentation were imaged in the four patients.

All scans passing through these lesions had discrete regions of very high signal coincident with the bone spicules and deep to the innermost border of the inner band (Fig.). There was also a discrete reduction in signal intensity of the outer band under such high-signal areas. The patient with the deeper signals was 47 years old with AD RP. Two patients (one with US and one with AD RP) had a central macular area of normal appearance with a surrounding area of hypopigmented retina.

Scans spanning these areas had three distinctive features coincident with the hypopigmented retina (Fig.). First, the inner band of the OCT image of the hypopigmented retina had an increased signal. Second, there was a reduced distance between the superficial borders of the inner and outer bands. Finally, the outer band was much thicker than that of the central macula.

The pale rim of the laser lesions corresponded to an increase in the signal intensity from deeper tissues. The optical coherence tomographic imaging of tissues is dependent on their optical properties. In any illuminated tissue the only structures to generate high signal are those that return light close to or along the axis of the incident light. Tissue components that absorb light or direct it away from the OCT detector have correspondingly low signal intensities.

Thus, the intensity of a recorded signal is related to the proportion of low-coherence light returned reflectance to the detector. The optical phenomena involved in the generation of OCT signals in tissues are likely to be reflection, scatter, and birefringence.

Scattering properties contributed to the band thickness. It was found that the more a medium scattered light, the more discrete were its deep borders, and therefore, the thinner it appeared. This may have been caused either by the loss of coherence with multiple scattering of light, 23 resulting in the detection of low interferometric signal, or by absorption and scattering of light so that less was returned to the OCT detector.
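The attenuation mechanism described above can be illustrated with a minimal sketch. Under a single-scattering picture, detected signal falls off with depth roughly as exp(-2·mu_s·z) for the round trip, so a strongly scattering layer returns a bright surface signal but shadows the tissue beneath it. The scattering coefficients below are assumed values for illustration only.

```python
import numpy as np

z = np.linspace(0.0, 0.5, 500)        # depth into the medium, mm
mu_s_weak, mu_s_strong = 2.0, 20.0    # assumed scattering coefficients, 1/mm

# Round-trip attenuation of the detected signal versus depth.
signal_weak = np.exp(-2 * mu_s_weak * z)
signal_strong = np.exp(-2 * mu_s_strong * z)
```

A strongly scattering medium therefore has a discrete, bright shallow border and little returned signal from depth, consistent with the thin appearance of such layers noted above.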

Overall, therefore, the z-axis resolution of OCT within poorly scattering tissues may be limited. In the retina, change of refractive index may be responsible for the inner aspect of the inner band, and scattering could define its outer border. On entering the OCT detector system, light that was returned from the tissue being examined was allowed to interfere with that from the reference arm.

The resultant intensity of this interference was recorded at points along an individual z-axis scan and was represented by a logarithmic pseudocolor scale. The arbitrary units of the signal intensity itself had a range of 0 to , whereas the pseudocolor scale consists of 16 colors. This degradation of signal resolution is in contrast with other imaging systems, in which attempts at improving image resolution have involved extending the gray scale.

The result of the pseudocolor scale is likely to be a grouping of a wide range of higher signal intensities into single-color bands of red or white. Significant variations in signal intensity within the inner and outer bands of retinal OCT images thus may not be displayed, with a consequent loss of spatial resolution in the z-axis.
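The dynamic-range loss is easy to make concrete: mapping a full-range signal through a 16-color pseudocolor scale collapses many distinct intensities into each band, so variation within the brightest band simply vanishes from the display. The sketch below assumes a 0-255 signal range for illustration.

```python
import numpy as np

signal = np.arange(256)                    # every possible signal intensity
color_index = (signal // 16).astype(int)   # 16-color pseudocolor mapping

# All intensities 240-255 collapse into the single top (white) band, so
# differences among them cannot be displayed.
top_band = signal[color_index == 15]
```

A gray-scale display with more levels would preserve these within-band variations, which is the contrast with other imaging systems noted above.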

When imaging retina, the effect may be to compress a layered system 13 into an image with just three or four bands. Postprocessing of OCT images may also limit resolution. The spatial resolution of OCT images in the x-y plane is dependent on both the optical limitations of the ocular media and the design constraints of the instrument.

These limitations are apparent when attempting to image over a wide x-y field. Each OCT image is constructed from equally spaced individual z-axis scans, the x-y dimensions of each being constant, irrespective of the area of retina scanned.

It is therefore evident that, in all three dimensions, the pixelation of images gives rise to both a mismatch between theoretical and practical resolution and indistinct borders on images of tissues. The appearance of a recorded image is also dependent on the preference of individual operators. Although the focusing of an image is readily and accurately repeatable and the power setting can be standardized, polarization is a variable that cannot be quantified on current scanners, and settings vary among observers.

In this study, by examining raw images, we have shown polarization to have a significant effect on the measurement of some parameters, both among observers and among observations by a single operator. The variances of inner and outer band thickness measurements were least for images that had undergone gaussian smoothing. For measurements of the total retinal thickness, however, there was no significant difference among raw, normalized, median-smoothed, and gaussian-smoothed images.

Given the difficulties in resolving the various components of the OCT image, it is surprising that only one other study has been undertaken to correlate OCT images with histology. Toth et al.

They reported that the outer aspect of the inner band was coincident with the outer aspect of the RNFL. In all the high-signal areas identified by Toth et al. Although the previous study did not state that image postprocessing programs had been used, the images would suggest that they had. Although Toth et al. In particular, the persistence of an inner band after the deliberate destruction of inner retinal layers contradicts the notion of tissue-specific signal.

In our study, no image manipulation was required to generate matches in retinal thickness, because they could be directly correlated with ablated samples.

Direct evidence was determined, locating the inner border of the outer band at the level of the RPE. Bone spicules are formed in RP by the migration and aggregation of pigmented RPE cells surrounding retinal capillaries.

Although their precise location cannot be determined on biomicroscopy for direct comparison with OCT, their abnormal retinal distribution is well established.

In contrast to this situation of high melanin concentration, the RPE cells at the pale rim of a retinal laser lesion differ from RPE cells in normal nearby retina, in that they have little or no melanin within them. The corresponding loss of signal intensity in the outer band therefore demonstrates that it was not RPE cells that were responsible for the generation of signal but their subcellular components such as melanin and lipofuscin. This was confirmed in our in vitro study by the observation that the location of the inner border of the outer band remained constant and close to the RPE, even with progressive ablation.

Another significant optical effect of melanin seen in these studies was an attenuation of signal from deeper tissues. The most notable examples were seen with bone spicules when the signal from deeper tissues was almost absent. The candidate optical phenomena responsible for the duality of high signal generation and downstream signal attenuation are absorption, reflection, and scatter.

The first is the deviation of light away from deeper tissues so that less is incident on them. Second, any light that reaches deeper tissues and is returned toward the detector may be further scattered by the melanin and therefore deviated away from the detector.

The clinical significance of this phenomenon is that diseases affecting tissues deep to the RPE may not be directly demonstrable with OCT. Changes relating to choroidal neovascularization have been described elsewhere and are limited to a description of the thickness and irregularity of the outer band. The influence of tissues on the signal from deeper tissues is not exclusive to melanin and was seen with the progressive ablation of tapetal retina. In this case, the thickness of the outer band increased linearly with progressive ablation of overlying retinal tissue, presumably because of a loss of attenuation from those tissues.

That this was not significant with nontapetal or human retina until after ablation of the RPE may be accounted for by the attenuation effect of their RPE melanin, in contrast with the nonpigmented RPE of tapetal retina.

In contrast with the discreteness of the inner border of the inner band and both borders of the outer band, the deep border of the inner band was more diffuse and less easy to identify.

One explanation of this phenomenon is that there may be less difference in the optical properties of adjacent layers of tissue. Alternatively, this may relate to the presence of significant noise or a trailing edge effect of a high signal, giving a greater apparent thickness to each band. This would also have the effect of masking narrow intervening bands of low signal.

Indeed, the band of low signal corresponding to the GCL seen in Toth et al. This phenomenon may also have accounted for some of the findings of Schuman et al. They found that this band was thickest in the superior quadrant and thinnest nasally. This differs markedly from the findings of Varma et al. This difference may have been caused either by an overestimate of the thickness of the RNFL superiorly or by an underestimate of its thickness temporally in the OCT study.

It is known that the GCL in all quadrants around the disc is one nucleus thick, except in the temporal quadrant, where it is three or more nuclei thick. The inner band temporal to the disc may, however, have approximated to the true RNFL thickness, because it may have been resolved from the high-signal IPL band by virtue of a sufficiently thick intervening low-signal GCL band. It has already been demonstrated that total retinal thickness may be reduced in glaucoma 31 and this may be seen on OCT.

In addition, the innermost band may also change in thickness with a pattern dependent on the differential rate of loss, if any, of RNFL and GCL thickness. If the RNFL is lost faster, a thinning of the innermost band may be seen initially. Schuman et al. In our study, the most significant factor affecting reproducibility was the variation in polarization settings. This could also have been the reason for the findings of Schuman et al.

In the present study, the change in the thickness of the inner band with progressive ablation had two phases. The first was linear, where the thickness of the inner band was reduced by the same amount as the ablation step height.

After this, the inner band maintained an almost constant thickness and migrated downward into the retina until it became indistinguishable from the outer band. The first phase was consistent with direct relation of OCT signal to specific tissues, because removal of tissue resulted in removal of high signal. The second phase, however, was not consistent with a tissue-specific origin of OCT signal. It may be explained by the abrupt change in refractive index and scattering properties at the primary optical interface of the tissue in the path of the OCT beam.


