CIRP Annals - Manufacturing Technology 69 (2020) 493 496
Extended sub-surface imaging in industrial OCT using 'non-diffracting' Bessel beams
Haydn Martin*, Prashant Kumar, Andrew Henning, Xiangqian Jiang (1)
Future Metrology Hub, Centre of Precision Technologies, University of Huddersfield, UK
ARTICLE INFO
Article history: Available online 17 May 2020
Keywords: Measuring instrument; Optical; Optical coherence tomography
ABSTRACT
Optical coherence tomography (OCT) is an imaging technique which can provide sub-surface evaluation of defects in optically compliant components such as those manufactured by polymeric selective laser sintering. In OCT systems, achieving lateral imaging resolutions of <10 µm means that full-depth imaging requires multiple scans due to the limited depth of focus (DOF). We present a study on the application of 'non-diffracting' Bessel beams to extend system DOF and enable deeper imaging with a single scan. Such capability expands the potential for OCT as a rapid tool for sub-surface assessment, either in-line or in-process, by greatly reducing acquisition times. © 2020 The Authors. Published by Elsevier Ltd on behalf of CIRP. This is an open access article under the CC BY license. (http://creativecommons.org/licenses/by/4.0/)
1. Introduction

Optical coherence tomography (OCT) is a non-invasive imaging technique which utilises broadband (low coherence) light to capture sub-surface information from within scattering media. Material and structural changes create refractive index variations throughout the media which lead to the scattering of light. Interferometric analysis of scattered light returning to the system allows the axial position (depth) of a given scattering point to be determined. A 3D volumetric image of sub-surface structures can be built up by lateral scanning of either the sample or, more commonly, the imaging optics.

Since OCT was first described in the early 1990s [1], substantial effort has been made to develop the technique into a tool for biological imaging and healthcare diagnostics. This has been made possible by the introduction of techniques to increase acquisition rates, such as rapidly scanning optics and light sources, high-speed electronics and frequency domain analysis, which have enabled the demonstration of volumetric imaging at video rates.

Although the primary driver for the development of OCT technology has been biomedical imaging, there has also been continuous interest in the measurement of non-biological samples. This is especially true in areas where other commonly applied non-destructive evaluation techniques exhibit limitations. OCT axial (depth) and lateral resolutions can approach 1 µm and <10 µm respectively in ideal conditions, an order of magnitude improvement over high-frequency ultrasound imaging techniques, though this is generally obtained at the expense of a corresponding reduction in penetration depth. X-ray computed tomography (XCT) is becoming more prevalent but it is slow, expensive and lacks the
* Corresponding author. E-mail address: h.p.martin@hud.ac.uk (H. Martin).
sensitivity required to discriminate between materials in some circumstances, e.g. polymer fibres and resins in some composites. Duncan et al. demonstrated the use of OCT to locate internal defects in lead zirconate titanate ceramic, single crystal silicon carbide and Teflon coated wire [2]. Dunkers et al. used OCT to analyse fibre architecture, voids, cracking and debonding in glass reinforced polymer composites [3]. Wiesauer et al. introduced polarisation sensitive OCT to extend defect detection to include strain fields in polymer composites [4]. Yao employed OCT for the metrology of multilayer polymer film stacks, which informed improvements to layer thickness consistency in the manufacturing process [5]. A 2007 review of non-medical applications for OCT by Stifter [6] outlines other manufacturing-relevant studies of polymer foams, paper, injection moulded parts and microfluidics.

More recently, OCT has been applied to defect detection in additive manufacturing (AM). Guan et al. showed the detection of sub-surface defects as well as the ability to discriminate between un-sintered and sintered powder in selective laser sintering (SLS) of monolithic polymer parts [7]. In microelectronics, OCT has been used to evaluate PCB protective coating thickness and to assess through-silicon vias, which are critical for the development of 3D integration in semiconductor manufacture [8]. Currently, in-situ and in-process measurement applications are being reported in laser materials processing for seam tracking and weld depth evaluation [9].

A key requirement for imaging depth in OCT is that the interrogated material is not strongly absorbing in the wavelength range of the illuminating light. In general, longer wavelengths penetrate better at the expense of the achievable lateral imaging resolution. For this reason commercial OCT systems are often designed to operate at wavelengths of 800–900 nm; 1300 nm systems are used in more scattering media, e.g. biological tissue, while for the imaging of ceramics 2–4 µm is better suited [10].
https://doi.org/10.1016/j.cirp.2020.04.017 0007-8506/© 2020 The Authors. Published by Elsevier Ltd on behalf of CIRP. This is an open access article under the CC BY license. (http://creativecommons.org/licenses/by/4.0/)
While wavelength selection is crucial for imaging performance, there is another fundamental limitation in OCT systems, associated with the restricted depth of focus (DOF) achieved in all optical systems utilising conventional imaging objectives. Lateral resolution is inversely proportional to the numerical aperture (NA) of the imaging system, while DOF is inversely proportional to its square. As such, DOF very quickly becomes a limiting factor for lateral resolutions below 10 µm. This restricts OCT systems to low NAs, which in turn means small-scale defects and features cannot be resolved effectively. To overcome this issue, new approaches that obtain higher spatial resolution over a longer depth of focus are required. Techniques involving scanning the sample axially or dynamic focussing optics may overcome this problem; however, such methods inevitably reduce the acquisition rate and negate many of the beneficial attributes of OCT for industrial measurement, i.e. simplicity and speed, while also limiting the potential for on-machine measurement applications, which are becoming increasingly important and prevalent in manufacturing [11,12].

In this paper we report on the potential for non-diffracting beams to be applied to industrial OCT measurements in place of the Gaussian beam generated by conventional optics, in order to extend the DOF while maintaining high lateral imaging resolution. A comparative analysis between OCT imaging with Bessel and Gaussian beam illumination using custom-created optical artefacts is described. We quantify both the range and the lateral imaging resolution of the system in both cases. Finally, we show initial results from the assessment of a monolithic polymer AM component produced using stereolithography (SLA).
2. Theory

Durnin first proposed an idealised non-diffracting Bessel beam in 1987 [13] and experimentally verified its existence shortly after. The term 'non-diffracting' relates to the fact that the beam radius is propagation invariant in the ideal case. The term Bessel beam derives from the radial intensity profile, which manifests as a Bessel function of the first kind. While the true non-diffracting characteristic can only be achieved in the idealised case of a beam with infinite extent, practically generated beams can exhibit a core which maintains its radius over a much greater propagation distance than the equivalent Gaussian beam waist [14]. Bessel beams can be generated in a number of ways; the most optically efficient method is to use an axicon (conical lens) [15]. When an axicon is illuminated by a collimated beam of diameter D, the intensity distribution in the region in front of it is given by,

I(r, z) = E²(R_z) R_z [2πk sin(β) / cos²(β)] J_0²(kr sin(β))  (1)
Here r and z are the radial and axial coordinates in the region beyond the axicon, and the optical axis is aligned with z. The origin of the coordinate system corresponds to the apex of the axicon. β is the angle that a ray propagating parallel to the optical axis makes with the optical axis after refraction through the axicon (see Fig. 1a), k is the wavenumber of the light and J_0 is the zeroth-order Bessel function of the first kind. The value of I(r, z) is dependent on the energy of
Fig. 1. Focussing of a Gaussian beam by an axicon and a conventional spherical lens: (a) Bessel beam generated with an axicon lens; (b) Gaussian beam generated using a conventional spherical lens.
the incident beam E²(R_z) at a radius R_z in the space before the axicon, where,

R_z = z tan(β) / (1 − tan(α) tan(β))  (2)
Here, α is the angle of the surface of the axicon shown in Fig. 1. With the axicon (and illuminating beam) only having a finite radius, the Bessel beam will only be generated for a finite length beyond the apex of the lens (see Fig. 1); this is the depth of focus of the axicon lens and is given approximately by,

DOF_B ≈ D (cot(β) − tan(α)) / 2  (3)
D, the diameter of the incident beam, is less than or equal to the lens diameter. From Eq. (1), it can be seen that, for any value of z, the central peak radius corresponds to the value r_0 at which the first zero of the Bessel function occurs,

J_0(k r_0 sin(β)) = 0  (5)

In terms of β and λ (the central wavelength of the incident light),

r_0,axicon ≈ 2.4048 λ / (2π sin(β))  (6)
A reduction in the radius of the central intensity peak can be achieved by reducing the central wavelength or by increasing β, either by changing the axicon angle or the refractive index of the material it is made from.

In a conventional OCT system, the objective lens focusses a collimated Gaussian beam. This alters the beam parameters, e.g. waist radius and focal position, but the beam remains Gaussian in nature, as shown in Fig. 1(b). As such, the system's DOF is defined by twice the Rayleigh range. For a collimated beam of wavelength λ and beam diameter D, incident on a lens of focal length f, the DOF is given by,

DOF_G = 8λf² / (πD²) ∝ 1/NA²  (7)

with the beam waist radius at the focus being,

r_0,G = 4λf / (πD) ∝ 1/NA  (8)
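The constant 2.4048 in Eq. (6) is the first zero of J_0. As a standalone sanity check (not part of the paper), it can be recovered from the power series of J_0 with a simple bisection:

```python
import math

def bessel_j0(x: float, terms: int = 30) -> float:
    """Evaluate J0(x) from its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
    return total

def first_zero_j0(lo: float = 2.0, hi: float = 3.0, tol: float = 1e-9) -> float:
    """Bisect for the first root of J0, which is known to lie between 2 and 3."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(first_zero_j0(), 4))  # 2.4048, the constant appearing in Eq. (6)
```

The series converges rapidly for arguments below 3, so thirty terms are ample here.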
It is clear from Eqs. (7) and (8) that the system resolution, as defined by the beam waist radius, is improved by using a higher numerical aperture (NA), but this rapidly reduces the DOF.

3. Materials and Methods

Fig. 2 shows the OCT apparatus utilised for this study, which is illuminated with a super-luminescent diode (SLD) centred at 830 nm with a bandwidth of 25 nm (full-width half-maximum). A fibre collimator (FC) generates a collimated Gaussian beam having a 1/e²
Fig. 2. Schematic representation of the OCT apparatus.
defined beam width of ≈3.9 mm, sized to fill the back aperture of the objective lenses used. The collimated beam is incident on a beam splitter, dividing the light into a reference and a measurement beam. The reference arm contains a neutral density filter (ND) in order to more closely match the intensity of the light returning from the measurement arm. The light in the measurement arm is incident on either a doublet (DL) or an axicon lens (AL), generating a Gaussian beam or a Bessel beam respectively.

For this comparative study, we have chosen a conventional doublet lens and an axicon which produce a similar beam waist radius and core diameter respectively. The axicon (Asphericon X25-200FPX-B-U) and doublet lens (Thorlabs AC080-16-B-ML) used both yield equivalent lateral imaging resolution at focus. Inserting the relevant parameters (D = 3.9 mm, α = 20°, λ = 830 nm) into Eqs. (3) and (6) implies the Bessel beam will have a core diameter of 3.734 µm and a DOF of 10.61 mm in air. For the conventional OCT configuration with a doublet lens objective, applying the parameters (D = 3.9 mm, λ = 830 nm, f = 16 mm) to Eqs. (7) and (8) indicates a beam waist radius of 4.32 µm and a DOF of 35.39 µm.

Samples were mounted to allow manual translation axially (along the z-axis, see Fig. 2) using a micrometer stage. A precision PZT-actuated translation stage moved the samples during the measurement process to create B-scans (transverse depth profiles along the x-axis). The back-scattered light from the sample was collected and interfered with the light from the reference arm to produce spectral interferograms, recorded via a spectrometer (Solar Laser System S150) incorporating a 3648-line CCD array with a spectral resolution of ≈0.06 nm over the working range of 798–860 nm. The process to generate an A-scan (a single full-depth scan) from each spectral interferogram is well established in the literature and for reasons of brevity is not described here [16].
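The quoted beam parameters can be reproduced from Eqs. (3), (6), (7) and (8). The paper does not state the axicon material, so the sketch below assumes fused silica (n ≈ 1.4533 at 830 nm), which recovers the quoted figures to within rounding:

```python
import math

# Parameters quoted in the paper
wavelength = 830e-9          # m, SLD centre wavelength
D = 3.9e-3                   # m, 1/e^2 collimated beam diameter
alpha = math.radians(20.0)   # axicon surface angle
f = 16e-3                    # m, doublet focal length

# Deflection angle beta via Snell's law at the conical surface.
# ASSUMPTION: fused-silica axicon, n ~ 1.4533 at 830 nm (not stated in the paper).
n = 1.4533
beta = math.asin(n * math.sin(alpha)) - alpha

# Bessel beam: core diameter from Eq. (6), DOF from Eq. (3)
core_diameter = 2 * 2.4048 * wavelength / (2 * math.pi * math.sin(beta))
dof_bessel = D * (1 / math.tan(beta) - math.tan(alpha)) / 2

# Gaussian beam: waist radius from Eq. (8), DOF from Eq. (7)
waist_radius = 4 * wavelength * f / (math.pi * D)
dof_gauss = 8 * wavelength * f ** 2 / (math.pi * D ** 2)

print(f"Bessel core diameter:  {core_diameter * 1e6:.2f} um")  # ~3.73 um
print(f"Bessel DOF:            {dof_bessel * 1e3:.2f} mm")     # ~10.6 mm
print(f"Gaussian waist radius: {waist_radius * 1e6:.2f} um")   # ~4.34 um
print(f"Gaussian DOF:          {dof_gauss * 1e6:.1f} um")      # ~35.6 um
```

The ~300x ratio between the two DOF values is the quantitative basis for the extended-depth imaging claim in the rest of the paper.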
The resolution of an optical system is primarily determined by the point spread function (PSF), which may be determined from the imaging response of the system to a point scatterer. Practically, a good approximation to this result can be obtained by measuring small but finite-sized scatterers, which give a result that is close to the impulse response of the system [17]. In order to assess the resolving power of the two OCT systems, an optical phantom containing randomly distributed nanoparticles embedded in an epoxy matrix was constructed as per the method described in [18]. Such a sample provides a stable and reliable means of characterising imaging performance. The phantom in this study was made from a commercially available two-part epoxy resin (EL2 Laminating + AT30 Epoxy Hardener Fast) doped with spheroidal iron oxide particles (Polysciences, Inc., USA; nominal diameter 400 nm within a quoted tolerance range of 300–800 nm). Iron oxide is highly reflective to near-infrared light, and the nanoparticles act effectively as point scatterers. The nanoparticles
were evenly distributed in the resin by ultrasonic mixing after the liquid had been degassed to remove any air bubbles. The concentration of particles within the matrix was kept low enough to avoid both particle clumping as well as the generation of speckle which would have a deleterious impact on imaging performance.
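The following section reports FWHM values extracted from transverse intensity cuts through imaged scatterers. The paper does not state its exact extraction procedure; a minimal hypothetical version — linear interpolation of the two half-maximum crossings on a sampled profile — can be sketched on synthetic Gaussian data:

```python
import math

def fwhm(xs, ys):
    """Estimate the FWHM of a single-peaked profile by linearly
    interpolating the two half-maximum crossings."""
    half = max(ys) / 2
    crossings = []
    for i in range(len(ys) - 1):
        y0, y1 = ys[i], ys[i + 1]
        if (y0 - half) * (y1 - half) < 0:  # sign change -> crossing between samples
            t = (half - y0) / (y1 - y0)
            crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))
    return crossings[-1] - crossings[0]

# Synthetic transverse cut: Gaussian with a 3.8 um FWHM (sigma = FWHM / 2.3548)
sigma = 3.8 / (2 * math.sqrt(2 * math.log(2)))
xs = [i * 0.1 - 15.0 for i in range(301)]             # um, 0.1 um sampling
ys = [math.exp(-x ** 2 / (2 * sigma ** 2)) for x in xs]

print(round(fwhm(xs, ys), 2))  # recovers the 3.8 um width of the test profile
```

This is only an illustration of the measurement principle; fitting a Gaussian model to the whole profile would be the more robust choice on noisy data.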
4. Results

The optical phantom was first imaged using the OCT system utilising a doublet lens, with Fig. 3 showing the evolution of the response of a single scatterer from the point of optimal focus through to 90 µm of de-focus. This was obtained by moving the phantom axially away from the objective, so the system focus moves towards the top surface. There is substantial blurring of the imaged scatterer as the system is defocused; this is entirely anticipated given the ≈35 µm DOF calculated for the system using a doublet, and illustrates the challenge posed by the limited DOF in high-resolution OCT systems. The intensity responses from the scatterers along the transverse axis were found to be Gaussian in nature. At focus, the FWHM of the intensity response of the scatterer within the dielectric medium was found to be 3.8 µm, which correlates well with the anticipated system resolution produced by a beam waist calculated to be 4.32 µm in air.

Next, the OCT imaging performance was analysed using the axicon as the objective lens, thereby generating a Bessel beam. Fig. 4 shows a set of cropped B-scans where the optical phantom was moved away (axially) from the axicon objective over a total distance of 2 mm. It is observed that scatterers appear to be well resolved, with no noticeable blurring throughout the full extent of the translation.
Fig. 4. Cropped B-scans of the optical phantom using the axicon lens, identifying a single scatterer (located just above the sloped line). The sample position is moved axially away from the objective in steps of ≈250 µm from left to right, representing a total translation of 2 mm. The top surface appears as a horizontal line in each scan. Several other scatterers are also apparent deeper into the sample.
Fig. 3. Clockwise from upper-left: B-scans using the doublet lens showing an imaged scatterer (circled) in the optical phantom at four axial (depth) positions: best focus (0 µm), 30 µm, 60 µm and 90 µm. The recorded FWHM of the intensity responses from the scatterer are 3.8 µm, 6.7 µm, 9.5 µm and 14.8 µm respectively. The top surface of the phantom is apparent as a horizontal line.
Note, we do not use the term 'de-focus' here because the core of the Bessel beam which delivers energy is, to some approximation, 'non-diffracting'; thus the core radius is maintained throughout the analysed imaging depth of 3 mm. Fig. 5 shows the analysed transverse intensity profiles of the scatterer identified in Fig. 4 after normalisation. The measured FWHM of the intensity signature remains consistent at ≈2.2 µm throughout the full 2 mm translation.

Fig. 5. Overlaid transverse intensity profiles of the scatterer identified in Fig. 4 at each captured axial sample position. The intensity data shown is obtained directly from the acquired B-scan data and then normalised to allow better comparison of the transverse characteristics.

The preceding results have identified that an axicon-based OCT system can resolve point scatterers consistently over an axial (depth) range much larger than is possible using a conventional objective lens generating a Gaussian beam. In order to establish the potential of Bessel beam OCT imaging in realistic measurement scenarios, a preliminary study was carried out using an additively manufactured artefact created from a translucent resin using stereolithography (SLA). Fig. 6(a) shows the SLA artefact analysed, which has hollow channels of ≈100 µm depth located at 200 µm intervals below the surface. OCT can resolve the material-air boundary of the channels by detecting the backscattered light resulting from the refractive index change at these locations.

Fig. 6. (a) AM artefact with sub-surface channels below the upper facet and XCT image of the channels with scale bar (inset). B-scans of two channels using: (b) doublet lens objective with the focal point at the artefact top surface; (c) axicon with the artefact top surface positioned 5 mm from the apex; (d) axicon with the artefact top surface at 1 mm from the apex, representing a 4 mm axial (depth) translation.

The image obtained using the conventional OCT setup (doublet) shows some response to the boundaries of each of the two buried channels; however, these are not well resolved, and the upper channel in particular is not clear. The OCT images obtained using an axicon show a much sharper definition of the channel boundaries for both channels. Crucially, this resolving ability is maintained even when the sample position is translated through 4 mm in the axial (depth) direction, albeit with some increase in background noise. This result demonstrates the potential for extended DOF imaging using Bessel beams in industrial OCT measurements.

5. Conclusions and further work

In this work the potential for using Bessel beams, generated with an axicon lens, to increase DOF in OCT for industrial measurements has been established. The 'non-diffracting' nature of Bessel beams enables an extension of the effective DOF far beyond that possible using a conventional objective lens, which generates a Gaussian beam. An initial measurement of an AM sample shows much promise for the application of Bessel beam OCT systems to industrial measurements. However, several aspects still require further investigation: (1) understanding of the nature of the scattering generated by Bessel beam illumination needs development through numerical modelling; (2) the improved performance in terms of lateral resolution needs to be quantified through a detailed study of an artefact containing suitable features and/or defects; (3) evaluation of Bessel beam OCT in a range of measurement scenarios, including some of those outlined in the introduction, is necessary to demonstrate its benefits, especially for on-line measurement scenarios.

Acknowledgements

The authors gratefully acknowledge the UK's Engineering and Physical Sciences Research Council (EPSRC) funding of the Future Advanced Metrology Hub (EP/P006930/1) and the Royal Academy of Engineering Chairship (RCSRF1516_2_7). We would also like to acknowledge the help of Marcin Oraweic, who aided in the preparation of the optical phantom.

References
[1] Huang D, et al (1991) Optical Coherence Tomography. Science 254(5035):1178–1181.
[2] Duncan MD, et al (1998) Subsurface Defect Detection in Materials Using Optical Coherence Tomography. Optics Express 2(13):540–545.
[3] Dunkers JP, et al (2001) The Application of Optical Coherence Tomography to Problems in Polymer Matrix Composites. Optics and Lasers in Engineering 35(3):135–147.
[4] Wiesauer K, et al (2007) Investigation of Glass-Fibre Reinforced Polymers by Polarisation-Sensitive, Ultra-High Resolution Optical Coherence Tomography: Internal Structures, Defects and Stress. Composites Science and Technology 67(15–16):3051–3058.
[5] Yao J, et al (2013) Nondestructive Metrology by Optical Coherence Tomography Empowering Manufacturing Iterations of Layered Polymeric Optical Materials. Optical Engineering 52(11):112111.
[6] Stifter D, et al (2007) Beyond Biomedicine: A Review of Alternative Applications and Developments for Optical Coherence Tomography. Applied Physics B 88(3):337–357.
[7] Guan G, et al (2015) Evaluation of Selective Laser Sintering Processes by Optical Coherence Tomography. Materials & Design 88:837–846.
[8] Iff WA, et al (2019) Electromagnetic Analysis for Optical Coherence Tomography Based Through Silicon Vias Metrology. Applied Optics 58(27):7472–7488.
[9] Mittelstädt C, et al (2019) Novel Approach for Weld Depth Determination Using Optical Coherence Tomography Measurement in Laser Deep Penetration Welding of Aluminium and Steel. Journal of Laser Applications 31(2):022007.
[10] Gao W, Haitjema H, Fang FZ, Leach RK, Cheung CF, Savio E, Linares JM (2019) On-Machine and In-Process Surface Metrology for Precision Manufacturing. CIRP Annals - Manufacturing Technology 68(2):843–866.
[11] Jiang X, Whitehouse D (2006) Miniaturized Optical Measurement Methods for Surface Nanometrology. CIRP Annals - Manufacturing Technology 55(1):577–580.
[12] Su R, et al (2014) Perspectives of Mid-Infrared Optical Coherence Tomography for Inspection and Micrometrology of Industrial Ceramics. Optics Express 22(13):15804–15819.
[13] Durnin J, et al (1987) Exact Solutions for Nondiffracting Beams. I. The Scalar Theory. Journal of the Optical Society of America A 4(4):651–654.
[14] Durnin J, et al (1988) Comparison of Bessel and Gaussian Beams. Optics Letters 13(2):79–80.
[15] Ding Z, et al (2002) High-Resolution Optical Coherence Tomography Over a Large Depth Range with an Axicon Lens. Optics Letters 27(4):243–245.
[16] Yaqoob Z, et al (2005) Spectral Domain Optical Coherence Tomography: A Better OCT Imaging Strategy. BioTechniques 39(6S):S6–S13.
[17] Woolliams, et al (2011) The Modulation Transfer Function of an Optical Coherence Tomography Imaging System in Turbid Media. Physics in Medicine & Biology 56(9):2855–2871.
[18] Woolliams, et al (2010) Spatially Deconvolved Optical Coherence Tomography. Applied Optics 49(11):2014–2021.
Research Article
Vol. 27, No. 23 / 11 November 2019 / Optics Express 33999
Improved description of the signal formation in grating-generated optical coherence tomography
Dawei Tang,1 A. J. Henning,1,* A. P. Gribble,2 Feng Gao,1 and Xiangqian Jiang1
1 EPSRC Future Metrology Hub, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH, UK
2 Renishaw PLC, New Mills, Wotton-under-Edge, GL12 8JR, UK
* a.henning@hud.ac.uk
Abstract: Several optical instruments have been presented over recent years that use a tilted diffraction grating in the reference arm, describing its effect as introducing a continuous optical delay in the direction of the grating dispersion. Having examined these types of system carefully, we believe this description does not fully capture the manner in which the signal is formed in these instruments, and present here a simple model that clearly demonstrates the essential elements behind the way such systems operate. This analysis not only illustrates why signals of the form seen are generated, but also shows how great care needs to be taken with systems such as these in the interpretation of the results. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
Optical coherence tomography is a widely used low coherence interferometric technique [1,2] allowing for measurement within a scattering medium, making it highly useful for a variety of applications including imaging of the eye [3,4], endoscopy [5], blood flow measurements [6], dermatology [7], the characterisation of hydrogels [8] and many others. There are many variations on the experimental setup and analysis methods, but the standard form of the instrument is a variation of a Michelson interferometer combined with a broadband light source.

In the following we examine the use of a tilted diffraction grating in the reference arm of such a system. The effect of such an element was described by Zeylikovich et al. [9] as introducing a continuous optical delay in the direction of the grating dispersion, following on from work by some of the same authors involving ultrashort pulses [10]. OCT systems utilizing a tilted diffraction grating in the reference arm were subsequently used in [11–14]. Having constructed a system of this form and analysed the signal formation, we do not believe that this description fully captures the effect of the grating, and so present a more detailed analysis of these types of system. Being not much more complex than a Michelson interferometer, these systems are amenable to quite a simple analysis, with the wavefront taking a simple form throughout; however, the results are quite illuminating.

In order to highlight the fundamentals behind the signal generation we have ignored effects such as chromatic aberrations in the optical elements, imperfect collimation of the beams and misalignment of the elements. While this leads to poorer correspondence between the experimental and predicted results, we believe that it will provide the reader with a clearer understanding of the way the system works.
The structure of the paper is as follows: in section 2 a model of the system is constructed, demonstrating that the signal generated on the detector is highly dependent on the angle at which the incident light is diffracted in the reference arm. Several numerical approximations are then made to provide a simplified expression for the signal. In section 3 the suitability of these
https://doi.org/10.1364/OE.27.033999 Received 7 Aug 2019; revised 8 Oct 2019; accepted 10 Oct 2019; published 6 Nov 2019
approximations for the systems found in [11–14] is verified by comparing numerical evaluations of the equations in section 2 before and after the approximations are made. In section 4 the results from the model and experimental results are compared, demonstrating that the envelope on the signal moves in the same manner in both cases.

2. Signal generation with broadband sources
The OCT systems used in [11–14] are of the form illustrated in Fig. 1, albeit in the references there is a cylindrical lens in the measurement arm, and in [13,14] the lens in front of the detector is replaced by an optical zoom lens. In all these systems a tilted diffraction grating is used in the reference arm to return the incident light, a method taken from [9], where it is described as having the effect of introducing a continuous optical delay in the direction of the grating dispersion. We do not feel this description accurately captures the signal formation, especially in the cases considered in [11–14] where broadband light sources are used, and so in the following we develop a more complete description of these systems. We will demonstrate that it is the diffraction of the incident light at a wavelength-dependent angle, combined with the use of a lens in front of the detector, that leads to the signal that is generated. Being only slightly more complex than a Michelson interferometer, this system is open to a particularly simple analysis which clearly illustrates the essential elements of the signal formation.
Fig. 1. A schematic of the instrument showing the path light takes when propagating (a) along the reference arm (b) along the measurement arm. The two focal points on the focal plane of the lens can be considered as sources of spherical wavefronts incident on the screen.
Figure 1(a) highlights the path the light takes along the reference arm, while Fig. 1(b) highlights the path along the measurement arm. For each individual wavelength, we will consider the intensity seen on the detector to be the coherent superposition of the electric fields due to the light propagating in each arm. The total signal due to the broadband light source is then the incoherent superposition of the intensities obtained at each individual wavelength. Looking at Fig. 1(a), an interesting comparison can immediately be made with a Czerny-Turner spectrometer [15]. Such a spectrometer would be created if the distance d4 were set to zero, with the source spectrum then being separated out into its different wavelength components across the detector. The instruments in [11–14] use broadband light sources with a −3 dB spectral width of around 25 to 30 nm. The light is collimated before being divided into a reference beam and a measurement beam as illustrated. The reference beam is incident on a reflective diffraction grating before being incident on a lens and then a detector that is placed a significant distance beyond the focal plane of the lens.
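A toy numerical illustration of this incoherent superposition (not from the paper) shows why a source of this bandwidth localises interference to a few tens of microns of path difference — the principle underlying low-coherence interferometry:

```python
import math

# Gaussian source spectrum: centre 830 nm, FWHM 25 nm (representative of the
# -3 dB widths quoted in the text; exact shape is an assumption)
lam0, dlam = 830e-9, 25e-9
sigma = dlam / (2 * math.sqrt(2 * math.log(2)))
lams = [lam0 + (i - 300) * 1e-10 for i in range(601)]   # sample +/- 30 nm
weights = [math.exp(-(l - lam0) ** 2 / (2 * sigma ** 2)) for l in lams]

def fringe_envelope(delta: float) -> float:
    """|sum_lambda S(lambda) exp(i * 2*pi*delta/lambda)|, normalised: the
    fringe visibility at path-length difference delta between the two arms."""
    re = sum(w * math.cos(2 * math.pi * delta / l) for w, l in zip(weights, lams))
    im = sum(w * math.sin(2 * math.pi * delta / l) for w, l in zip(weights, lams))
    return math.hypot(re, im) / sum(weights)

# Coherence length ~ (2 ln 2 / pi) * lam0^2 / dlam ~ 12 um for these values
print(round(fringe_envelope(0.0), 3))   # 1.0: full visibility at zero delay
print(fringe_envelope(50e-6) < 0.02)    # True: visibility collapses tens of um away
```

The narrow envelope in path difference is what gives these instruments their depth discrimination.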
In the collimated beam, the wavefront can be approximated at any point by a plane wave, with the electric field given by,

E{L} = E_0 exp(ikL) exp(−iωt) + c.c.  (1)
here L is the distance along a ray in the direction the beam is propagating and E_0 is the magnitude of the electric field. (We ignore both the reduction in magnitude that occurs at the beamsplitter and the reduction due to only one diffraction order being collected, as it is the phase of the light that we are most interested in; different amounts of light from the measurement and reference arms will just lead to a constant background being seen below the interference fringes.) k = 2π/λ is the wavenumber, ω is the angular frequency, t is time, and 'c.c.' stands for complex conjugate. The exp(−iωt) component will be omitted in the following, as is common practice, but it should be understood that this applies to all of the electric fields. The details of the scattering by the diffraction grating can be ignored; suffice it to say that the incident light takes the form of a collimated beam, as does the scattered light, however the direction of its wavevector has been changed in accordance with the grating equation,

D[sin(θ_i) + sin(θ_m)] = mλ  (2)
here D is the period of the grating, m is the diffraction order (equal to 1 in the following), θ_i is the angle that the wavevector of the incident light makes with the normal to the grating, and θ_m is the angle that the wavevector of the diffracted light makes with the grating normal, as illustrated in the inset in Fig. 2.
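The wavelength dependence of θ_m implied by Eq. (2) can be made concrete with a short numeric sketch (the grating period and incidence angle here are illustrative assumptions, not values from the paper):

```python
import math

# Illustrative parameters: a 1200 line/mm grating, fixed incidence angle,
# first diffraction order
D = 1e-3 / 1200                   # grating period, m
theta_i = math.radians(29.87)     # incidence angle (assumed)
m = 1

def diffraction_angle(lam: float) -> float:
    """Solve the grating equation D (sin(theta_i) + sin(theta_m)) = m lam
    for the diffracted angle theta_m."""
    return math.asin(m * lam / D - math.sin(theta_i))

# A 10 nm wavelength change swings the diffracted beam by roughly 0.8 deg,
# which the lens then maps to a wavelength-dependent focal-plane position
a830 = math.degrees(diffraction_angle(830e-9))
a840 = math.degrees(diffraction_angle(840e-9))
print(round(a840 - a830, 2))
```

It is this angular dispersion across the source bandwidth, rather than a simple optical delay, that the analysis below builds on.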
Fig. 2. The length of the dot-dashed line (red) is used to calculate the phase of the wave at the point on the focal plane to which the light is focused. The shape of the wavefront about this line is also shown by the lines crossing this ray (blue). This is subsequently used to calculate the electric field on the detector plane.
The grating is set in the Littrow configuration for a wavelength within the spectrum of the source light, λL. At this wavelength θi = θm = θL, which is also the angle that the normal of the grating makes with the optical axis of the system (defined by the lens in front of the detector). For other wavelengths, while θi = θL, θi ≠ θm, with the diffracted light propagating at an angle θ = θm − θL to the optical axis of the system,

θ = θm − θL = asin[(2λ − λL)m/(2D)] − θL   (3)

The lens in front of the detector will be treated as a perfect lens, converting the flat wavefront of the diffracted beam into a portion of a spherical wave, and the sine condition will be assumed to be satisfied, meaning that light propagating at an angle θ to the optical axis is focused to a point a distance h{λ} = f1 sin(θ) away from the optical axis, where f1 is the focal length of the lens, as
illustrated in Fig. 2. The focal point is assumed to lie in the (x, z) plane, as the wavevector of the scattered light after the diffraction grating also lies in this plane. Just beyond the lens the electric field is viewed as a portion of a spherical wavefront converging towards a point on the focal plane, and after the focal plane as a portion of a spherical wave diverging from the focal point until it is incident on the detector. Thus the electric field on the detector can be viewed as the superposition of two spherical waves, the relative phase of which on the focal plane corresponds to the difference in phase between light propagating via the measurement arm and the reference arm. Thus, by tracing a ray perpendicular to the wavefronts along each path up to the focal points after the lens, the relative phase of the light between the two spherical waves at these points is obtained, allowing the intensity recorded on the detector to be calculated. The justification for describing the effect of the grating in this way is given in Appendix A. In the case illustrated in Fig. 1(b), where the scattering object takes the form of a flat mirror whose normal is parallel to the wavevector of the incident light, the scattered light takes the form of a collimated beam propagating parallel to the optical axis, and is thus focused to a point that lies on the optical axis a distance of f1 from the lens. The light from the reference arm is again focused to a point a distance f1 from the lens but a distance h away from the optical axis. The interference pattern is therefore formed by the interference of two spherical waves whose centres are located on the focal plane of the lens and are separated by a distance h{λ}. The intensity measured on the detector from these two spherical waves is

I{λ} = |Eref{λ} + Emeas{λ}|²   (4)

where

Eref{λ, x, y} = [E0{λ}/((x − h)² + y² + z0²)] exp(i[k√((x − h)² + y² + z0²) + φref{λ}])   (5)

and

Emeas{λ, x, y} = [E0{λ}/(x² + y² + z0²)] exp(i[k√(x² + y² + z0²) + φmeas{λ}])   (6)

here z0 is the distance from the focal plane to the detector, and φref{λ} = kLref and φmeas{λ} = kLmeas are the phases of the wave after propagating down the reference and measurement arms respectively. It can be seen from Fig. 2 that the length of the path from the light source to the focal plane via the reference arm is

Lref = d1 + d2 + (d2 + d3)/cos(θ) + √(f1² + (a − h)²)   (7)

where a = (d2 + d3) tan(θ) − h, and via the measurement arm is

Lmeas = d1 + 2m1 + d3 + f1   (8)
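The geometry of Eqs. (3) and (7)–(8) can be sketched in a few lines of code. This is our own illustration, using the parameter values quoted in the numerical section below; the check confirms that at the Littrow wavelength the reference focus sits on the optical axis (h = 0), and that with the mirror at its initial position the two arm lengths balance.

```python
import math

# Geometry (metres) as quoted for the numerical model in Section 3
d1, d2, d3, f1 = 0.100, 0.154, 0.020, 0.100
D, lam_L, m1 = 1e-3 / 600, 625e-9, 0.154
theta_L = math.asin(lam_L / (2 * D))

def ref_arm_geometry(lam):
    """Return (h, Lref) for one wavelength: Eq. (3) with h = f1*sin(theta), then Eq. (7)."""
    theta = math.asin((2 * lam - lam_L) / (2 * D)) - theta_L
    h = f1 * math.sin(theta)
    a = (d2 + d3) * math.tan(theta) - h
    L_ref = d1 + d2 + (d2 + d3) / math.cos(theta) + math.sqrt(f1**2 + (a - h)**2)
    return h, L_ref

L_meas = d1 + 2 * m1 + d3 + f1        # Eq. (8), mirror at its initial position

h, L_ref = ref_arm_geometry(lam_L)
assert abs(h) < 1e-15                 # Littrow wavelength focuses on the optical axis
assert abs(L_ref - L_meas) < 1e-9     # the two arms balance when dm = 0
```

Away from the Littrow wavelength h grows roughly linearly with λ, which is what produces the wavelength-dependent fringe period derived next.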
The formation of interference fringes from two spherical waves in this manner is very similar to Young's interference experiment when using point sources instead of slits. The mathematics of this is laid out clearly in chapter 7 of [16], and after applying certain approximations that are detailed in Appendix B, most significantly that the distance to the detector z0 is much larger than x, y and h, the intensity on the detector can be approximated by

I{x} = 2A{λ}²[1 + cos((x − h/2)hk/z0 + φmeas − φref)]   (9)

Here A{λ} = E0{λ}/(x² + y² + z0²). It can be seen from Eq. (9) that under these approximations the signal on the detector for each wavelength takes the form of a sinusoid in the x direction and
the only variation in the signal in the y direction is due to the slight variation of A{λ}. As both h and k are wavelength dependent, so is the period of the signal generated. Changing φref and φmeas does not change the period of the sinusoid, but it does change its phase. From Eq. (9) it can be seen that the relationship between the wavelength of the signal on the detector, λdetector, and the wavelength of the light creating it is

λdetector = (z0/h)λ   (10)

The total signal when a broadband light source is used will be the incoherent summation of the sinusoids produced at each wavelength. If (φmeas − φref) modulo 2π = 0, then the two sources have the same phase, and a maximum will occur on the screen at the midpoint between the two sources, with the path to this point from each source being equal, as illustrated in Fig. 3. From Eq. (9) it can be seen that at the point x = 0 the phase of each sinusoid is given by −h²k/(2z0), and that when (φmeas − φref) modulo 2π ≠ 0 the phase in the cosine term is modified by this additional amount. If the distance m1 is changed by an amount ∆m then φmeas is changed by an amount 2k∆m for each wavelength. In the next section we will examine numerically the form of the signal that is generated, and the accuracy of the solution once the approximations have been applied, showing that there is a strong peak in the signal that shifts as the location of the object is changed.
Fig. 3. This figure illustrates that, when there is no phase difference between the sources, the central point between the sources where a maximum will lie is wavelength dependent due to the wavelength dependence of the location of the second source (the blue or red dot), while the first source remains static (black dot).
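Equation (10) can be evaluated directly. The sketch below is our own helper, with z0 taken as d4 = 100 mm from the parameter set used in the next section; it shows that a wavelength a few nanometres from Littrow produces fringes with a period of a few hundred microns, comfortably resolved by camera pixels a few microns across.

```python
import math

f1, D, lam_L, z0 = 0.100, 1e-3 / 600, 625e-9, 0.100
theta_L = math.asin(lam_L / (2 * D))

def fringe_period(lam):
    """Eq. (10): lambda_detector = z0*lambda/h for a single wavelength, m = 1."""
    theta = math.asin((2 * lam - lam_L) / (2 * D)) - theta_L
    h = f1 * math.sin(theta)
    return z0 * lam / h

# 5 nm from the Littrow wavelength the on-detector fringe period is a few hundred microns:
assert 1e-4 < fringe_period(630e-9) < 1e-3
```

Because h grows with detuning from λL while z0 is fixed, the fringe period shrinks as the wavelength moves further from the Littrow wavelength.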
3. Numerical results
In this section we use parameters similar to those of the systems used in [11–14], and compare the intensity on the detector when the electric fields are given by Eqs. (5) and (6) with the intensity given by Eq. (9), where further approximations have been used. The strong correspondence between the two models implies that sufficient accuracy remains in this regime even after the approximations have been made. The intensity will be calculated at an equally spaced set of points corresponding to a set of pixels on a detector, with the results being calculated for a set of equally spaced wavelengths which are then summed incoherently. The result using the complete form of the electric field as given by Eqs. (5) and (6) is

Icomplete{x, y} = [1/(n + 1)] Σ_{j=0..n} (Eref{x, y, λj} + Emeas{x, y, λj})(Eref{x, y, λj} + Emeas{x, y, λj})*   (11)

where the * denotes the complex conjugate, λj = λ0 + j∆λ, ∆λ = 0.1 nm, and n = 1000. For simplicity it is assumed that at the points on the focal plane where the spherical waves are centred
|Eref| = |Emeas|. While this is unlikely to generally be the case, as the incident light is diffracted into many different orders by the grating, an imbalance between the measurement and reference beams will just lead to the visibility of the fringes being reduced. The incoherent summation of the approximate results at each wavelength, as given by Eq. (9), is

Iapprox = [1/(n + 1)] Σ_{j=0..n} 2A{λj}²[1 + cos((x − h/2)hk/z0 + φmeas − φref)]   (12)

and in the calculations the following values were used: d1 = 100 mm, d2 = 154 mm, d3 = 20 mm, d4 = 100 mm, f1 = 100 mm, D = 1 × 10⁻³/600 m, and θL = asin(625 × 10⁻⁹/(2D)), so that the grating is in the Littrow configuration when λ = 625 nm and m = 1. The length of the measurement arm is m1 = (154 + ∆m) mm, where ∆m is the shift of the mirror in the measurement arm away from its initial position. These parameters are chosen to be similar to those in Refs. [11–14], and match those used in the experimental setup in Section 4. The intensity of the light source is given a Gaussian form,

S{λ} = exp(−(λ − 625 × 10⁻⁹)²/(2(5 × 10⁻⁹)²))   (13)

and this light is assumed to be split equally between the measurement and reference arms, with E0 = √(S{λ}/2). The measured intensity is calculated at a set of 1001 × 1001 equally spaced points over the range −5 mm < x, y < 5 mm, with each point representing a pixel on a detector. Figure 4(a) shows the results using Eq. (11) when ∆m = 0 m, while Fig. 4(b) shows the result when ∆m = 5.001 × 10⁻⁴ m; the location of the fringes with high visibility can be seen to be highly dependent upon the location of the surface in the measurement arm. The shift of 5.001 × 10⁻⁴ m was used instead of 5 × 10⁻⁴ m to highlight the fact that the form of the signal within the envelope changes, with the central peak oscillating rapidly as ∆m is changed. Using a shift of 5 × 10⁻⁴ m the signal appears to take the same form as that when ∆m = 0, only shifted along the x axis. Figure 4(c) shows a plot of the intensities at points along the y = 0 axis when ∆m = 0, calculated using Eq. (12) (red line) and Eq. (11) (black dotted line), while Fig. 4(d) shows the same results when ∆m = 5.001 × 10⁻⁴ m. Figures 4(e) and 4(f) show the difference between the two lines in Figs. 4(c) and 4(d) respectively.
It can be seen that the difference between the two signals reaches at most about 1 percent of the peak-to-valley difference in the region around the centre of the envelope of the fringes.
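To make the comparison concrete, the following sketch reimplements the two models along the central line of pixels (y = 0). This is our own code, not the authors': λ0 is assumed to be 575 nm so that the 1000 steps of 0.1 nm span the Gaussian spectrum symmetrically, and the agreement and shift thresholds in the checks are our own. It reproduces the qualitative behaviour described above: the two models agree closely near the envelope centre, and the envelope moves by several millimetres when ∆m changes by 5 × 10⁻⁴ m.

```python
import numpy as np

# Parameters from Section 3 (metres)
d1, d2, d3 = 0.100, 0.154, 0.020
f1, z0 = 0.100, 0.100                     # z0 = d4, focal plane to detector
D, lam_L = 1e-3 / 600, 625e-9
theta_L = np.arcsin(lam_L / (2 * D))
lam = 575e-9 + 0.1e-9 * np.arange(1001)   # lambda_j = lambda_0 + j*0.1 nm (lambda_0 assumed)
x = np.linspace(-5e-3, 5e-3, 2001)        # central line of detector pixels, y = 0

def detector_signals(dm):
    """Return (I_complete, I_approx) along y = 0, i.e. Eqs. (11) and (12)."""
    m1 = 0.154 + dm
    I_c = np.zeros_like(x)
    I_a = np.zeros_like(x)
    for l in lam:
        k = 2 * np.pi / l
        th = np.arcsin((2 * l - lam_L) / (2 * D)) - theta_L      # Eq. (3)
        h = f1 * np.sin(th)
        a = (d2 + d3) * np.tan(th) - h
        L_ref = d1 + d2 + (d2 + d3) / np.cos(th) + np.sqrt(f1**2 + (a - h) ** 2)  # Eq. (7)
        L_meas = d1 + 2 * m1 + d3 + f1                           # Eq. (8)
        p_ref, p_meas = k * L_ref, k * L_meas
        E0 = np.sqrt(np.exp(-(l - 625e-9) ** 2 / (2 * (5e-9) ** 2)) / 2)          # Eq. (13)
        E_ref = E0 / ((x - h) ** 2 + z0**2) * np.exp(1j * (k * np.sqrt((x - h) ** 2 + z0**2) + p_ref))
        E_meas = E0 / (x**2 + z0**2) * np.exp(1j * (k * np.sqrt(x**2 + z0**2) + p_meas))
        I_c += np.abs(E_ref + E_meas) ** 2                       # Eq. (11) summand
        A = E0 / (x**2 + z0**2)
        I_a += 2 * A**2 * (1 + np.cos((x - h / 2) * h * k / z0 + p_meas - p_ref))  # Eq. (12)
    return I_c / len(lam), I_a / len(lam)

I_c0, I_a0 = detector_signals(0.0)
I_c1, _ = detector_signals(5e-4)
pv = I_c0.max() - I_c0.min()
peak = int(I_c0.argmax())
near = slice(max(peak - 100, 0), peak + 100)
# The two models agree to within a few percent of peak-to-valley near the envelope centre:
assert np.max(np.abs(I_c0[near] - I_a0[near])) < 0.05 * pv
# Moving the mirror by 0.5 mm moves the envelope well across the detector line:
assert abs(int(I_c1.argmax()) - peak) > 100
```

The loop over wavelengths keeps memory modest while still vectorising over the detector line; the full 1001 × 1001 grid of the paper is a straightforward extension.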
Fig. 4. Part (a) shows the intensity of the signal on the detector calculated numerically when ∆m = 0 m, while part (b) shows the result when ∆m = 0.5001 × 10−3 m. Parts (c) and (d) show the results along the line of pixels y = 600 for parts (a) and (b) respectively, shown by the black dotted line, while the solid red line shows the results when the approximations have been made. Parts (e) and (f) show the difference between the two results in parts (c) and (d) respectively.
4. Experimental verification
In this section we compare a set of experimental results generated by an instrument of the form shown in Fig. 1 with those predicted by Eq. (12). While the interference peak obtained in the experimental results is far sharper than that predicted by this model, the movement of the peak does match the prediction to a reasonable degree of accuracy. The difference in the exact form of the signal is only to be expected, as we have used a very stripped-down model to represent the system; for instance, the source in the model is an ideal point source which is subsequently perfectly collimated, while in the experiment the source is the end of a fibre whose core diameter is 0.4 mm, and which will be imperfectly collimated across the spectrum by the real collimator. Effects such as these may wash out some of the interference fringes. The experimental setup consists of light from an LED light source (Thorlabs M625F2) being coupled into a multimode fibre (Thorlabs M48L01, core diameter 400 µm) before being collimated by a triplet collimator (Thorlabs TC25FC-633) and split using a non-polarising 50:50
beamsplitter. The grating in the reference arm is a ruled reflective diffraction grating with 600 lines/mm (Thorlabs GR25-0605). In the measurement arm a mirror was mounted on a manual translation stage allowing it to be moved along the optical axis of the system. The light from both the reference and measurement arms is incident on a lens with a focal length of 100 mm, and the detector is a further 154 mm beyond this. A Lumenera Lw235M camera was used as the detector, which has 1616 by 1216 pixels that are 4.4 µm square, giving a total detector size of 7.11 mm by 5.35 mm. Looking at Fig. 5(a) it can be seen that there is a bright vertical line around the pixel x = 180. This interference signal moves across the detector as the mirror is moved along the optical axis. In Fig. 5(b) the mirror has been moved along the axis by 750 µm and the interference signal has moved across the detector, now being seen around the pixel x = 1245. Figures 5(c) and 5(d) show line plots along the pixel line y = 600. While in both cases here the interference is constructive, giving a peak that is higher than the background signal, at other points the interference is destructive, leading to a lower intensity than the background. Variation such as this was seen in the model, as can be seen in the plots shown in Figs. 5(e) and 5(f). In the numerical simulation the results for 1001 equally spaced wavelengths between 628.5 nm and 636.5 nm are used, with the grating
Fig. 5. Parts (a) and (b) show the intensity seen on the camera before and after a 750 µm shift of the mirror along the measurement arm, while parts (c) and (d) show the intensity along the pixel line y = 600 for parts (a) and (b) respectively. Parts (e) and (f) show the results for a simulation, where in part (e) the centre of the envelope of the interference fringes is set close to the location of the peak in part (c), and part (f) shows the signal after a 750 µm shift of the object in the measurement arm. It can be seen that, while the envelope is far wider, the shift of the interference signal along the x axis is close to that seen experimentally.
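The numbers reported for Fig. 5 support a quick consistency check. The sketch below is our own bookkeeping ('sensitivity' is not a term the paper defines): it converts the observed peak movement in pixels into a physical displacement on the detector, and into a detector-shift-per-mirror-shift ratio.

```python
pixel_pitch = 4.4e-6                  # Lumenera Lw235M pixel size (m)
peak_before, peak_after = 180, 1245   # peak pixel in Figs. 5(a) and 5(b)
mirror_shift = 750e-6                 # mirror movement along the measurement arm (m)

detector_shift = (peak_after - peak_before) * pixel_pitch
sensitivity = detector_shift / mirror_shift

# The 750 um mirror move produced roughly 4.7 mm of peak movement on the detector,
# i.e. the peak moves about six times faster than the mirror:
assert 4.6e-3 < detector_shift < 4.8e-3
assert 6.0 < sensitivity < 6.5
```

This magnification of mirror displacement into peak displacement is what makes the moving envelope a usable position signal.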
being in the Littrow configuration for light at a wavelength of 632.5 nm. The plot in Fig. 5(e) was generated with ∆m = 0.41321 × 10⁻³ m, meaning the centre of the envelope of the signal is around the pixel x = 180, while Fig. 5(f) shows the result when ∆m has been reduced by 750 µm, at which point the centre of the envelope of the signal is around x = 1350.

5. Discussion and conclusions
Previous papers have described the effect of placing a tilted diffraction grating in the reference arm of an interferometer as applying a continuous optical delay in the direction of the grating dispersion. This is a great simplification and does not clearly describe the grating's effect, especially when a broadband light source is used, and as such we have carried out a detailed examination of the signal formation. While we have restricted ourselves to a very simple scatterer in the measurement arm, this still allows us to examine the effect of such a reference signal. It is also enlightening to make a comparison between the path the light takes via the reference arm and a Czerny-Turner spectrometer with a detector shifted back from the focal plane. It is clearly highly significant that light of different wavelengths is diffracted at different angles by the grating, which leads to it being focused to different points on the focal plane of the lens in front of the detector. This gives a wavelength-dependent reference path which, when combined with the light from the measurement arm, leads to the formation of an envelope on the interference fringes that moves as the scattering object is moved along the measurement arm, giving a signal that can be used to track its location. However, unlike the signal created in an instrument such as a white light interferometer, the shape that the interference signal takes within this envelope varies rapidly as the object moves. While the experimental results give a sharper peak in the interference signal than is predicted by the simulation, the signal does move in the manner predicted.
The differences between the experimental signal and that predicted by the model can be attributed to numerous factors that the simple model does not accurately include; for example, the large size of the light source (the 0.4 mm core of the multimode optical fibre) combined with an imperfect collimator will lead the light at each wavelength to be focused to a larger region than the sharp focal spot predicted by the model. Such an effect may lead to the interference fringes being somewhat washed out. Imperfect spacing of the elements, chromatic dispersion, and any tilt on the scattering measurement object will also lead to changes in the signal. However, there is sufficient correspondence to imply that the model is capturing the essential nature of the system. We note that when the object in the measurement arm is replaced by something with more complex features than used here, the light falling on the detector will no longer take the form of a single spherical wavefront diverging from a point on the optical axis; however, the light from the reference arm that it interferes with will remain exactly the same as in the case here. It is likely that the complex form of the reference signal will mean that the interpretation of signals generated on the detector will be fairly difficult, making it hard to obtain results that can be accurately assessed in the general case.

Appendix A

In this appendix we justify the use of the length of a ray in order to calculate the phase that is found at the detector. The argument will be made for a transmission grating, though the result should hold for reflection gratings with only slight modifications to take into account the effect of the reflective materials. The situation is illustrated in Fig. 6, though it should be noted that the field is more complex than is drawn in the region close to the screen.
The fields away from the screen take the form of a collimated beam incident upon it and a collimated beam diffracted at the angle given by the grating equation after it; both of these are simply extended back to the screen in the diagram.
Fig. 6. Two rays, ABC and DEF, passing through two adjacent slits in a diffraction grating. The ray GHI is an arbitrary ray between these two rays.
The first important point to note is that it is not the true path length that we are interested in; all we need to calculate is the phase at a point in space a significant distance away from the grating. In Fig. 6 the red lines represent the maxima in the wavefront both before and after the grating, which in this case takes the form of a periodic set of slits in a screen. The screen is assumed to perfectly absorb the incident field at the points where it is present, and to leave the incident field unaffected at the points where there are holes in the screen. Thus if we know the field at points A or D in Fig. 6, we can calculate the field at points B and E. Beyond the screen the field is the same as that given by a set of coherent secondary sources, and we assume the field at the centre of each slit is unchanged from that incident upon it from the left-hand side. We again note that the field in the vicinity of the slits is more complex than illustrated in the figure; however, away from the screen the behaviour of the light is expected to be that of a flat wavefront propagating as a beam at the diffracted angle. Thus if we consider the length of the path along a ray through the centre of a slit, the phase there is the same as that of the incident wave, and this point acts as the source of the scattered wave; thus, knowing the distance AC, we can relate the phase at these two points. If we now consider ray DEF, which passes through the adjacent slit, we can see that there is an additional distance mλ in the path, where in the case illustrated m = 1. Thus while the path length is different, the phase is the same. Hence if we take any ray through the centre of a slit and calculate the path length, while the distances may differ the phases will not, and there is nothing special about the path chosen in Fig. 2.
Looking at a ray that starts on the wavefront between points A and D, and propagates parallel to ABC to a point between points C and F, as illustrated by ray GHI in Fig. 6, it can be seen that the length of the ray will not give the correct phase at point I, as the ray has the same length as ray ABC when it meets the blue dashed line parallel to the screen shown below point C. However, it can also be seen that the error in phase is linear, varying from 0 to 2π as the starting point moves from point A to point D. If the path to the screen is the same for all of the incident wavelengths, then the error in phase will be the same for all wavelengths. Thus the maximum error when using this ray-tracing method corresponds to all of the wavelengths being offset by the same amount of phase in the range 0 to 2π. If m is greater than one, the error in phase passes through the range 0 to 2π m times as the starting point moves from point A to D. Indeed, a similar ray-length argument is made for broadband interferometric systems using pairs of gratings, such as in [17], and the results obtained there agree with the geometrical picture.
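The central claim of this appendix, that ray paths differing by a whole number of wavelengths carry identical phase, can be checked in a couple of lines (the path length L below is an arbitrary illustrative value):

```python
import cmath
import math

lam = 625e-9
k = 2 * math.pi / lam
L = 0.3142                          # arbitrary ray length (m)

# Rays through adjacent slits differ in length by m*lambda (Fig. 6), but the
# complex field exp(ikL) they deliver is the same to numerical precision:
for m in (1, 2, 3):
    assert abs(cmath.exp(1j * k * (L + m * lam)) - cmath.exp(1j * k * L)) < 1e-6
```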
Appendix B

In this appendix we include details of the approximations that are made in order to obtain Eq. (9) from Eqs. (5) and (6). This closely follows the mathematics laid out in chapter 7 of [16] and is included for completeness, so that the approximations can be seen, allowing the reader to judge their accuracy and applicability for different systems. We start from the fields incident on the detector as given by Eqs. (5) and (6), with the intensity recorded by the detector being I = [Emeas + Eref][Emeas + Eref]*, where the asterisk indicates the complex conjugate. The first approximation that we apply is that (x − h)² + y² + z0² ≈ x² + y² + z0² in the denominator of the coefficient of the exponential term in Eq. (5); this is justified as z0 >> h, x, y, and corresponds to the condition that the variation in the field magnitude due to the different distances propagated is negligible. The intensity is then

I{x} = (Eref + Emeas)(Eref + Emeas)* = A²(exp(i[k√((x − h)² + y² + z0²) + φref]) + exp(i[k√(x² + y² + z0²) + φmeas])) × c.c.   (14)

= 2A²[1 + cos(k√((x − h)² + y² + z0²) − k√(x² + y² + z0²) + φref − φmeas)]

where A = E0{λ}/(x² + y² + z0²), and c.c is the complex conjugate of (Eref + Emeas). To simplify the next step, the x coordinate is transformed to x̂ = x − h/2, giving

I{x̂} = 2A²[1 + cos(k√((x̂ − h/2)² + y² + z0²) − k√((x̂ + h/2)² + y² + z0²) + φref − φmeas)]   (15)

Defining

r1 = √((x̂ + h/2)² + y² + z0²),  r2 = √((x̂ − h/2)² + y² + z0²)   (16)

then

∆r = r2 − r1 = (r2² − r1²)/(r2 + r1) = −2x̂h/(r2 + r1) ≈ −x̂h/z0   (17)

Substituting this back into Eq. (15), using the evenness of the cosine, and reverting to x gives

I{x} = 2A²[1 + cos((x − h/2)hk/z0 + φmeas − φref)]   (18)

which is the form given in Eq. (9).
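The accuracy of the key path-difference approximation ∆r ≈ −x̂h/z0 is easy to probe numerically. The values below are our own illustrative choices, roughly matching the scale of the system modelled in Section 3:

```python
import math

# Illustrative scales: detector offset z0, focal-spot separation h, detector coordinate x_hat
z0, h, x_hat, y = 0.100, 1.0e-3, 2.0e-3, 0.0
k = 2 * math.pi / 625e-9

r1 = math.sqrt((x_hat + h / 2) ** 2 + y**2 + z0**2)
r2 = math.sqrt((x_hat - h / 2) ** 2 + y**2 + z0**2)
dr_exact = r2 - r1
dr_approx = -x_hat * h / z0           # the small-offset approximation

# The path-difference error is far below a wavelength, so the resulting
# phase error k*|dr_exact - dr_approx| is a small fraction of a radian:
assert abs(dr_exact - dr_approx) / abs(dr_exact) < 1e-3
assert k * abs(dr_exact - dr_approx) < 0.1
```

The residual error grows towards the detector edges, which is consistent with the model differences in Figs. 4(e) and 4(f) being largest away from the envelope centre.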
Funding

Engineering and Physical Sciences Research Council (EP/P006930/1); Renishaw PLC/Royal Academy of Engineering (RCSRF1516/2/7).

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254(5035), 1178–1181 (1991).
2. J. M. Schmitt, "Optical coherence tomography (OCT): a review," IEEE J. Sel. Top. Quantum Electron. 5(4), 1205–1215 (1999).
3. C. A. Puliafito, M. R. Hee, C. P. Lin, E. Reichel, J. S. Schuman, J. S. Duker, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto, "Imaging of macular diseases with optical coherence tomography," Ophthalmology 102(2), 217–229 (1995).
4. M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography of the human retina," Arch. Ophthalmol. 113(3), 325–332 (1995).
5. H. Pahlevaninezhad, M. Khorasaninejad, Y.-W. Huang, Z. Shi, L. P. Hariri, D. C. Adams, V. Ding, A. Zhu, C.-W. Qiu, F. Capasso, and M. Suter, "Nano-optic endoscope for high-resolution optical coherence tomography in vivo," Nat. Photonics 12(9), 540–547 (2018).
6. Z. Ding, Y. Zhao, H. Ren, J. S. Nelson, and Z. Chen, "Real-time phase-resolved optical coherence tomography and optical Doppler tomography," Opt. Express 10(5), 236–245 (2002).
7. J. Welzel, "Optical coherence tomography in dermatology: a review," Skin Res. Technol. 7(1), 1–9 (2001).
8. Y. Yang, P. O. Bagnaninchi, M. Ahearne, R. K. Wang, and K.-K. Liu, "A novel optical coherence tomography-based micro-indentation technique for mechanical characterization of hydrogels," J. R. Soc., Interface 4(17), 1169–1173 (2007).
9. I. Zeylikovich, A. Gilerson, and R. Alfano, "Nonmechanical grating-generated scanning coherence microscopy," Opt. Lett. 23(23), 1797–1799 (1998).
10. I. Zeylikovich, Q. Liu, G. Bai, N. Zhadin, A. Gorokhovsky, and R. Alfano, "Interferometric 2D imaging amplitude correlator for ultrashort pulses," Opt. Commun. 115(5-6), 485–490 (1995).
11. Y. Watanabe, K. Yamada, and M. Sato, "Three-dimensional imaging by ultrahigh-speed axial-lateral parallel time domain optical coherence tomography," Opt. Express 14(12), 5201–5209 (2006).
12. Y. Watanabe, K. Yamada, and M. Sato, "In vivo non-mechanical scanning grating-generated optical coherence tomography using an InGaAs digital camera," Opt. Commun. 261(2), 376–380 (2006).
13. Y. Watanabe, Y. Takasugi, and M. Sato, "Axial-lateral parallel time domain OCT with an optical zoom lens and high order diffracted lights at 830 nm," in Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine XII, vol. 6847 (International Society for Optics and Photonics, 2008), p. 68472K.
14. Y. Watanabe, Y. Takasugi, K. Yamada, and M. Sato, "Axial-lateral parallel time domain OCT with an optical zoom lens and high order diffracted lights for variable imaging range," Opt. Express 15(8), 5208–5217 (2007).
15. M. Czerny and A. Turner, "Über den Astigmatismus bei Spiegelspektrometern," Eur. Phys. J. A 61(11-12), 792–797 (1930).
16. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Elsevier, 2013).
17. A. Henning, J. Williamson, H. Martin, and X. Jiang, "Improvements to dispersed reference interferometry: beyond the linear approximation," Appl. Opt. 58(1), 131–136 (2019).