Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations.
One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 10⁷) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those under multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly.
In 1893, John William Strutt, later Lord Rayleigh, performed a series of experiments on the determination of the absolute densities of atmospheric gases with an unprecedented accuracy of one hundredth of a per cent. For direct measurements of pressure that extended over very small ranges, he designed and constructed a manometric gauge, an apparatus offering several points of novelty, mechanical precision and advanced optical diagnostics. Rayleigh noticed that the density of nitrogen extracted from air disagreed with that of nitrogen released from chemical reactions, in particular nitrogen prepared from ammonia (Rayleigh 1893). This discrepancy amounted to one-half of a per cent and might not have been noticed by a less scrupulous scientist. However, the result was beyond measurement error and definitive. These observations attracted the attention of William Ramsay, who suggested that atmospheric nitrogen may contain an admixture of another, heavier gas. The idea was audacious, as the composition of air had been studied extensively by that time. Support for this idea came unexpectedly from another highly accurate experiment. As early as 1785, when applying electric sparks to atmospheric nitrogen and oxygen in the presence of alkali, Henry Cavendish reported that about 1/120 of the bulk of air did not participate in chemical reactions (Cavendish 1785). Repeating Cavendish’s experiments, applying various techniques for extracting atmospheric nitrogen and systematically studying its physical and chemical properties, Rayleigh and Ramsay reported the discovery of a new, chemically neutral, monatomic gas that was named argon (‘inactive’ in ancient Greek). In the following years came the discoveries of helium, xenon, neon and krypton; in 1899, in studies of the radioactive decay of thorium, radon was found, completing the eighth group of chemical elements in Mendeleev’s periodic table. A release of 300 ml of xenon required the processing of 77.5 million litres of air.
The delicacy and precision of Rayleigh’s investigations were extraordinary (Rayleigh & Ramsay 1895).
We recount this bit of history here as a classic example of how high-precision quantitative measurements may lead to scientific discoveries and reveal qualitative secrets of nature that go far beyond a large collection of perceptible facts. This lesson also has implications for turbulent mixing.
Turbulent motion of fluids is often characterized by non-equilibrium heat and mass transport and sharp gradients of density and pressure, and may be subject to spatially varying and time-dependent acceleration or rotation (e.g. Reynolds 1894; Tennekes & Lumley 1972; Frisch 1995; Sreenivasan 1999; Abarzhi 2008). Its theoretical description is challenging, and its complexity encourages large-scale numerical simulations. On the experimental side, such turbulent flows are hard to study systematically in a well-controlled laboratory environment (e.g. Bradshaw 1971; Meshkov 2006). Their sensitivity to details and the transient character of the dynamics impose constraints on the accuracy and spatio-temporal resolution of the measurements of the flow quantities, as well as on the data-acquisition rate. Despite several state-of-the-art experiments in the past, the question ‘how to quantify these flows reliably’ still remains open.
Even for the simplest canonical turbulent flows, whose dynamics is statistically steady, existing experimental capabilities do not provide sufficient information for a precise and quantitative verification of predictions of theories and models (Kolmogorov 1941a,b; Batchelor 1953; Monin & Yaglom 1975; Barenblatt 1979; Pope 2000). The theories and models, in contrast, frequently become either too mathematical or needlessly empirical, involve a multiplicity of adjustable parameters and do not always identify a set of robust parameters that need to be diagnosed precisely (e.g. Reynolds 1883; Barenblatt & Goldenfeld 1995; Abarzhi 2008; Procaccia & Sreenivasan 2008). Massive information is usually available in numerical simulations, which can track all predetermined flow quantities (e.g. Kaneda et al. 2003; Dimonte et al. 2004; Donzis et al. 2005; Arneodo 2008). However, compared with experiments, simulations are less informative about the physical phenomena, as they sample a mathematical model and do not always quantify the amount of error, which may be substantial, especially when the turbulent processes depart from familiar scenarios. To identify the universality and randomness of realistic turbulent processes and to extend our knowledge of turbulence beyond idealized considerations, one requires new experimental methodologies that are capable of providing adequate statistics and that augment existing approaches in non-canonical circumstances with tight control over the experimental parameters (Orlov & Abarzhi 2007).
The methods of experimental diagnostics of fluid turbulence have improved steadily over time. Until the late 1920s, flow measurements were limited to time-averaged properties such as the average velocity and pressure differences (e.g. Reynolds 1883; Schlichting 1956; Bradshaw 1971). Valuable insights into turbulence characteristics were provided by Prandtl (1925), who perfected qualitative flow visualization techniques (Prandtl & Tietjens 1934). Schlieren and shadowgraph techniques became widespread in flows with density differences (e.g. Settles 2001). In the 1950s, thermal anemometry became a basic metrological tool as it allowed for the quantification of the rapid velocity fluctuations, by measuring the change in resistance of a small, slightly heated probe placed in the flow (e.g. Bruun 1995). A little later, laser Doppler velocimetry became available (e.g. Durst 1981). These traditional turbulence measurements are made at one (or a few) point(s) in the flow and yield the temporal dependencies of the velocity, temperature or other fields. Spatial properties of turbulent dynamics along the direction of the mean velocity are often derived from the temporal linear traces via Taylor’s hypothesis (Taylor 1935), whose limitations are not completely understood, even in the case of simple canonical flows (Lumley 1965; Badri Narayanan et al. 1977; Douady et al. 1991; Sreenivasan 1991). Indeed, the assumptions involved in the hypothesis make it hardly applicable under transient conditions (Abarzhi 2008). A broad introduction to the traditional methods of flow visualization and measurement techniques is given in Goldstein (1996) and Tavoularis (2009).
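As a simple numerical illustration of Taylor’s hypothesis as it is used above, the sketch below (in Python; all values are illustrative, not taken from any experiment) reinterprets a probe’s time series as a spatial cut through the flow, mapping temporal frequency to streamwise wavenumber:

```python
import numpy as np

# Taylor's frozen-turbulence hypothesis: a probe's time series u(t) is
# reinterpreted as a spatial cut u(x) with x = U_mean * t, valid only when
# fluctuations are weak compared with the mean advection velocity U_mean.
U_mean = 10.0                             # mean velocity, m/s (illustrative)
fs = 10_000.0                             # probe sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)           # 1 s of data
u = 0.5 * np.sin(2 * np.pi * 50 * t)      # synthetic 50 Hz fluctuation

# Temporal frequency f maps to streamwise wavenumber k = 2*pi*f / U_mean.
x = U_mean * t                            # surrogate spatial coordinate, m
k_50Hz = 2 * np.pi * 50.0 / U_mean        # wavenumber of the 50 Hz mode, rad/m
wavelength = 2 * np.pi / k_50Hz           # = U_mean / f = 0.2 m
```

The mapping is only as good as the frozen-turbulence assumption; under the transient conditions discussed above, no such simple substitution is available.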
In the past 20 years, two major accomplishments in experimental diagnostics of turbulent flows are particle image velocimetry (PIV; e.g. Dudderar & Simpkins 1977; Meynart 1980; Adrian 1991; Willert & Gharib 1991; Raffel et al. 2007) and planar laser-induced fluorescence (PLIF; e.g. Sreenivasan et al. 1989; Seitzman & Hanson 1993), applied to measure velocity and passive scalars, respectively. Volumetric measurement of the scalar field has also been attempted (e.g. Prasad & Sreenivasan 1990; Buch & Dahm 1996, 1998). In high Reynolds number flows, characteristic velocities and accelerations are high and vary increasingly rapidly, and, to follow the flow, the seed particles tracking the motion must be relatively small. The small scattering cross section and low light-collection efficiency, combined with the short-time exposures needed to capture images without blur, led to the use of high-intensity lasers. For better resolution of small-scale dynamics, the flow is seeded with multiple particles, and the auto-correlation of double exposures is applied for processing images of the particle fields (e.g. Adrian 2005). Conventional particle techniques analyse data taken from a single plane within the volume of fluid flow and provide two-dimensional velocity distributions. More advanced stereoscopic PIV allows for a derivation of a third out-of-plane velocity component in a single or in a few planes (Lecerf et al. 1999). These tools have played an important role in studies of a variety of practical problems including atmosphere pollution, combustion in diesel engines, particle segregation and clustering (Bec et al. 2006). They need further development for studies of fundamental aspects of canonical turbulent flows, such as small-scale intermittency or the connection between Lagrangian and Eulerian descriptions, in part due to resolution limitations of the measurement technology (Crawford et al. 2005; Donzis et al. 2005; Schumacher 2007).
Grasping essentials of the turbulent dynamics in non-canonical circumstances will certainly require novel approaches for flow visualization and for the measurement of turbulent flow quantities.
As in all natural processes, turbulent transport obeys laws of conservation of mass, momentum (angular momentum, if applicable) and energy (Landau & Lifshitz 1987a,b). In the description of canonical Kolmogorov-type turbulence, mass and momentum are conserved owing to conditions of homogeneity, isotropy and locality. In the asymptotic regime of statistically steady flows, when the influence of initial and boundary conditions is regarded as negligible, the flow is driven solely by the transport of kinetic energy (Kolmogorov 1941a,b). Energy is conjugated with time, and space–time properties are related by the Taylor hypothesis, thus allowing for the interpretation of experimental data, such as linear traces in thermal anemometry or instantaneous snapshots of particle fields in PIV, as markers of turbulent dynamics. Substantial dynamic range, extending over several decades, is required for accurate quantification of power laws describing turbulent dynamics, whereas large values of macroscopic hydrodynamic numbers, such as the Reynolds number, ensure that the flow is truly turbulent (e.g. Sreenivasan & Meneveau 1988; Jayesh & Warhaft 1992).
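The dynamic-range requirement can be made concrete with a short numerical sketch (Python; the synthetic spectrum and noise level are assumptions for illustration): a power law appears as a straight line in log–log coordinates, and a reliable estimate of its exponent needs several decades of scale separation.

```python
import numpy as np

# Fit the exponent of a synthetic Kolmogorov-like spectrum E(k) ~ k^(-5/3)
# sampled over four decades of wavenumber, with 5% multiplicative scatter
# standing in for measurement noise.
rng = np.random.default_rng(0)
k = np.logspace(0, 4, 200)                                    # four decades
E = k ** (-5.0 / 3.0) * np.exp(rng.normal(0.0, 0.05, k.size))  # noisy power law

# Least-squares fit of log E against log k recovers the exponent.
slope, intercept = np.polyfit(np.log(k), np.log(E), 1)         # slope ~ -5/3
```

Repeating the fit over a single decade instead of four inflates the uncertainty of the recovered exponent considerably, which is the practical reason why a wide dynamic range is insisted upon in the text.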
In contrast to the classical Kolmogorov scenario, realistic turbulent processes are often statistically unsteady and are governed by (in general, independent) transport laws for mass, momentum and potential and kinetic energies. Momentum is conjugated with space, and to capture momentum transport, one has to diagnose spatial distributions of the velocity and density fields, along with their temporal dependencies (Landau & Lifshitz 1987a,b; Abarzhi 2008). Furthermore, for accurate experimental quantification of space–time properties of non-equilibrium dynamics, these measurements would require relatively high spatial accuracy and temporal resolution, sufficient for the evaluation of spatial and temporal derivatives of the flow quantities, whereas dynamic range and data-acquisition rate should be adequate for providing ample statistics from a single experimental run (Abarzhi 2008). Compared with canonical cases, repeatable implementation of non-equilibrium turbulent flows in the laboratory environment requires tighter control of the macroscopic parameters, as well as of the initial and boundary conditions owing to sensitivity and the transient character of their dynamics.
Reliable quantification of such turbulent processes appears to be a formidable experimental task. Current flow-diagnostics technologies restrict application to nearly canonical flows, whereas there is a substantial need, in both practical applications and validation of new theoretical approaches and large-scale numerical simulations, to understand complex turbulent flows under realistic conditions, including multi-phase, buoyancy-driven and high Reynolds number flows, and to provide adequate experimental description for their scaling, structure and statistics (e.g. Kaneda et al. 2003; Dimonte et al. 2004; Meshkov 2006; Abarzhi et al. 2007; Arneodo 2008; Benzi et al. 2008). Application of modern technologies developed in several optical engineering and industrial fields can potentially offer significant leverage in this direction (Orlov 2008). Enhancement by orders of magnitude in accuracy, precision, reproducibility, control, data-acquisition rate and information capacity can be envisaged using these new technologies, so that the quality of experimental data can be brought to higher standards.
This review is structured as follows. In §3, the basics of digital holographic storage technology are discussed with emphasis on the aspects that are most relevant to the design and actual implementation of the high-performance hydrodynamic experiments, including advanced optical imaging, high-precision motion control and digital signal processing. In §4, we describe digital holography and several three-dimensional imaging techniques with a particular focus on their application for holographic PIV (HPIV). Section 5 outlines two laboratory-scale integrated experimental platforms for studies of hydrodynamic instabilities and turbulence in rotating flows at high Reynolds numbers that may incorporate these technologies, both for precise realization of the complex flows and for advanced three-dimensional holographic diagnostics. We conclude the review with a brief summary in §6.
3. Digital holographic technology
In hydrodynamics experimental systems, the quality of flow diagnostics (e.g. optics and imaging) needs to be adequate to characterize the complex fluid flows realized by sophisticated mechanical and motion-control technologies (e.g. wind tunnels, laboratory-scale mechanical setups, etc.). This situation is similar to digital holographic storage, which relies on several enabling technologies including precision mechanics, motion control, lasers, high-resolution optics and digital signal processing. The performance of each of these components has to be closely matched in order to maximize the holographic system capabilities, while emphasizing only one or a few aspects of the system is not sufficient to obtain the ultimate system-performance characteristics. We will provide an overview of how the capabilities of advanced holographic technology have evolved in the last 10–15 years and how techniques developed in digital holography may be applied to improve the quality and precision of the hydrodynamics experiments.
(a) Principles of information recording, storage and retrieval via holography
Holography provides the physical ability to reconstruct an object’s wavefront from the imprinted fringes, created by the interference of two optical beams in a photosensitive material (Gabor 1948; Leith & Upatnieks 1962; Denisyuk 1963). In digital holographic data storage, information is presented in the form of pixelated pages, which can be rather large (of the order of 1 million pixels), stored in the form of volumetric gratings. The data-page storage is performed by intersecting two coherent laser beams within the photosensitive storage material (figure 1). The object beam contains the information to be stored, and the reference beam is designed to be simple to reproduce, for example, a collimated beam with a planar wavefront, or a spherical wave. The resulting optical interference pattern causes chemical and/or physical changes in the photosensitive medium: a replica of the interference pattern is stored as a change in the refractive index, absorption or thickness of the photosensitive medium. When the stored interference grating is illuminated with a reference beam, some fraction of the incident light is diffracted by the grating, thus reconstructing the original signal pattern that contains the stored data.
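The recording and readout principle just described can be captured in a minimal one-dimensional numerical model (Python; the waveforms and amplitudes are illustrative assumptions): the medium stores the interference intensity of object and reference waves, and re-illumination by the reference regenerates the object wave alongside the familiar DC and twin-image terms.

```python
import numpy as np

# Minimal numerical model of holographic recording and readout on a 1-D
# detector line: the recorded intensity of object + reference interference
# is re-illuminated by the reference, which regenerates the object term.
n = 256
x = np.arange(n)
obj = 0.2 * np.exp(1j * 2 * np.pi * 0.05 * x)    # weak "object" wave
ref = np.exp(1j * 2 * np.pi * 0.15 * x)          # plane reference wave, |ref| = 1

# Recording: the medium stores the interference intensity (the hologram).
hologram = np.abs(obj + ref) ** 2

# Readout: illuminating the stored fringes with the reference wave yields
# a DC/halo term, the object wave, and its conjugate ("twin image").
readout = hologram * ref
reconstructed = readout - (np.abs(obj) ** 2 + 1.0) * ref - np.conj(obj) * ref ** 2

# With |ref| = 1, the remaining term is exactly the original object wave.
error = np.max(np.abs(reconstructed - obj))
```

In a physical system the DC and twin-image terms are separated by geometry (off-axis recording) or Bragg selectivity rather than by the explicit subtraction used here for clarity.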
Owing to the Bragg selectivity of volume holograms, a large number of data pages can be superimposed in the same location of the medium and accessed independently by probing with an appropriate reference beam, as long as the gratings belonging to the different pages are distinguishable. The process of superimposing multiple holograms is referred to as multiplexing. Basic multiplexing techniques include the angular method (Leith et al. 1966; d’Auria et al. 1973), in which the reference beams differ by the angle of their incidence, the wavelength method (Leith et al. 1966; Yu et al. 1991; Rakuljic et al. 1992), in which the references differ in their optical wavelengths, and the shift method (Psaltis et al. 1995), in which the same reference beam of a curved wavefront (e.g. spherical) is used, but the recording and readout are performed at different lateral positions of the medium.
Theoretically, holography offers a high volumetric storage density with an upper limit of approximately V/λ³ (van Heerden 1963), where V is the material volume and λ is the light wavelength, which translates into several terabits per cubic centimetre of the storage medium. Data pages can contain large numbers of data pixels and, in practice, several million bits per page have been demonstrated. Since a page is stored and recalled as a whole, data-transfer rates can be extremely high, exceeding 10 Gb s−1. Among other unique properties of holographic data storage is the possibility of realizing an extremely fast data-access time, as short as 50 μs or less, because the optical reference beam can be moved very rapidly without any inertia.
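A back-of-envelope check of van Heerden’s V/λ³ bound (sketched in Python; the 1 cm³ volume is an assumed example, the 532 nm wavelength is the laser line quoted later in this review) confirms the ‘several terabits per cubic centimetre’ figure:

```python
# van Heerden's volumetric limit: at most ~V / lambda^3 bits in volume V.
V = 1e-6              # 1 cm^3 expressed in m^3
lam = 532e-9          # wavelength in m (frequency-doubled Nd:YAG)
bits = V / lam ** 3   # theoretical upper bound, in bits
terabits = bits / 1e12  # ~6.6 Tb per cm^3
```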
(b) High-resolution imaging and digital signal processing in holographic data storage
The holographic pages usually include modulation-channel coding and error correction in order to make the data more robust, but it is very important that the image of the data page is as close to the original object as possible. During the readout of the stored holographic data, the pixels in the image captured by the camera have to match the pixels of the charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) detector with approximately ±1 μm accuracy. Any optical aberrations and distortion in the imaging system or defocus of the detector array would spread energy from one pixel to its neighbours and introduce errors in the retrieved data. To provide high-quality imaging, several architectures employing different optical arrangements have been employed (figure 2).
Double Fourier transform imaging (figure 2b) usually outperforms other architectures and provides the best-quality images with lowest distortions for the same level of optical complexity compared with other solutions. The imaging optics used in real high-performance holographic storage platforms is rather sophisticated and is designed with multiple lens elements in order to realize the required low-distortion crisp imaging (figure 3).
A distinctive feature of digital holography is that each pixel in a data page (figure 4) is treated as a unique data channel, and its spatial position is controlled with nearly submicron accuracy throughout the image, whereas the digitized pixel values are measured with high precision (8 bits or higher), resulting in high spatial resolution and signal-to-noise ratio (SNR) over the entire image.
In digital holographic storage, the concepts of image data fidelity and the information capacity of the imaging data are treated rather rigorously and quantitatively and are characterized by the bit-error-rate (BER) and, more generally, by the SNR. The BER is the fraction of bits that are detected incorrectly after passing through the data channel when the bitstream size approaches infinity. In practice, the BER can be evaluated statistically by using the histogram of the received signal amplitudes constructed from a sufficiently large dataset. The CCD pixels are divided into two populations: those corresponding to 0s (OFF pixels) and 1s (ON pixels) of the original image (figure 5). The measured pixel intensities can be used to compute the normalized probability density functions W0(x) and W1(x) of the received signal strength for 0s and 1s, respectively, and to compute the BER as the integral over the overlapping regions of the distributions.
In holographic storage, the SNR is defined and evaluated numerically as

SNR = (μ1 − μ0)/(σ1 + σ0),    (3.1)

where μ1 and μ0 are the average values of detected 1s and 0s of an initially binary image, respectively, and σ1 and σ0 are the corresponding standard deviations. The SNR is the parameter that determines the BER in communications and data storage. In a more general sense, the SNR governs the information capacity of any experimental datasets, irrespective of their physical representation—whether the data are instantaneous snapshots, such as images obtained in PLIF or PIV, or one-dimensional temporal traces such as in thermal anemometry.
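The SNR and BER estimates described above can be sketched with a short Monte Carlo example (Python; the two Gaussian pixel populations and their parameters are assumptions, and SNR = (μ1 − μ0)/(σ1 + σ0) is the standard holographic-storage definition):

```python
import numpy as np

# Monte Carlo estimate of SNR and BER for a binary data page: draw the
# detected intensities of OFF (0) and ON (1) pixels from two assumed
# Gaussian populations, then threshold and count misclassifications.
rng = np.random.default_rng(1)
n = 200_000
off = rng.normal(0.2, 0.05, n)    # detected intensities of OFF pixels
on = rng.normal(0.8, 0.10, n)     # detected intensities of ON pixels

mu0, mu1 = off.mean(), on.mean()
s0, s1 = off.std(), on.std()
snr = (mu1 - mu0) / (s1 + s0)     # ~4 for these parameters

# A single intensity threshold separates the populations; pixels falling
# on the wrong side of it are the bit errors (the distribution overlap).
threshold = (mu0 * s1 + mu1 * s0) / (s0 + s1)   # equal sigma-distance point
ber = 0.5 * (np.mean(off > threshold) + np.mean(on < threshold))
```

A real system evaluates the histograms W0 and W1 from decoded pages rather than from an assumed model, but the thresholding and overlap-counting logic is the same.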
(c) The Stanford digital holographic storage system
The development of data storage systems based on holography started in the early 1960s, stimulated by the invention of the laser as well as by the pioneering work of van Heerden (1963), which postulated that storing an interference pattern in a three-dimensional medium can be used to store and retrieve digital information. The first systems built at Bell Labs, International Business Machines (IBM) and Radio Corporation of America (Anderson 1968; Lipp & Reynolds 1970; Steward et al. 1973) employed photographic films or plates, in which the recording layer was rather thin, making the massive multiplexing in the same volume difficult.
In the 1990s, a new wave of research and development activity in the field of holographic memories was triggered by the overall need for high-capacity archival data storage and by the rapid progress in the development of enabling technologies such as compact integrated spatial light modulators (SLMs), CCD cameras and all-solid-state lasers. These enabled the design and construction (at relatively low cost) of holographic storage systems with as many as 1 million pixels per page, in which the data pages could be read at video or higher rates (Hong et al. 1995). Further advancements in optics, materials and digital signal processing ultimately allowed a density as high as 388 bits μm−2 to be attained (Burr et al. 2001), proving the potential of the holographic technology for ultra-high-density storage. In parallel with systems development, significant progress has been made in developing high-sensitivity photopolymer media (Waldman et al. 1997; Dhar et al. 1998) for high-density holographic recording. Novel approaches have been introduced in both holographic multiplexing methods and imaging optics. New techniques, such as correlation (Darskii & Markov 1988), fractal (Psaltis et al. 1990), peristrophic (Curtis et al. 1994) and shift (Psaltis et al. 1995) multiplexing, were introduced and developed, while the introduction of high numerical aperture (NA) optics in holographic recording (Orlov et al. 2004) allowed for high individual data-page densities (more than 0.5 bit μm−2 per page).
A 1 Gb s−1 digital holographic system, developed at Stanford in the late 1990s, demonstrates the state-of-the-art holographic technology capabilities. It incorporates unique state-of-the-art custom imaging optics, holographic image-processing electronics, high-precision electro-mechanical and motion control technology, a dedicated high-speed CCD camera and synchronization electronics and sophisticated control software (figures 6 and 7).
The complete design of a digital holographic storage system is complex since multiple performance trade-offs have to be balanced simultaneously and accounted for (see Mikaelian et al. 1970; Bernal et al. 1996; Coufal et al. 2000; Orlov et al. 2004). Simultaneous requirements of high storage density and high data rate represent particular challenges since the optical signal strength of the images (which needs to be rather high for high readout speed) drops with the number of superimposed holograms, which, in turn, directly affects the storage density. Another important aspect is the implementation of precise and accurate motion control of the system, which becomes a substantial challenge since the holograms have to be transported under the holographic head at approximately 1 m s−1 linear velocity of the medium with a few tens of nanometres mechanical accuracy and with timing precision of a few tens of nanoseconds.
Holographic recording and readout are performed with a pulsed frequency-doubled Nd:YAG laser (532 nm, 500 μJ per pulse and 20 ns pulse length). A 1024×1024 IBM ferroelectric liquid crystal SLM was pixel-matched to a 1000 frames per second (fps) digital Kodak C7 CCD camera. The holographic disc, comprising the 200 μm thick photopolymer medium sandwiched between two optically flat glass substrates, is mounted on a precision air-bearing spindle capable of up to 10 000 r.p.m. The rotation rate is stabilized through active feedback digital electronic control to typically better than 0.01 per cent, while the air-bearing mechanical stiffness provides reproducible frictionless motion with a radial accuracy of 10 nm or better. The stability of the radial positioning during the disc rotation is actively monitored using a pair of high-sensitivity micro-electro-mechanical accelerometers (figure 7) employed in the differential mode to improve the detection of any mechanical imbalances. Addressing of different angular positions on the disc is performed using a precision laser shaft encoder, which provides up to 16 384 locations along the disc with a few nanoseconds (typically ±5 ns) timing accuracy. The shaft encoder and synchronization electronics allow any angular location on the disc to be addressed with an accuracy and repeatability of better than ±20 nm. To access different radial positions of the disc, the spindle is mounted on a high-precision mechanical stage with a positioning repeatability of ±25 nm.
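The quoted timing and positioning figures are mutually consistent, as a quick calculation shows (Python; the ~1 m s−1 linear velocity is the figure given in the text, and the track-radius estimate is an illustrative inference from it):

```python
import math

# Sanity check of the motion-control numbers: at ~1 m/s linear medium
# velocity, a +/-5 ns encoder timing accuracy maps to a +/-5 nm positional
# uncertainty along the track; a 300 r.p.m. spindle delivers ~1 m/s at a
# track radius of roughly 3 cm.
v = 1.0                                   # linear velocity of the medium, m/s
dt = 5e-9                                 # encoder timing accuracy, s
dx_nm = v * dt * 1e9                      # positional uncertainty, nm (= 5 nm)

rev_per_s = 300 / 60.0                    # 300 r.p.m.
radius = v / (2 * math.pi * rev_per_s)    # radius giving 1 m/s, ~0.032 m
```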
Real-time digital signal processing and holographic channel decoding are performed by custom high-speed reprogrammable electronics. The megapixel images from the camera are processed on-the-fly by the electronics at 1000 fps, providing real-time channel decoding, data de-interleaving and error correction.
The SLM is optically imaged onto the CCD array using a short-focal-length low-distortion (less than ±1.0 μm over the entire CCD field) custom-built optical double Fourier transform lens pair (figures 3 and 4). The total NA of 0.75 is divided between the central, high-resolution, low-distortion portion (NA=0.36), used by the SLM, and the outer area, used by the reference light. The signal and reference beams pass through the same set of optics, minimizing the physical size of the optical system and eliminating mechanical constraints typical of high NA optical systems. In order to minimize the imaging distortion, the optical design is made fully symmetrical using an optical compensator plate, whose thickness equals that of the disc; the plate is incorporated into the second objective lens (figure 3).
A series of performance demonstrations were carried out using this system. In the first demonstration, the source user data were encoded into digital holographic pages, stored in the holographic disc, retrieved at 1 Gb s−1 from a disc rotating at 300 r.p.m. (i.e. approx. 1 m s−1 linear velocity with respect to the optical head) and decoded on-the-fly by the electronics (Orlov et al. 2004). The decoded data were captured and converted back into original file format (JPEG colour images). A sample hologram readout and decoded at 1 Gb s−1 is shown in figure 8. In a later demonstration, 12 parallel uncompressed digital video streams were encoded and stored in the holographic disc, and subsequently retrieved, at 0.65 Gb s−1, electronically decoded on-the-fly and simultaneously displayed on three computer monitors (four video streams per computer).
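The demonstrated 1 Gb s−1 figure matches the raw pixel throughput of the system, as a simple check confirms (Python; the one-bit-per-pixel assumption ignores channel-coding and error-correction overhead):

```python
# Raw pixel throughput of the demonstration: a 1024 x 1024 data page read
# at 1000 frames per second, counting one bit per pixel before channel
# coding and error-correction overhead are subtracted.
pixels_per_page = 1024 * 1024
fps = 1000
raw_rate_bits = pixels_per_page * fps     # bits per second
raw_rate_gb = raw_rate_bits / 1e9         # ~1.05 Gb/s
```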
The hologram’s signal strength and the timing precision of the components and control hardware allowed for much higher optical data rates, given that the holograms could be physically transported and moved under the optical readout head and addressed with sufficient speed and accuracy. In high data-rate experiments (Orlov et al. 2004), sustained optical readout data rates as high as 10 Gb s−1 were demonstrated (figure 9).
As of today, the demonstrated 10 Gb s−1 optical transfer rate represents the highest data rate ever produced by any optical storage device. This, along with other unique attributes of the system, such as fully hardware-implemented 1 Gb s−1 holographic channel electronics and a high-resolution optical-imaging system, makes the Stanford holographic system an important and yet unsurpassed milestone in the development of digital holographic data-storage technology.
4. In-line digital holography and three-dimensional imaging
This section presents several specific implementations of holographic imaging methods, including one of the most common holographic measurement techniques that has been employed in experimental fluid mechanics: HPIV. While traditional HPIV involved recording holograms of the flow-marking particles on holographic media or plates (Barnhart et al. 1994; Meng & Hussain 1995a,b; Zhang et al. 1997; Sheng et al. 2003), as new digital sensors and computation methods were developed, it became apparent that recording the interferogram on a digital camera rather than film was advantageous, provided the interferogram was sufficiently sampled. Owing to its relative simplicity compared with traditional HPIV, digital HPIV evolved to become more attractive and popular. As in conventional HPIV, complete three-component three-dimensional (3C-3D) velocity vector fields of the flow can be characterized by digital HPIV with high spatial and temporal resolution, and an accurate quantitative description of the fluid flow can be obtained. There are many different types of HPIV and other holographic techniques in fluid velocimetry, such as digital speckle pattern interferometry (Bhaduri et al. 2006) and optical diffraction tomography (Lobera & Coupland 2008). In this section, we focus the attention of the reader on some aspects of in-line digital HPIV. In-line digital HPIV requires a relatively simple setup and, at the same time, has a unique capability to perform a three-dimensional measurement. Moreover, with some modifications enabling three-dimensional diagnostics, many particle-tracking techniques traditionally applied in PIV can still be employed in HPIV. For a more detailed account of the state of the art and the history of various HPIV methodologies, as well as for an outline of technical issues and a comparison of performance, the reader is referred to Hinsch (2002, 2004) and references therein.
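The processing step shared by conventional PIV and digital HPIV, namely estimating the particle-field displacement between two exposures from the peak of their cross-correlation, can be sketched as follows (Python; the synthetic image pair and the known shift are assumptions for illustration):

```python
import numpy as np

# Estimate the displacement between two exposures of a particle field from
# the peak of their cross-correlation, computed via the Fourier
# correlation theorem on a synthetic 2-D image pair.
rng = np.random.default_rng(2)
n = 64
frame1 = rng.random((n, n))               # first exposure (random "particles")
shift = (3, 5)                            # known displacement in pixels
frame2 = np.roll(frame1, shift, axis=(0, 1))   # second exposure, shifted

# Cross-correlation via FFTs; its maximum sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
recovered_shift = tuple(int(p) for p in peak)   # recovers (3, 5)
```

In digital HPIV the same correlation is applied to particle fields reconstructed numerically at a sequence of depth planes, which is what extends the measurement to three dimensions.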
Multiplexed volume holographic imaging (VHI) is another digital three-dimensional imaging method. Multiplexed VHI captures longitudinally distributed multiple focus planes and rearranges them laterally onto the image plane. This mapping is implemented in optics by using volume holographic diffractive optical elements. Multiplexed VHI has the potential to extend the capabilities of the traditional two-dimensional PIV systems to true three-dimensional imaging.
(a) In-line digital holography
Digital holography has revolutionized holography in many ways since it was first implemented (Goodman 1967; Goodman & Lawrence 1967; Schnars & Jüptner 1994). Digital holograms are recorded by digital detectors instead of holographic media and are reconstructed numerically. Although holograms are recorded with the same optical setup as in traditional holography, the digital reconstruction process is significantly more convenient and eliminates the need for a physical holographic medium. In addition, digital holography is capable of capturing holograms over an extended period of time, making it possible to record ‘holographic movies’, i.e. to record and store a succession of digital holograms. Thus, dynamic events can be observed with fine temporal and three-dimensional spatial resolution. Because various numerical and statistical analyses, digital processing and digital compensation of optical aberrations can easily be applied to reconstructed images or holograms, digital holography provides a versatile analysis tool. Digital holography has been studied extensively and implemented in a variety of applications such as topology measurement of micro-optics (Charrière et al. 2006), morphology analysis of cells (Marquet et al. 2005), refractive index measurements (Rappaz et al. 2005) and imaging of colloidal particles (McGorty et al. 2008).
As in traditional holography, both in-line (Gabor 1948) and off-axis (Leith & Upatnieks 1962; Denisyuk 1963) configurations are employed in digital holography. For particle imaging, the in-line geometry is usually preferred for the following reasons: (i) forward scattering from particles is captured, which is significantly stronger than side scattering (Thompson 1989), (ii) since particles are sparsely distributed in the volume of interest, the reference wave is unaffected, and the uniform background and halo signals do not severely deteriorate the reconstructed images, and (iii) better lateral resolution is achieved than in the off-axis configuration because higher spatial-frequency components are better preserved in reconstruction. However, controlling the intensity ratio of the object and reference waves is easier in the off-axis geometry; hence, interference fringes of high contrast and higher SNR can be achieved. The reconstructed images of particles appear differently in the in-line and off-axis geometries, and the off-axis geometry may allow the use of smaller particles at higher densities than the in-line geometry. To date, many different optical configurations have been implemented to balance SNR in HPIV (Bernal & Scherer 1993; Hussain et al. 1993; Zhang et al. 1997; Sheng et al. 2003).
A typical optical setup of in-line digital holography is shown in figure 10. A reference wave R(x,y), typically a plane or spherical wave of uniform amplitude, illuminates particles in the volume of interest. As the reference wave propagates, scattering from the particles (denoted by an object wave O(x,y)) produces spherical wavefronts. In particle imaging, particles are small and sparsely distributed, and the largest fraction of the reference wave is not affected by the particles; instead, it keeps propagating to the detector. Interference is created by the object wave and the reference wave, and is captured by the digital detector. In order to control lateral magnification, imaging optics may be placed in front of the detector.
The captured holograms contain phase information of the object waves, and digital reconstruction decodes the phase information. The hologram signal is multiplied by the reference wave to obtain the reconstructed wave, and this process is equivalent to illuminating the hologram with the reference wave in optical reconstruction. Light propagation from the reconstructed wave is then computed in the reverse fashion (Goodman 1996).
Since the typical size of the detector (NΔ in figure 10) is much smaller than the propagation distance from the particle, the paraxial approximation can be applied and the light propagation can be computed using the Fresnel diffraction formula (Goodman 1996). In most cases, the Fresnel diffraction formula is implemented in the spatial-frequency domain via the fast Fourier transform because the computation is simpler and faster, and the Fresnel kernel for various propagation distances can be pre-computed and stored in memory. We emphasize that the Fresnel approximation is applicable only in low-NA systems. If the NA is greater than approximately 0.5, then more rigorous reconstruction methods are required, such as direct evaluation of the Rayleigh–Sommerfeld diffraction integral (Zhang et al. 2006) and the Kirchhoff–Helmholtz transform (Kreuzer et al. 1992).
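As a sketch of the frequency-domain Fresnel reconstruction described above, a minimal implementation in Python/NumPy, assuming a unit-amplitude plane reference wave and a square sampling grid (function names and parameter values are illustrative):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field over a distance z using the
    Fresnel transfer function in the spatial-frequency domain. Valid
    only for low-NA (paraxial) geometries, as noted in the text."""
    n = field.shape[0]                    # assumes a square n x n grid
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies, 1/m
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel kernel; for a fixed grid it can be pre-computed per z
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(hologram, wavelength, dx, z):
    """Multiply the hologram by a unit-amplitude plane reference wave
    and back-propagate by -z to the object plane."""
    return fresnel_propagate(hologram.astype(complex), wavelength, dx, -z)
```

Since |H| = 1, propagating the field forward and then backward over the same distance recovers it exactly, which provides a simple numerical self-check of the kernel.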
Computing the Fresnel diffraction for sequential values of the propagation distance z provides the entire three-dimensional field distribution sampled at the respective planes. If the distance z coincides with the actual propagation distance of a scattering source, a crisp image of a particle appears in the reconstructed image, similar to an in-focus image in traditional photography. Digital holography can be performed in real time or off-line and achieves digital refocusing in reconstruction without any form of opto-mechanical scanning. This is a unique property of digital holography that allows for determination of particle locations in three-dimensional space, even from a single hologram.
The lateral resolution is determined by the maximum spatial frequency that can be measured. The axial resolution is relatively poor compared with the lateral resolution, and identification of the axial position requires careful estimation. As a rule of thumb, the lateral resolution is proportional to the optical wavelength and inversely proportional to the NA of the system, whereas the axial resolution is inversely proportional to (NA)², as in many free-space optical imaging systems. The actual resolution depends on the specifics of the optical system, such as aberrations and noise. The resolution depends on the space–bandwidth product and the distance to the reconstruction plane, and, furthermore, it is non-uniform across the reconstructed volume. This phenomenon is common in many cases of axial imaging and, usually, is not severe enough to merit concern, provided that the axial reconstruction range is smaller than the mean distance to the camera. The sample volume, or depth of field, also depends on the resolution and SNR. More detailed discussion of the resolution and the sample volume can be found elsewhere (Hinsch 2002; Meng et al. 2004).
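These rule-of-thumb scalings can be illustrated with a small calculation; the wavelength and NA below are illustrative values, and order-unity prefactors (which depend on the definition of resolution and on aberrations and noise, as noted above) are omitted:

```python
wavelength = 532e-9   # frequency-doubled Nd:YAG, a common PIV source, m
na = 0.1              # illustrative numerical aperture

lateral = wavelength / na       # lateral resolution ~ lambda / NA
axial = wavelength / na**2      # axial resolution ~ lambda / NA^2
# The axial estimate is a factor 1/NA coarser than the lateral one,
# consistent with the relatively poor depth resolution discussed above.
print(f"lateral ~ {lateral * 1e6:.1f} um")
print(f"axial   ~ {axial * 1e6:.1f} um")
```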
(b) In-line digital holographic particle image velocimetry
In order to obtain the 3C-3D velocity fields from holograms, different methods have been developed: particle tracking, intensity correlation (Barnhart et al. 1994), complex field correlation (or object conjugate reconstruction) (Coupland & Halliwell 1992; Barnhart et al. 2002) and tomographic reconstruction (Soria & Atkinson 2008). Particle tracking is the simplest method: from the reconstructed field or intensity, all particles are detected and tracked over different time frames so that the 3C-3D velocity fields can be measured. The overall system performance is determined by the accuracy in identifying particles and finding their locations. In correlation methods, such as intensity or complex-field cross-correlation, the 3C-3D fluid velocity fields are recovered directly from the cross- or auto-correlation of the reconstructed intensity or complex field, respectively, without detecting individual particles. In tomographic reconstruction, multiple holograms captured from different directions are used to reconstruct the three-dimensional intensity tomographically and to locate particles. Each of these methods has its own advantages, disadvantages and technical issues that depend on resolution, accuracy, sample volume, scale of experiments, size and number of particles, etc. More detailed discussion of these competing techniques can be found in Hinsch (2004). In this section, we focus on the particle-tracking method because it is intuitive and does not depend on a particular imaging implementation (e.g. holographic, tomographic, etc.).
In order to obtain the 3C-3D fluid velocity fields, after multiple holograms have been captured over time (the time interval between holograms must be known for the velocity computation), all holograms are reconstructed numerically. Then, particles are identified and their three-dimensional positions are estimated using signal and image processing, as shown in figure 11. In identifying and locating particles, it is a significant advantage that the traced particles are rotationally symmetric and their size variation is minimal. Simple pattern-matching techniques (e.g. the centroid of a circular intensity image, as in concise cross-correlation, Pu & Meng 2000) can reliably identify the particles and provide a good estimate of their lateral positions. The reconstructed three-dimensional optical field distributions are employed in estimating the axial position. To estimate the axial distance z accurately, various focus metrics have been proposed, including spectral l1 norms (Li et al. 2007), integration of the amplitude modulus (Dubois et al. 2006), decomposition of the object image with Fresnelet bases and a sharpness metric (Liebling & Unser 2004), iterative techniques (Yu & Cai 2001; Thelen et al. 2005) and particle extraction using the complex amplitude (Pan & Meng 2003).
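The depth search that these focus metrics support can be sketched as a one-dimensional sweep over the reconstruction distance. The variance-of-intensity sharpness metric used as a default below is just one simple choice among the metrics cited above, and the function names are illustrative:

```python
import numpy as np

def axial_position(reconstruct, z_candidates, metric=None):
    """Estimate a particle's axial position by reconstructing the field
    at each candidate depth z and maximizing a focus metric.
    `reconstruct(z)` must return the complex field at depth z; the
    default metric is the variance of the reconstructed intensity."""
    if metric is None:
        metric = lambda field: np.var(np.abs(field)**2)
    scores = [metric(reconstruct(z)) for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```

In practice the sweep is applied to a small window around each laterally detected particle, and the candidate spacing sets the axial quantization of the estimate.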
After all the particles are identified, particles in two neighbouring time frames should be matched to find their correspondence. At this stage, one can employ numerous techniques applied in traditional PIV. Computing the cross-correlation of images (reconstructed images in digital HPIV) is often used, in which the three-dimensional cross-correlation of subimages yields the displacement of particles. This image-based methodology is simple and intuitive and can use either the amplitude or the intensity of the signal (Wormald & Coupland 2009), but its accuracy is limited, as subimages should be properly chosen for optimal system performance. Alternatively, the coordinates of the particles can be employed to find the correct particle correspondence. There are two types of coordinate-based matching methods (Meng et al. 2004): (i) the synthesized particle-image method that computes the correlation of synthesized particles (Pereira & Gharib 2002) and (ii) direct coordinate-based methods pairing particles with genetic algorithms (Sheng & Meng 1997), concise cross-correlation (Pu & Meng 2000) or expectation minimization (Stellmacher & Obermayer 2000). Each method has different requirements and performance in terms of computation speed, memory and reliability. Prior knowledge of the flow properties, e.g. flow direction, speed and geometry of the flow channels, can improve the accuracy and speed of the particle-matching process significantly. In practice, the choice of a particle-matching method depends on the flow characteristics.
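A minimal direct coordinate-based scheme, greedy nearest-neighbour pairing constrained by a maximum-displacement prior of the kind mentioned above, can be sketched as follows. This is far simpler than the published algorithms cited in the text, and the function name and array layout are illustrative:

```python
import numpy as np

def match_and_velocity(p0, p1, dt, max_disp):
    """Greedily pair particle coordinates from two consecutive frames
    and return one velocity vector per matched pair.
    p0, p1 : (N, 3) arrays of 3D particle positions
    dt     : inter-frame time
    max_disp : reject pairs farther apart than this (a crude prior
               on the flow speed, as discussed in the text)."""
    used = set()
    velocities = []
    for a in p0:
        d = np.linalg.norm(p1 - a, axis=1)
        d[list(used)] = np.inf       # each frame-1 particle used once
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            used.add(j)
            velocities.append((p1[j] - a) / dt)   # displacement / dt
    return np.array(velocities)
```

The final division of displacement by the inter-frame time yields the velocity vectors directly, so the same routine produces the flow-velocity map once the matching succeeds.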
In order to obtain the velocities of the particles and, hence, the three-dimensional flow-velocity map, the displacement of each particle is divided by the time difference between two adjacent time frames (figures 12 and 13). Figure 12 shows the first HPIV measurement of the volumetric three-dimensional velocity distribution, performed by Barnhart et al. (1994). The holograms of particles in a 25.4×25.4×60 mm³ volume in a pipe channel flow (with a mean velocity of 0.8 m s⁻¹) were recorded using two consecutive pulses of a Nd:YAG laser and reconstructed using the phase-conjugate approach. More than 400 000 three-dimensional velocity vectors were extracted from the measurement volume. Figure 13 presents a three-dimensional velocity map in the vicinity of a laminar jet measured with HPIV (Meng et al. 2004).
In-line digital holography is advantageous in particle imaging since, with a relatively simple setup, complete four-dimensional information can be obtained with high spatial and temporal resolution. Unlike conventional PIV (Adrian 2005), which relies on the weak scattered signal from the particles, HPIV captures the entire illumination beam at the camera. This allows for the use of low-power lasers and thus results in both less intrusive diagnostics and a significant cost reduction of the experiments. In the next section, we introduce a new multi-exposure technique that enables a further increase in the data-acquisition rate for particle imaging.
(c) Multi-exposure digital holography
In digital holography, the measurement speed is limited mainly by three parameters: the camera frame rate, the numerical reconstruction speed and the computation time for identifying and matching particles. Reconstruction and computation speed can be improved significantly by using multiple processors or a specialized graphics processing unit (Shimobaba et al. 2008). In order to acquire data at speeds beyond the camera frame-rate limit, multi-exposure digital holographic imaging (MEDHI) was developed (Dominguez-Caballero et al. 2007). In MEDHI, the reference wave is modulated into short laser pulses at a rate faster than the frame rate of the camera. MEDHI takes multiple snapshots and encodes them into a single hologram. During the integration time, multiple laser pulses illuminate particles in the volume of interest, and multiple holograms are incoherently added and recorded in a single frame. In reconstruction, the trajectory of a particle during the integration time is obtained from a single hologram (figure 14). MEDHI is particularly useful in high-speed HPIV, although the allowable particle density in the flow is somewhat limited, and sophisticated particle-matching algorithms are required.
As a concluding remark, we note that digital holography is still an interferometric measurement method. This means that phase variation of the wavefront, e.g. refractive-index change of flow, can, in principle, be measured directly and visualized by digital holography without using any particles by employing optical tomographic techniques (Wolf 1969, 1970; Berry & Gibbs 1970) to reconstruct the three-dimensional structure of the flow with sufficient contrast. This interferometric measurement approach can be potentially useful for studies of interfacial dynamics or capillary turbulence (surface water waves).
(d) Multiplexed volume holographic imaging
Volume holograms exhibit Bragg selectivity that forms the basis of operation of holographic memories. Recently, a scheme of VHI was proposed, in which Bragg selectivity is instead used for imaging (Sinha et al. 2004). In VHI, the point-spread function of the imaging system is engineered such that multi-dimensional information can be extracted efficiently. VHI has been demonstrated in different imaging modes and platforms such as a telescope (Sinha & Barbastathis 2004), microscope (Barbastathis et al. 1999; Luo et al. 2008), profilometer (Sinha & Barbastathis 2003), a hyperspectral imager (Liu et al. 2004; Sun et al. 2005) and a passive monochromatic binary depth-discrimination system (Oh & Barbastathis 2008).
Figure 15 shows the principle of VHI in a depth-selective imaging mode for a point source; for example, a PIV particle. In the recording process (figure 15a), a volume hologram is recorded by two mutually coherent plane waves. Then, the hologram is inserted at the Fourier plane of a traditional 4-f (telescopic) imaging system and behaves as a wavefront-selective device. If a point source is located at the focal plane of the objective lens (figure 15b), a spherical wave emitted from the point source becomes identical to the wavefront that had been used during the recording process. Strong Bragg diffraction then appears because the phase of an incident wavefront, as it propagates, perfectly matches the phase of the stored wavefront. If a point source is defocused (figure 15c), then the light incident on the volume hologram is not a plane wave; Bragg diffraction is dramatically attenuated. This particular configuration of VHI converts the defocus of a point source into intensity contrast (Sinha et al. 2004).
Owing to the Bragg selectivity properties of the volumetric holograms, multiple holograms can be recorded in the same location of the holographic material and be retrieved independently. Figure 16 shows the procedure of multiplexing two holograms in a VHI process. At the first exposure, a volumetric hologram is recorded in a standard geometry, as shown in figure 15a. At the second exposure, the point source is shifted longitudinally, and the signal wave is rotated to avoid overlap of images at the detector. In the imaging mode, the two fields of view located at two distinct depth planes are Bragg-matched with the multiplexed volume holograms, and two strong diffraction signals are produced and are laterally arranged on the image plane. Since all multiplexed holograms are optically reconstructed, the multiplexed VHI provides longitudinally spaced multiple fields of view at different depths. Since the diffraction efficiency is inversely proportional to the square of the number of multiplexed holograms (e.g. Coufal et al. 2000), careful tuning of the recording parameters is required to ensure that all the multiplexed holograms have sufficient diffraction efficiencies.
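The 1/M² efficiency scaling can be made concrete with a small calculation; the single-hologram efficiency below is an assumed illustrative value, not a measured property of any particular material:

```python
# Per-hologram diffraction efficiency falls as 1/M^2 with the number M
# of multiplexed holograms (Coufal et al. 2000); eta1 is an assumed
# illustrative single-hologram efficiency.
eta1 = 0.5
for m in (1, 2, 5):
    print(f"M = {m}: efficiency ~ {eta1 / m**2:.3f}")
```

Even a handful of multiplexed depth planes therefore costs more than an order of magnitude in signal per plane, which is why the recording exposures must be tuned carefully.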
Figure 17 shows images of fluorescent particles in a water tank captured by the multiplexed VHI system presented in figure 16; three depth slices are imaged simultaneously (Liu et al. 2004). Recently, up to five multiplexed volume holograms have been used in a microscopy system (Luo et al. 2008).
Stacking the multiple images reconstructs three-dimensional information on the particle positions, and no numerical reconstruction is required. Once the three-dimensional data from the multiple slices are recovered, particle identification and matching can be implemented. The optical systems of a traditional two-dimensional PIV system can be used in conjunction with the multiplexed volume hologram. Using VHI, a quasi-three-dimensional PIV, i.e. PIV with a multitude of conventional two-dimensional measurement planes, can be implemented.
5. Fluid-dynamics experiments employing new technologies and diagnostics
Quantification of realistic turbulent processes requires non-intrusive diagnostics of fast events with ultra-high performance in space–time resolution, bandwidth and data-acquisition rate, and calls for elaboration of integrated experimental systems that incorporate the state-of-the-art in optical imaging, digital signal processing, image processing, precision mechanics, motion control and material science. Employing experimental and metrological capabilities of the holographic technology, one can potentially implement and accurately measure high-Reynolds-number turbulent flows in a relatively small laboratory form-factor and provide data suitable for a direct comparison with large-scale numerical simulations. Appreciating that the future is subject to change, we briefly discuss here some opportunities for improvement of precision, accuracy, dynamic range, reproducibility and information capacity of the experimental dataset for turbulence in rotating fluids and for Rayleigh–Taylor (RT) turbulent mixing.
(a) Turbulence in rotating flows
The understanding of turbulence in rotating systems is of fundamental importance in many instances in engineering (e.g. turbo-machinery), geophysics (e.g. ocean circulation and hurricanes) and astrophysics (e.g. star formation) (Lathrop et al. 1992; Frisch 1995; Baroud et al. 2002). The problems are often unsteady and anisotropic, and, in relation to what we know about homogeneous and isotropic turbulence, call for new techniques of data analysis. While many studies of rotating flows exist, there are relatively few that examine fundamental issues. For instance, the conventional wisdom is that the only effect of rotation on decaying grid turbulence is to slow down its decay rate while preserving self-similarity. In reality, in any confined system that is created in a laboratory, there exist inertial wave modes of the container, in addition to the turbulent motion. These modes are relatively well defined and discrete. The extent to which the modes interact with turbulence is presently unclear. Understanding this interaction is a basic problem in any laboratory study, and characterizing its spatial structure is an arduous but interesting task.
The two dimensionless parameters characterizing rotating flows are the Reynolds number and the Rossby number. The Reynolds number is Re = ΩR²/ν, where Ω is the rotation rate, R is the radius of the chamber and ν is the fluid kinematic viscosity; it is about 10⁷–10⁸ in realistic turbulent flows. The Rossby number, U/(ΩL), is inversely proportional to the rate of rotation, and it appears to be between 0.01 and 0.1 for typical large-scale geophysical flows (Pedlosky 1987). Most experiments do not achieve such small Rossby numbers (with the exception of the liquid helium experiments of Bewley et al. (2007)) because high rotation rates are hard to implement owing to limitations of currently employed technology. Furthermore, the data obtained are relatively primitive.
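For concreteness, the two parameters can be evaluated with laboratory scales of the kind discussed in this section (a 20 cm water-filled chamber rotating at approximately 10³ rad s⁻¹); the velocity and length scales entering the Rossby number are illustrative:

```python
nu = 1.0e-6          # kinematic viscosity of water, m^2 s^-1
R = 0.20             # chamber radius, m
omega = 1.0e3        # rotation rate, rad s^-1 (approx. 10^4 r.p.m.)

Re = omega * R**2 / nu     # ~4e7, in the targeted 10^7-10^8 range
U, L = 1.0, R              # illustrative large-scale velocity and length
Ro = U / (omega * L)       # ~5e-3, a low-Rossby-number regime
print(f"Re = {Re:.2e}, Ro = {Ro:.1e}")
```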
Utilizing the experimental capabilities of holographic technology, digital motion control and precision mechanics, one can achieve conditions of high Reynolds number (up to 10⁷ for water), high rotation rate (up to 10 000 r.p.m., i.e. approx. 10³ rad s⁻¹), low Rossby number and prescribed time-varying accelerations in a laboratory-scale platform with a chamber radius of only 20 cm (Orlov & Abarzhi 2007). Even higher Reynolds numbers can be envisaged if fluids of low kinematic viscosity are employed. The experimental and flow-diagnostics parameters derived from the parameters of the holographic platform described in §3 (Orlov et al. 2004) are summarized in table 1.
The HPIV based on the in-line digital holography or volume HPIV based on VHI can be employed for complete three-dimensional flow visualization. Small-size (diameter from 0.1 to a few microns) neutrally buoyant particles can be added to the flow, and the chamber can be illuminated with laser pulses at specific times and imaged onto the high-speed digital camera using a high-resolution high-NA optical system. High spatial (approx. 10 μm over 10 cm area) and temporal (less than 5 ns) resolution of the system may allow for mapping the flow-velocity fields with extreme accuracy and high data rate (more than 1000 flow images per second), thus providing statistical evaluation of the turbulent-flow quantities.
(b) Rayleigh–Taylor instability and turbulent mixing
Turbulent mixing induced by the RT instability plays a key role in a wide variety of natural phenomena, ranging from astrophysical to micro-scales, including inertial confinement fusion, flows in the atmosphere and ocean, explosions of supernovae and the formation of the Universe, as well as in industrial applications in aeronautics and optical telecommunications (Rayleigh 1882; Davies & Taylor 1950; Abarzhi 2008). RT mixing can be considered a sample case of a non-equilibrium turbulent process, whose fundamental scaling, spectral and statistical properties depart substantially from the classical scenario. For instance, it is unclear whether the concepts of a non-dissipative energy cascade, the existence of an inertial interval and the k^(−5/3) velocity spectrum (which are compatible with the time and scale invariance of the rate of energy dissipation in isotropic homogeneous turbulence, Kolmogorov 1941a,b) are applicable to an accelerating RT flow (where the rate of energy dissipation is statistically unsteady; e.g. Abarzhi et al. 2005, 2007). Trustworthy guidance is required from experiments to quantify the transport of mass, momentum and energy in this non-equilibrium turbulent process and to extend our knowledge of turbulent flows well beyond the idealized isotropic consideration.
RT flows are rather difficult to implement and study in a laboratory environment; nevertheless, the progress achieved over recent decades in experimental studies of the RT instability has been significant (e.g. Read 1984; Dalziel et al. 1999; Dimonte & Schneider 2000; Waddell et al. 2001; Kucherenko et al. 2003; Ramaprabhu & Andrews 2003; Dimonte et al. 2004; Meshkov 2006). To quantify the scaling, structure and statistics of RT flows, one requires the ability to measure, with high accuracy and precision both in space and in time, the multiple time-dependent scales in a single experimental run, and to augment this with high reproducibility of the acceleration history and accurate control of the initial and boundary conditions and other macroscopic parameters (Abarzhi 2008).
The experimental capabilities in precision mechanics, motion control, high-resolution imaging, digital holography and data-analysis techniques can be integrated into a new experimental platform for studying RT instabilities and turbulent mixing (table 2). HPIV based on digital in-line holography, combined with PLIF, can allow direct observation and mapping of the three-dimensional velocity and density fields at a high data-acquisition rate (more than 1000 fps) over a large spatial area with high spatial (approx. 10 μm) and temporal (better than 5 ns) resolution. High-NA imaging optics may yield high light-collection efficiency and, hence, high SNR with moderate-power lasers (approx. 1 W at 1 kHz), thus providing non-invasive and accurate diagnostics of the flow parameters.
6. Summary and conclusions
The state-of-the-art techniques in motion control, electronics, digital signal processing and optical imaging may allow for realization of turbulent flows with very high Reynolds number (10⁷ and higher) in relatively small laboratory-scale experiments, and the quantification of their properties with high spatio-temporal resolution and bandwidth. These experimental and metrological capabilities could be used for simultaneous, three-dimensional, diagnostics of spatial and temporal properties concerning the transport of momentum, angular momentum and energy, as well as the identification of scaling, invariants and statistical properties of the complex unsteady turbulent flows. The new technologies can be used to investigate a large variety of hydrodynamic problems of non-Kolmogorov turbulence, accelerating and rotating flows, turbulent boundary layers, RT instability and turbulent mixing.
The development of the holographic storage system at Stanford was supported by the Defense Advanced Research Projects Agency (Dr L. N. Durvasula) and the International Storage Industry Consortium (INSIC) within the Holographic Data Storage Systems (HDSS) and Photorefractive Information Storage Materials (PRISM) university–industry consortia. S.S.O. gratefully acknowledges technical contributions of and valuable discussion with members of the HDSS and PRISM consortia, including R. T. Ingwall and D. A. Waldman of Aprilis, Inc., B. V. K. V. Kumar and V. Vadde of Carnegie-Mellon University, M. O’Callaghan of Displaytech, Inc. (Boulder, CO, USA), G. W. Burr, H. J. Coufal, C. M. Jefferson, B. Marcus, R. M. Shelby and G. T. Sincerbox of IBM Almaden (San Jose, CA, USA), J. Sanford of IBM Yorktown, B. H. Schechtman of INSIC, D. G. Koch and K. P. Thompson of Optical Research Associates (Pasadena, CA, USA), J. Hong and J. Ma of Rockwell International (Thousand Oaks, CA, USA), A. Daiber, M. McDonald and R. Okas of Siros Technologies (San Jose, CA, USA), W. Phillips, E. Bjornson, F. I. Dimov and X. Li (formerly of Stanford University), R. K. Kostuk and M. Neifeld of University of Arizona and, in particular, L. Hesselink of Stanford University who served as a coprincipal investigator of both DARPA/INSIC consortia. At MIT, the work on the digital holography and on the three-dimensional imaging was supported by the Defense Advanced Research Projects Agency (USA), the National Institutes of Health (USA) and the National Research Foundation (NRF) through the Center for Environmental Sensing and Modeling (CENSAM) of the Singapore–MIT Alliance for Research and Technology (SMART) Centre. S.B.O. and G.B. gratefully acknowledge José A. Domínguez-Caballero, Nick Loomis, Laura A. Waller, Lei Tian, Jerome H. Milgram, Cabell S. Davis, Jason Dahl and Michael S. Triantafyllou for valuable discussions and suggestions. 
At the University of Chicago, this work has been partially supported by the US Department of Energy and the US Department of Defense. S.I.A. expresses her gratitude to L. P. Kadanoff and R. Rosner for discussions.
1 In a PIV system, the effective NA can be estimated as D/2L, where D is the diameter of the lens entrance pupil and L is the distance from the light sheet to the camera. For the parameters of a typical PIV setup employed in, for example, a wind tunnel, D∼5 cm and L∼100 cm, the NA would be approximately 0.025.
One contribution of 13 to a Theme Issue ‘Turbulent mixing and beyond’.
© 2010 The Royal Society