A review of a procedure for the simulation of time-dependent, inviscid and turbulent viscous, compressible flows involving geometries that change in time is presented. The adopted discretization technique employs unstructured meshes and both explicit and implicit time-stepping schemes. A dual time-stepping procedure and an ALE formulation enable flows involving moving boundary components to be included. Techniques that have been developed to maintain the validity of the unstructured mesh and to allow for the capture of moving flow features are also reviewed. Examples employing the in-house developed techniques are included to demonstrate the use of the approach for the simulation of a number of flows of practical industrial interest.
There have been significant developments in unstructured mesh-based procedures for the solution of problems involving high-speed compressible flows. The principal advantages of this approach are well known and centre, mainly, on the observation that it provides a powerful tool for the discretization of domains of complex shape. An additional feature is that adaptive mesh procedures can be readily implemented, allowing the quality of the solution to be enhanced. Solutions of the compressible Euler equations are normally smooth over large areas of the computational domain, but often exhibit steep gradients in localized portions of the flow. In transient simulations, these localized regions will generally move through the computational domain and may sweep across significant areas, e.g. flows involving propagating and interacting shock waves. In this case, a globally fine mesh will be necessary to provide the required resolution, unless some form of mesh adaptivity is used. It follows that the use of adaptivity, with the possibility for local mesh refinement and coarsening, offers the potential for considerable computational savings when transient flows are simulated. These savings will only be realized, however, if the adaptivity is accomplished in a computationally efficient manner, as many mesh adaptation steps may be required.
Problems of industrial and academic relevance frequently involve flows around geometries that change with time. In both the military and civilian aircraft industries, the simulation of flows involving multiple bodies in relative motion is of great interest, e.g. the release of stores from aircraft or formation flying, such as air refuelling. Manoeuvring aircraft not only experience geometry changes through the deployment of control surfaces but also through aeroelastic effects, which may change the aerodynamic characteristics significantly. Such problems may involve motion that is essentially known a priori or motion that is directly coupled with the flow field and dynamics of the problem. Some problems may be well approximated by simulating the geometry changes as the movement of two or more rigid bodies, while others will require significant deformation of components. Such transient flows present additional computational difficulties, as the mesh must be modified during the computation, to accommodate the changes in the geometry.
Although the literature on this subject describes a number of different approaches for handling problems of this type (Dougherty et al. 1985; Löhner 1987; Bunning et al. 1988; Peraire et al. 1988; Batina 1990; Löhner & Baum 1990; Morgan et al. 1991a,b; Venkatakrishnan & Mavriplis 1996; Koobus & Farhat 1999; Hassan et al. 2000), two approaches are particularly favoured. In the chimera method (Steger et al. 1983; Nakahashi et al. 2000), each separate body is provided with a dedicated mesh. The solutions on these meshes are then numerically combined during the calculation and both structured (Rogers et al. 2003) and unstructured (Löhner et al. 2001; Nakahashi et al. 2003) grids have been employed. For practical use, the approach normally requires fully automatic hole-cutting procedures, with the meshes combined automatically following each geometry change. The approach is robust and fast, though experience is often needed to ensure that adequate numerical resolution is maintained throughout the entire simulation, and it works well for problems involving large relative motion of components. Small component deformation may be accommodated by coupling with a mesh deformation approach. However, for problems involving large deformation of components, which cannot be broken down into smaller entities in relative motion, the method is generally only applicable in combination with other approaches. The other approach, which is the subject of this paper, involves the use of a single, consistent, unstructured mesh for each time-step. With time, this mesh will need to be adapted, to allow for both the changing geometry and the accurate tracking of high-gradient features in the solution. The paper reviews the technologies used in the second approach and, in particular, the innovative in-house developments carried out over the past two decades for the simulation of unsteady flow with moving boundary components.
2. Problem formulation
With the development of an arbitrary Lagrangian–Eulerian (ALE) solution procedure (Donéa 1983) in mind, the time-dependent, compressible Navier–Stokes equations are considered in the integral form

(2.1)   \frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega} U \,\mathrm{d}\Omega + \int_{\partial\Omega} (F_j - v_j U)\, n_j \,\mathrm{d}\Gamma = \int_{\partial\Omega} G_j n_j \,\mathrm{d}\Gamma

over a time-dependent three-dimensional domain, Ω, with surface ∂Ω. Here, n_j is the component of the outward unit normal vector to ∂Ω in the direction x_j of a Cartesian coordinate system, while the unknown vector U and the inviscid and viscous fluxes, F_j and G_j, are given by

(2.2)   U = \begin{pmatrix} \rho \\ \rho u_i \\ \rho\epsilon \end{pmatrix}, \qquad F_j = \begin{pmatrix} \rho u_j \\ \rho u_i u_j + p\,\delta_{ij} \\ u_j(\rho\epsilon + p) \end{pmatrix}, \qquad G_j = \begin{pmatrix} 0 \\ \tau_{ij} \\ u_k\tau_{kj} - q_j \end{pmatrix}

In these expressions, ρ denotes the fluid density; u_i, the component of the velocity vector in direction x_i; p the pressure; and ϵ the specific total energy. In addition, τ_ij are the components of the deviatoric stress tensor; q_j is the heat flux; and v_j is the velocity of the control volume boundary. The system is closed by assuming the gas to be calorically perfect. For turbulent calculations, the Spalart–Allmaras one equation model (Spalart et al. 1992) is also employed.
The above method of formulating the problem is not unique and many schemes use the differential form of the Navier–Stokes equations as the starting point for their discretization procedures (Löhner 1987; Peraire et al. 1987).
3. Solution procedure
(a) Hybrid unstructured mesh generation
For viscous problems, the computational domain will be represented by an unstructured hybrid mesh. The process of mesh generation begins with the discretization of the domain boundary into a surface mesh of triangular or quadrilateral elements that satisfies a user-specified mesh control function (Peiró et al. 1994). A background mesh as well as point, line and planar sources can be used to define the control function that specifies the distribution of the mesh parameters (Peraire et al. 1990), such as spacing, stretching and the direction of stretching. For viscous flows, the boundary layers are generated using the advancing layer method (Hassan et al. 1996) and the remainder of the computational domain is filled with tetrahedral elements using incremental Delaunay (Bowyer–Watson) point insertion, with automatic point generation (Weatherill & Hassan 1994). Hybrid unstructured meshes are constructed by merging certain elements of this tetrahedral mesh in the boundary layers.
(b) Spatial discretization
The discretization of the governing equations can be achieved in a number of different ways. A popular and computationally efficient approach is to employ a cell vertex finite volume method. This requires the identification of a dual mesh, with the median dual being constructed as an assembly of triangular facets Γ, where each facet is formed by connecting edge midpoints, element centroids and face centroids in the basic mesh, in such a way that only one node is contained within each dual mesh cell. The dual mesh cells then form the control volumes for the finite volume process. When hybrid meshes are employed, the method for constructing the median dual has to be modified to ensure that no node lies outside its corresponding control volume. To perform the numerical integration of the fluxes, a set of coefficients is calculated for each edge using the dual mesh segment associated with the edge. The values of the internal edge coefficients, C_j^{IJ}, and the boundary edge coefficients, D_j^{IJ}, are given by the formula

(3.1)   C_j^{IJ} = \sum_{\Gamma \in \Gamma_{IJ}} A_\Gamma\, n_j^\Gamma, \qquad D_j^{IJ} = \sum_{\tilde{\Gamma} \in \tilde{\Gamma}_{IJ}} A_{\tilde{\Gamma}}\, \tilde{n}_j^{\tilde{\Gamma}}

where A_Γ is the area of facet Γ; n_j^Γ is the outward unit normal vector of the facet; Γ_{IJ} is the set of dual mesh faces touching the edge between nodes I and J; \tilde{Γ}_{IJ} is the corresponding set of dual mesh faces on the computational boundary; and \tilde{n}_j^{\tilde{Γ}} denotes the normal of the facet in the outward direction of the computational domain. The numerical integration of the fluxes over the dual mesh segment associated with an edge is performed by assuming the flux to be constant and equal to its approximated value at the midpoint of the edge, i.e. a form of midpoint quadrature. The calculation of the surface integral of the inviscid flux over the control volume surface for node I is then given by the formula

(3.2)   \int_{\partial\Omega_I} F_j n_j \,\mathrm{d}\Gamma \approx \sum_{J \in \Lambda_I} C_j^{IJ} F_j^{IJ} + \sum_{J \in \tilde{\Lambda}_I} D_j^{IJ} F_j^{IJ}

where F_j^{IJ} denotes the flux evaluated at the midpoint of the edge between nodes I and J; Λ_I denotes the set of nodes connected to node I by an edge; and \tilde{Λ}_I denotes the set of nodes connected to node I by an edge on the computational boundary. The last term, thus, appears only if I is a boundary node. A similar formula can be applied for the viscous fluxes.
Edge-based data structures are now widely used because of their efficiency, in terms of memory and CPU requirements, compared with traditional element-based data structures, particularly in three dimensions.
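As an illustration, the edge-based accumulation of the inviscid flux integral in equation (3.2) can be sketched as follows. This is a minimal sketch for a scalar conservation law over the interior edges only; the function and array names are illustrative and boundary edge contributions are omitted.

```python
import numpy as np

def accumulate_fluxes(edges, coeffs, flux, n_nodes):
    """Edge-based accumulation of the inviscid flux integral (cf. equation (3.2)).

    edges   : list of (I, J) node pairs, one per interior edge
    coeffs  : (n_edges, 3) edge coefficients C_j^IJ, one component per direction x_j
    flux    : (n_nodes, 3) flux components F_j evaluated at each node
    Returns the assembled flux residual at every node.
    """
    residual = np.zeros(n_nodes)
    for (I, J), C in zip(edges, coeffs):
        # Midpoint quadrature: the flux on the dual segment is taken as the
        # average of the two nodal values, i.e. its value at the edge midpoint.
        f_mid = 0.5 * (flux[I] + flux[J])
        contrib = np.dot(C, f_mid)   # C_j^IJ F_j at the edge midpoint
        residual[I] += contrib       # the same edge coefficient serves node I ...
        residual[J] -= contrib       # ... and, with opposite sign, node J
    return residual
```

The single loop over edges, with each edge contributing to its two nodes with opposite signs, is what gives the edge-based structure its memory and CPU advantage over element-based assembly.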
The resulting discretizations are essentially central difference in character and the addition of a stabilizing dissipation is required for practical flow simulations. This is achieved by replacing the physical flux function by a consistent numerical flux function, such as a Lax–Wendroff flux function (Morgan et al. 1991a,b) or a JST flux function (Jameson et al. 1981). Discontinuity capturing may be achieved by the use of an additional harmonic term in regions of high pressure gradient, identified using a pressure switch, or by the application of high-resolution concepts, such as FCT (Boris & Book 1973; Zalesak 1979) or LED (Jameson 1995).
At node I of the mesh, the resulting semi-discrete equation system can then be expressed in the form

(3.3)   \frac{\mathrm{d}}{\mathrm{d}t}(V_I U_I) = -R_I

where V_I is the volume of the dual mesh cell Ω_I and R_I denotes the assembled nodal residual of the flux integrals of the form of equation (3.2).
Alternative spatial discretization, achieved by employing a Galerkin finite element approximation (Morgan et al. 1991a,b; Luo et al. 1993), the streamline upwind Petrov–Galerkin method (Brooks & Hughes 1982) or the Galerkin least squares method (Hughes et al. 1989), has also been successfully used for the solution of these equations.
(c) Time discretization
(i) Explicit schemes
Equation (3.3) can be integrated in time using a standard multistage explicit time-stepping procedure that is of sufficient accuracy to ensure that the geometric conservation law is satisfied. Explicit schemes, deemed to be too slow for obtaining steady-state solutions, may be the schemes of choice for certain unsteady applications, when the time-scales of interest are small or, more precisely, when they are comparable to the spatial scales. The grids should be clustered only in regions of interest; otherwise the size of the explicit time-step can become unnecessarily small. To maintain time accuracy, the solution at every grid point must be advanced using the smallest allowable time-step. The constraint of using a single time-step throughout the computational domain can be removed by using a time-accurate residual averaging formulation (Lerat et al. 1985; Venkatakrishnan & Jameson 1988). Temporal adaptation techniques (Kleb et al. 1992), based on domain decomposition, enable different cells to be advanced with different numbers of local time-steps to reach a particular time level and two such methods are reviewed here.
(ii) Overlapping domain decomposition
This algorithm was introduced in the context of steady flow simulations on unstructured triangular meshes (Löhner et al. 1984). Its extension will be outlined for use with an algorithm employing a Lax–Wendroff flux function and forward Euler time-stepping (Morgan et al. 1991a,b; Hassan et al. 2000). Firstly, the allowable time-step Δt_I for each node I is computed. The allowable time-step Δt_e for each edge e is the minimum of the time-steps associated with the two nodes of edge e. Using the minimum time-step Δt_min allowed on the mesh edges, the edges are sorted into regions such that region k consists of all edges e for which

(3.4)   2^{k-1} Δt_min ≤ Δt_e < 2^k Δt_min

Regrouping may be necessary if the edges surrounding any node have a group index difference greater than one. The boundary nodes of each region are determined and used to add two overlapping layers of edges between adjacent regions. A global time-step of 2^{m-1} Δt_min will result in a mesh with m regions. For maximum efficiency, the number of regions to be employed should be recomputed each time a new decomposition is performed. The number of regions used at any stage is determined by a rough estimate of the operations involved in the time-stepping process. This argument is, of course, based purely on computational effort and does not consider any accuracy benefits that might result from advancing each edge near its optimum time-step.
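The grouping rule of equation (3.4) amounts to a base-2 binning of the edge time-steps. A minimal sketch (illustrative names; the regrouping and overlap-layer steps are omitted):

```python
import numpy as np

def group_edges(dt_edges):
    """Sort edges into time-step regions following equation (3.4):
    region k holds every edge e with 2**(k-1)*dt_min <= dt_e < 2**k*dt_min.
    Returns a 1-based region index per edge."""
    dt_edges = np.asarray(dt_edges, dtype=float)
    dt_min = dt_edges.min()
    # k = floor(log2(dt_e / dt_min)) + 1; the edge with dt_e = dt_min gets k = 1
    k = np.floor(np.log2(dt_edges / dt_min)).astype(int) + 1
    return np.maximum(k, 1)
```

An edge whose allowable time-step sits exactly on a power-of-two boundary falls into the higher region, consistent with the strict upper bound in (3.4).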
(iii) Non-overlapping domain decomposition
The use of the second algorithm is described in conjunction with the JST flux function and multistage explicit time-stepping of equation (3.3). According to a linear stability analysis, the allowable Courant number for this multistage scheme increases with the number of intermediate stages used. This suggests the possibility of selecting a Courant number at the outset and then using a large number of stages to advance the solution at nodes allowing a small time-step size, while nodes allowing a larger time-step could be advanced with a lower number of intermediate stages. In this case, each residual evaluation requires that the solution at neighbouring nodes be available at the same time level. However, for nonlinear equations, multistage schemes are most efficient when a moderate number of intermediate stages are used, e.g. between three and five. This suggests that a better approach is to employ a sequence of complete multistage cycles locally to advance the solution. To ensure accuracy in time, the sequences and intermediate stages have to be synchronized. Synchronization and second-order accuracy in time can be guaranteed by appropriate choice of coefficients (Vilsmeier & Hanel 1995; Hassan et al. 2001).
The method starts by computing the allowable minimum edge time-step Δt_min and grouping the edges into m regions, as described above. The solution is advanced one global time-step using the following procedure: (i) evaluate the residual for all domains, using U^n at time t = t_n; (ii) loop (j = 1, 2, …) over the number of steps required to perform one global time-step; (iii) loop over the number of stages (i = 1, …, n_s); (iv) determine the number of domains k that can be advanced at this stage and, using the previous residual evaluations for all domains up to k+1, update the solution to the current time level [(j−1) + (1/2)^{n_s−i}]Δt_min before evaluating the new residual for domain k; and (v) at the end of the stage loop, store the solution for all domains k which satisfy mod(j, 2^{k−1}) = 0. It can be noted that, with this method, time accuracy is maintained without overlapping the domains.
To demonstrate the efficiency improvements that can be achieved using these schemes, the simulation of unsteady flow over an oscillating B60 aircraft configuration, consisting of wings, fuselage, pylons and powered engines, at a Mach number of 0.801 and an angle of attack of 2.78 degrees can be considered (Hassan et al. 2000). The wing, pylon and engines oscillate sinusoidally with a motion that is described in terms of the pitch and heave at specified sections, with the value of these parameters at intermediate points obtained by linear interpolation. The initial mesh has 745 198 elements and 135 760 points. In this case, the non-overlapping domain decomposition method achieves a speed-up of 6.27 with four regions, while the overlapping method achieves a speed-up of 4.37. This difference is believed to be due to the fact that a significant amount of domain overlapping is necessary to deal with the element size distribution that is produced by the requirement of accurate flow modelling for such a complex configuration.
(iv) Implicit schemes and geometric conservation law
When dealing with many low-frequency phenomena, such as flutter, explicit schemes often lead to large computing times. Therefore, it is desirable to develop a fully implicit method in which the time-step is dictated by the flow physics and is not limited by the cell sizes. At a general node I, the time derivative term in equation (3.3) may be approximated, using a first-order scheme, as

(3.5)   \frac{\mathrm{d}}{\mathrm{d}t}(V_I U_I) \approx \frac{(V_I U_I)^{n+1} - (V_I U_I)^{n}}{\Delta t}

or with the three-level second-order approximation

(3.6)   \frac{\mathrm{d}}{\mathrm{d}t}(V_I U_I) \approx \frac{3(V_I U_I)^{n+1} - 4(V_I U_I)^{n} + (V_I U_I)^{n-1}}{2\Delta t}

In these expressions, the superscript n refers to an evaluation at time level t = t_n; the time levels are taken to be equally spaced with a time-step Δt; and V_I is the volume of Ω_I.
An important property of any numerical simulation technique for transient problems on moving meshes is that of geometric conservation (Thomas & Lombard 1979; Koobus & Farhat 1999). This property requires that, if the unknown field is constant, the numerical solution should not change in time. Mathematically, this implies that the change of each control volume must balance the volume swept by its moving boundary, i.e.

(3.7)   \frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega_I} \mathrm{d}\Omega = \int_{\partial\Omega_I} v_j n_j \,\mathrm{d}\Gamma

For a numerical scheme, the integral in equation (2.1) involving the velocity of the control volume interface is performed in the usual way of summing the edge contributions. In this case, additional numerical edge coefficients S_{IJ} and T_{IJ} are introduced such that

(3.8)   \int_{\partial\Omega_I} v_j n_j U \,\mathrm{d}\Gamma \approx \sum_{J \in \Lambda_I} S_{IJ} U^{IJ} + \sum_{J \in \tilde{\Lambda}_I} T_{IJ} U^{IJ}

where U^{IJ} denotes the value of the unknown at the midpoint of the edge between nodes I and J. The task of determining the edge coefficients in equation (3.8) so that the numerical scheme is geometrically conservative is not trivial and has been treated by several authors (Thomas & Lombard 1979; Nkonga & Guillard 1994; Koobus & Farhat 1999). The approach of Nkonga & Guillard can be adopted and extended to dual meshes constructed with assemblies of triangular facets. The key point is the assumption that each node in the mesh moves in a linear fashion between time levels t_n and t_{n+1}. Under this assumption, the movement of a triangular facet can be easily described and it can be shown (Nkonga & Guillard 1994; Sørensen et al. 2003) that the formulation of a geometrically conservative scheme for a hybrid mesh is achieved by setting

(3.9)   S_{IJ} = \sum_{\Gamma \in \Gamma_{IJ}} \bar{A}_\Gamma\, \bar{v}_j^\Gamma \bar{n}_j^\Gamma, \qquad T_{IJ} = \sum_{\tilde{\Gamma} \in \tilde{\Gamma}_{IJ}} \bar{A}_{\tilde{\Gamma}}\, \bar{v}_j^{\tilde{\Gamma}} \bar{n}_j^{\tilde{\Gamma}}

where \bar{v}_j^\Gamma is an averaged facet velocity; \bar{A}_\Gamma is the averaged facet area; \bar{n}_j^\Gamma is the average normal between time levels n and n+1; and Γ_{IJ} and \tilde{Γ}_{IJ} are the set of dual mesh faces, and the set of dual mesh faces on the computational boundary, respectively, touching the edge between nodes I and J. For the second-order time discretization, a modified version of the ALE flux that involves the two previous time levels has to be employed to ensure numerical geometric conservation (Venkatakrishnan & Mavriplis 1996; Sørensen et al. 2003).
When an implicit scheme is used to compute unsteady flows, one has to drive the unsteady residual to zero, or at least to truncation error, at each time-step. In the context of factored-implicit schemes, this is usually done by employing inner iterations (Brennis & Eberle 1990; Rogers et al. 1992; Pulliam 1993). It is the role of these inner iterations to eliminate errors, if any, due to factorization and linearization, and sometimes also errors arising from employing a lower order approximation on the implicit side. The number of inner iterations required may be large depending on the flow situation, the size of the time-step employed and the extent of mismatch of the explicit and implicit operators. Jameson (1991) has advocated the use of a full approximation storage (FAS) multigrid procedure as a driver for a fully implicit scheme when using structured grids. The significant advantage of the approach, when multigrid is used to solve the nonlinear problem, is that it incurs none of the storage overheads that plague traditional implicit schemes based on linearization. The method is, therefore, particularly attractive for unstructured grid computations in three dimensions and allows the time-step to be determined solely by the flow physics. Vassberg (1992) has applied this technique to compute flow solutions over oscillating airfoils using unstructured grids, where a sequence of triangulations was generated by removing points from the fine grid triangulation. Multigrid techniques have been successfully extended to deal with unstructured grids by generating a sequence of non-nested grids and using piecewise linear transfer operators (Mavriplis 1992; Peraire et al. 1992). An important development in the use of multigrid techniques for unstructured grids is the agglomeration multigrid algorithm (Smith 1990; Steve et al. 1992; Mavriplis & Venkatakrishnan 1994; Sørensen 2002) for the two- and three-dimensional Euler equations.
This technique has since been extended to deal with viscous flows (Mavriplis & Venkatakrishnan 1996; Sørensen et al. 2003). The main advantage of the agglomeration multigrid algorithm is that it only requires the fine mesh to be generated; the coarse meshes are automatically generated by fusing fine mesh control volumes using an efficient graph-based algorithm, resulting in a fully nested sequence of coarse mesh levels. The method may be based on a FAS multigrid procedure (Brandt 1977) to produce the solution in an efficient manner. This consists of solving the equation system using a cascade of successively coarser meshes. The fine mesh equation is first relaxed by a three-stage Runge–Kutta explicit procedure, yielding an approximation of the solution on the fine mesh level. On the next coarser level, the equation is formed with the source term defined using a restriction operator, which can be based on the volume-weighted average, for mapping from the fine to the coarse mesh level. The new equation is now relaxed, either by a Runge–Kutta time-step or by a multigrid step involving another coarse mesh level. In this way, the equation is solved using several coarse mesh levels, with each mesh level responsible for eliminating the errors that appear locally as high frequencies. The result is a coarse mesh approximation to the initial equation. The correction from the coarse mesh level is applied to the fine mesh by the use of an appropriate prolongation operator, such as the simple point injection scheme. This procedure is iterated until convergence.
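The FAS cycle described above can be illustrated on a simple model problem. The sketch below uses damped Jacobi sweeps in place of the Runge–Kutta relaxation, injection for restriction and linear interpolation for prolongation; all names and the one-dimensional model operator are illustrative stand-ins, not the flow solver itself.

```python
import numpy as np

def apply_A(u, h):
    """Model nonlinear-solver operator: second difference for -u'' on spacing h."""
    r = np.zeros_like(u)
    r[1:-1] = (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def relax(u, f, h, n_sweeps=3, omega=2.0 / 3.0):
    """Damped Jacobi smoothing (stands in for the Runge-Kutta relaxation)."""
    for _ in range(n_sweeps):
        jac = 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u[1:-1] = (1.0 - omega) * u[1:-1] + omega * jac
    return u

def restrict(v):
    return v[::2].copy()                  # injection, fine -> coarse

def prolong(v):
    w = np.zeros(2 * len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5 * (v[:-1] + v[1:])      # linear interpolation, coarse -> fine
    return w

def fas_two_grid(u, f, h, cycles=10):
    """Repeated two-level FAS cycles for A(u) = f."""
    for _ in range(cycles):
        u = relax(u, f, h)                # pre-smoothing on the fine level
        u_c = restrict(u)
        # FAS coarse source: coarse operator applied to the restricted solution
        # plus the restricted fine-grid residual.
        f_c = apply_A(u_c, 2 * h) + restrict(f - apply_A(u, h))
        v_c = relax(u_c.copy(), f_c, 2 * h, n_sweeps=40)  # approximate coarse solve
        u += prolong(v_c - u_c)           # coarse-grid correction
        u = relax(u, f, h)                # post-smoothing
    return u
```

At convergence the restricted residual vanishes, so the coarse solution equals the restricted fine solution and the correction is zero: the fixed point of the cycle is the fine-grid solution itself, which is the defining property of FAS.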
Other implicit schemes have also been successfully implemented (Tezduyar et al. 1992; Koobus & Farhat 1999; Hassan et al. 2000). A technique based on a matrix-free method was developed by Löhner (Luo et al. 1994) to solve the unsteady three-dimensional compressible Euler and Navier–Stokes equations on unstructured meshes. An approximate system of linear equations arising from the Newton linearization is solved by the GMRES (generalized minimum residual) algorithm with an LU-SGS (lower-upper symmetric Gauss–Seidel) preconditioner. Overall speed-up factors ranging from eight to more than an order of magnitude, in comparison with the explicit method, were obtained for various test cases. The GMRES technique is also employed to solve the equation systems that arise from the application of the deformable spatial domain/stabilized space time (DSD/SST) procedure (Tezduyar et al. 1992).
4. Feature-based adaptivity
In compressible flow problems, small regions of rapid change in the solution are frequently embedded in large regions of smooth solution. To simulate the interaction of these discontinuities correctly, in particular for unsteady flow problems where the phenomena evolve through the whole computational domain, an appropriately fine mesh is required. The use of adaptive refinement techniques becomes imperative to avoid the need for an overall fine mesh and, hence, reduce the cost of these simulations. The success of this process will depend on the ability to define a suitable error indicator, which must be capable of determining an improved distribution of the mesh parameters for use by the mesh generator.
(a) Error indicator
Error indicators based upon the use of interpolation theory can provide an indication of the accuracy of the computed solution. A single key variable, or a combination of variables, can be subjected to the error indication process. As well as detecting discontinuities in the solution, the procedure should also provide information about any directionality which may be present. The advantages of using directional error indicators become apparent when the nature of the solutions to be computed, involving flows with shocks and contact discontinuities, is considered. Such features can be most economically represented on meshes that are stretched in appropriate directions. The basic assumption is that the computed solution is exact at the nodes and that the error over each cell is given by the deviation between the computed solution and a higher-order interpolation through the nodal values. A multidimensional extension of a one-dimensional technique, based upon the use of the root mean square value of the indicated error over each cell, leads to the Hessian metric (Peraire et al. 1987, 1990; Borouchaki et al. 1996). This requires the computation of the second derivative of the computed solution on the mesh and this may be effectively accomplished by a variational recovery process (Zienkiewicz & Morgan 1983).
The idea is to use the computed solution to provide information on the required spatial distribution of the mesh parameters (Morgan et al. 1986; Peraire et al. 1987). This information will be used by the mesh generator to generate a new mesh, which is adapted in those areas where the values of the optimal mesh parameters differ from the values of the current mesh parameters by a user-prescribed amount. The optimal values for the mesh parameters are calculated at each node of the current mesh. The directions are taken to be the principal directions of the Hessian matrix and the corresponding mesh spacings are computed from the eigenvalues of this matrix, by imposing the requirement of equidistribution of the indicated error. In the practical implementation, two threshold values for the computed spatial distribution of spacing are used: a minimum and a maximum allowed value. A maximum value is required to account for the possibility of a vanishing eigenvalue in regions where the flow is almost uniform. Maximum values of the second derivatives occur near discontinuities in the flow, where the error indicator will demand that smaller elements are required. A minimum value for the mesh size is required to avoid an excessive concentration of elements near discontinuities and is set to ensure that discontinuities are represented to a desired accuracy. Practical experience with this type of error indicator for the solution of transient flows shows that it works well for problems involving a single strong feature, as the mesh spacing is scaled according to the maximum value of the second derivative. However, weaker features in the flow may not be adequately resolved. To overcome this problem, an alternative indicator that combines first and second differences of the key variable in a manner suggested by the pressure switch can be employed (Löhner 1989; Morgan et al. 1991a,b). 
This indicator is dimensionless and has the property that discontinuities are given equal weighting, regardless of their strength.
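The construction of directional spacings from the Hessian, with the minimum and maximum thresholds discussed above, might be sketched as follows (illustrative names; the equidistribution target `err_target` is an assumed user parameter):

```python
import numpy as np

def hessian_spacing(H, err_target, h_min, h_max):
    """Directional spacings from a nodal Hessian: along each principal
    direction (eigenvector of H) the spacing h_k = sqrt(err_target / |lambda_k|)
    equidistributes the indicated interpolation error. The result is clipped
    to the minimum and maximum allowed values: the maximum guards against a
    vanishing eigenvalue in nearly uniform flow, the minimum against excessive
    concentration of elements near discontinuities."""
    eigvals, eigvecs = np.linalg.eigh(np.asarray(H, dtype=float))
    with np.errstate(divide="ignore"):
        h = np.sqrt(err_target / np.abs(eigvals))  # inf where an eigenvalue vanishes
    return np.clip(h, h_min, h_max), eigvecs
```

The returned eigenvectors supply the stretching directions, and the ratio of the clipped spacings gives the stretching magnitude passed to the mesh generator.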
The computed error, estimated from the current solution, is transformed into a spatial distribution of ‘optimal’ mesh spacings, which are interpolated using the current mesh. The current mesh is then modified with the objective of meeting the optimal distribution of mesh characteristics as closely as possible. Several alternative procedures exist for modifying an existing mesh in such a way that the new requirement is more closely satisfied (Löhner 1987; Löhner & Baum 1990). One approach is to use mesh enrichment, in which the nodes of the current mesh are kept but some new nodes/cells are created. Adaptive remeshing, in which the adaptation is accomplished by locally regenerating a new mesh that satisfies the new requirements, may also be used.
(b) Mesh enrichment
This technique works well for strongly unsteady flows that do not involve moving geometries (Löhner 1987). It does not lead to any loss of information, maintains conservation exactly and is fast. To adapt a mesh using mesh enrichment, a sweep over all the sides in the mesh is made and the optimal spacing in the direction of each side is computed. The enrichment procedure consists of introducing an additional node for each side for which the calculated spacing is less than the length of the side. For interior sides, this additional node is placed at the midpoint of the side; whereas for boundary sides, it is necessary to refer to the boundary definition to ensure that the new node is placed on the true boundary. When any side is subdivided in this manner, the cells associated with that side will also need to be subdivided in order to preserve the consistency of the final mesh. When the mesh enrichment procedure has been completed, the values of the unknowns at the new nodes are linearly interpolated from the original mesh and the solution algorithm is restarted. This procedure has been successfully implemented in both two and three dimensions and several impressive demonstrations of the power of the technique have been made (Löhner & Baum 1990). The speed of the algorithm is achieved by keeping the shape of the original cells intact. This precludes both a directional refinement and the ability to move objects or interfaces in the domain of interest. Directional refinement is of importance for problems that exhibit features that only need refinement in one direction and considerable savings are possible if the mesh can be adapted in this manner. An initial attempt to refine directionally while maintaining the position of the original points proved too complicated for practical application (Löhner & Morgan 1986).
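A sketch of the enrichment sweep over the mesh sides, for interior sides only (illustrative data layout; boundary sides would additionally require the true boundary definition, and the associated cell subdivision is omitted):

```python
import numpy as np

def enrich_edges(nodes, edges, spacing, u):
    """Mesh-enrichment sweep: subdivide every side whose computed optimal
    spacing is smaller than its length.

    nodes   : (n, dim) coordinates;  edges : list of (I, J) node pairs
    spacing : optimal spacing per node; the side value is the nodal average
    u       : (n,) solution values, linearly interpolated at new nodes
    Returns augmented coordinate and solution arrays and the subdivided sides."""
    new_pts, new_u, refined = [], [], []
    for I, J in edges:
        length = np.linalg.norm(nodes[J] - nodes[I])
        if 0.5 * (spacing[I] + spacing[J]) < length:
            new_pts.append(0.5 * (nodes[I] + nodes[J]))  # midpoint node
            new_u.append(0.5 * (u[I] + u[J]))            # linear interpolation
            refined.append((I, J))
    if new_pts:
        nodes = np.vstack([nodes, new_pts])
        u = np.concatenate([u, new_u])
    return nodes, u, refined
```

In the full procedure, the cells attached to each subdivided side are then split to keep the mesh consistent before the solution algorithm is restarted.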
(c) Adaptive remeshing
To overcome the limitations of the mesh enrichment method, adaptive remeshing techniques may be used (Löhner 1990; Morgan et al. 1991a,b; Frey & Alauzet 2003). This process starts from the computed solution on the current unstructured mesh: the nodes that are to be removed are identified by the error indication process, and all elements connected to these nodes are marked. The marked elements are removed from the mesh and stored to act as a background grid for interpolation purposes. To ensure that each hole in the mesh is bordered by a collection of sides in two dimensions and faces in three dimensions, a new mesh is generated, if necessary, along sections of the computational boundaries according to the required distribution of the mesh parameters. The collection of sides in two dimensions and faces in three dimensions surrounding each hole is now regarded as an initial generation front for the triangulation algorithm. The volume mesh generator fills the holes by constructing new elements according to the required distribution of mesh parameters provided by the error indicating process. The triangulation is performed at a faster rate if each hole is considered separately. When a new point is generated, a searching process is undertaken to identify the cell in the original mesh that contains the new point. The mesh parameters are then linearly interpolated from the nodal values for that cell. An alternative digital tree (ADT) (Bonet & Peraire 1990) could be used to aid this searching process. However, as each hole contains relatively few elements, a simpler process has proved to be more efficient in identifying the relevant cell (Hassan et al. 2000). This involves forming an array that contains three cells of the original mesh that have a side in common with the cell under consideration in two dimensions and four cells that have a face in common with the cell in three dimensions.
Then, given a point P in the domain and a starting cell, calculate the area coordinates of P with respect to the cell. If all three area coordinates have a value between zero and one, then the cell contains the point P. If not, find the node for which the area coordinate is a minimum and use it to determine the next cell to be checked.
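The walking search described above can be sketched in two dimensions as follows (illustrative data layout: `neighbours[c][k]` holds the cell across the side opposite local node k, or -1 on the boundary):

```python
import numpy as np

def area_coords(p, tri_pts):
    """Area (barycentric) coordinates of point p in the triangle tri_pts (3x2)."""
    a, b, c = tri_pts
    T = np.column_stack([b - a, c - a])
    l2, l3 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l2 - l3, l2, l3])

def walk_to_cell(p, nodes, tris, neighbours, start=0):
    """Walking search: from `start`, repeatedly step across the side opposite
    the node with the most negative area coordinate until the containing
    cell is found (or the walk leaves the mesh)."""
    cell = start
    while True:
        lam = area_coords(p, nodes[tris[cell]])
        if np.all(lam >= -1e-12):
            return cell                 # p lies inside (or on) this cell
        k = int(np.argmin(lam))         # node with the minimum area coordinate
        nxt = neighbours[cell][k]
        if nxt < 0:
            return -1                   # walked off the mesh boundary
        cell = nxt
```

Because each hole contains relatively few elements, this local walk typically terminates in a handful of steps, which is why it outperforms a global tree search in this setting.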
When the grid is complete, a mesh enhancement procedure, consisting of a combination of nodal smoothing and edge swapping, can be performed to improve the quality of the newly generated cells (Hassan & Probert 1999). In addition, the interpolation of the values of the unknowns onto the newly generated grid occurs at the same time as the interpolation of the mesh parameters. Although higher-order interpolation can be employed to guarantee conservation, linear interpolation is commonly used. It has been shown that, when local remeshing is used, the conservation properties are not significantly affected by the use of linear interpolation (Morgan et al. 1991a,b).
5. Adaptivity for moving boundary problems
The ability to move objects or interfaces in the domain of interest opens up the possibility of simulating a wide range of problems, including general fluid–structure or fluid–fluid interaction problems. For transient flow simulations that involve moving boundary components, an initial unstructured mesh, with a spacing distribution that ensures adequate resolution of the initial solution, is generated. The solution is advanced in time and, at each time-step, the coordinates of the points on the moving boundaries are updated according to the prescribed movement of the geometry. Without a corresponding mesh adaptivity strategy, the mesh becomes successively distorted and, eventually, invalid. However, simulations of such flows can be computationally expensive and the amount of mesh adaptation that is performed should be minimized.
(a) Mesh movement
To perform unsteady flow simulations involving moving geometries, either a body-conforming mesh has to be regenerated globally at each time-step or the existing grid must be allowed to deform. The first option is expensive, especially in three dimensions. The obvious method of accounting for boundary movement, while retaining the mesh connectivity, is to use mesh movement (Degand & Farhat 2002). For external flows, this is usually achieved by fixing the mesh at the far-field boundary, while moving the mesh nodes on the geometry. The interior mesh nodes are then moved accordingly to achieve the desired mesh quality. A number of different mesh movement approaches have been investigated in the literature (Batina 1990; Tezduyar et al. 1992; Lesoinne & Farhat 1996; Venkatakrishnan & Mavriplis 1996; Crumpton et al. 1997; Farhat et al. 1998; Hassan et al. 2000), with the most popular approach assuming that the edges of the mesh behave like springs connecting the nodes (Batina 1990; Venkatakrishnan & Mavriplis 1996; Farhat et al. 1998; Hassan et al. 2000; Degand & Farhat 2002). Nodes on the moving geometry are moved, either linearly or in a nonlinear manner, by decomposing the boundary motion into small increments. Interior nodes are moved to ensure internal equilibrium at each increment and this is accomplished by solving, for each interior node I, the system

∑_J k_IJ (d_IJ − d⁰_IJ) e_IJ = 0,    (5.1)

where d⁰_IJ and d_IJ denote the lengths of the edge between nodes I and J before and after the movement, respectively, and e_IJ is the unit vector along the edge. The spring coefficient k_IJ is taken to be the inverse of the edge length, or it can be related to the minimum volume of the adjacent cells. Mesh moving techniques governed by the equations of linear elasticity have also been successfully implemented (Johnson & Tezduyar 1999). In an approach proposed by Qin (Liu et al. 2006), the Delaunay graph of the boundary points is constructed and stored. The cell of the graph that contains each interior point in the domain is identified and the corresponding area shape functions are stored.
At the end of each time-step, the coordinates of the boundary points are updated in the Delaunay graph and the new locations of the interior points are computed using the stored area shape functions and the new coordinates of the cells of the Delaunay graph that contain the points. This is a very efficient approach, since all the searching is performed in a preprocessing step. However, the method depends upon the validity of the Delaunay graph at each time-step and, for many problems involving the rotation of some of the boundary components, the graph can easily become invalid.
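The spring analogy can be sketched in two dimensions as a simple Jacobi relaxation of nodal displacements, with edge stiffness taken as the inverse of the edge length and the boundary displacements prescribed. This is an illustrative sketch of the general technique under those assumptions, not the solver of the cited works; all names and the data layout are hypothetical.

```python
import math

def move_mesh(coords, edges, disp_bc, sweeps=200):
    """coords: {node: (x, y)}; edges: [(I, J)]; disp_bc: prescribed
    displacements for boundary/moving nodes.  Returns new coordinates."""
    # edge stiffnesses from the undeformed mesh: k_IJ = 1 / edge length
    k, adj = {}, {n: [] for n in coords}
    for i, j in edges:
        k[(i, j)] = k[(j, i)] = 1.0 / math.dist(coords[i], coords[j])
        adj[i].append(j)
        adj[j].append(i)
    disp = {n: disp_bc.get(n, (0.0, 0.0)) for n in coords}
    for _ in range(sweeps):
        new = {}
        for n in coords:
            if n in disp_bc:              # prescribed motion: keep fixed
                new[n] = disp_bc[n]
                continue
            ksum = sum(k[(n, m)] for m in adj[n])
            # equilibrium of k_IJ (delta_J - delta_I) = 0: stiffness-
            # weighted average of the neighbouring displacements
            new[n] = tuple(sum(k[(n, m)] * disp[m][c] for m in adj[n]) / ksum
                           for c in (0, 1))
        disp = new
    return {n: (coords[n][0] + disp[n][0], coords[n][1] + disp[n][1])
            for n in coords}
```

Stiff short edges thus resist compression, which is what keeps fine near-body cells valid as the boundary moves.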
(b) Remeshing for moving boundaries
The real challenge for any mesh point movement strategy arises when there is relative motion between bodies in close proximity and when fine grids are employed. More sophisticated procedures for controlling the quality of the mesh during movement can be devised (Miller & Miller 1981), but mesh movement techniques often result in excessive skewing of grid lines and, eventually, the grid lines may cross, resulting in invalid triangulations. In this case, the use of some kind of reconnection procedure is unavoidable. Reconnecting the edges according to the Delaunay criterion, coupled with a pre-smoothing procedure, can be used to improve the quality of the distorted meshes (Venkatakrishnan & Mavriplis 1996). In two dimensions, this process is efficient, since it reduces to a local edge swapping procedure. A simplified strategy (Baum et al. 1994) involves the regeneration of a coarse mesh, either locally or globally, whenever the body movement becomes too severe. This is followed by the use of an adaptive enrichment technique to create a finer mesh. An alternative is to remesh the entire computational domain at each time-step (Tezduyar et al. 1992). However, this is not only computationally expensive but may also result in reduced accuracy, due to the effects of interpolation errors. In addition, certain desirable features, such as geometric conservation, are difficult to ensure if the mesh structure changes. For certain simulations, it is impossible to avoid remeshing if the necessary mesh quality is to be maintained (Baum et al. 1994; Hassan et al. 2000). However, the regions for which mesh movement is inappropriate are usually relatively small and this is exploited by applying local remeshing only. The regions that are to be remeshed are determined by a mesh quality indicator, which may be taken to be the ratio of the minimum dihedral angle of an element at the current time level to the minimum dihedral angle of that element in the original mesh.
Based on this indicator, elements are removed, creating one or more holes in the mesh. Each of these holes is meshed using an appropriate volume mesh generation scheme, with the triangulation of the hole surface providing the initial surface triangulation (Hassan et al. 2000; Tremel 2005) for each hole. In certain circumstances, it may not be possible to recover the surface triangulation of the hole; in this case, another layer of elements is removed from the original mesh and the process is repeated. The values of the unknowns are transferred from the previous mesh level by linear interpolation.
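A possible form of such a quality indicator is sketched below. The threshold value, function names and data layout are assumptions for illustration, not taken from the cited works.

```python
import math, itertools

def _normal(p, q, r):
    """Normal of the triangle (p, q, r) via the cross product."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def min_dihedral(tet):
    """Minimum dihedral angle (radians) of a tetrahedron given as 4 points."""
    angles = []
    for e in itertools.combinations(range(4), 2):     # the six edges
        opp = [i for i in range(4) if i not in e]     # the two other nodes
        n1 = _normal(tet[e[0]], tet[e[1]], tet[opp[0]])
        n2 = _normal(tet[e[0]], tet[e[1]], tet[opp[1]])
        dot = sum(a * b for a, b in zip(n1, n2))
        norm = math.sqrt(sum(a * a for a in n1) * sum(a * a for a in n2))
        angles.append(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(angles)

def cells_to_remove(orig_cells, moved_cells, threshold=0.3):
    """Mark cells whose minimum dihedral angle has dropped below
    `threshold` times its value in the original mesh."""
    return [i for i, (c0, c1) in enumerate(zip(orig_cells, moved_cells))
            if min_dihedral(c1) < threshold * min_dihedral(c0)]
```

Marked cells, together with their neighbours as needed, then define the holes passed to the local remesher.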
It has to be noted that, depending on the required frequency of remeshing, the adaptive remeshing technique can lead to poor numerical results. In some cases, essential physical features of the flow, such as contact discontinuities or weak secondary shocks, are simply lost. This poor performance is due to the inherent numerical diffusion that occurs when interpolating data from one grid to another. To overcome this limitation, a transient fixed-point mesh adaptation algorithm has been proposed (Alauzet et al. 2003), which introduces a new loop into the main solution algorithm. Starting with an initial solution at time t, a metric associated with the computed solution is defined and used to generate a new adapted mesh. Instead of continuing the computation, as in the classical case, the computation is restarted with the same initial solution at time t, but on the new mesh. The iterative process is repeated until it converges towards a fixed point for t+δt. The Hessian metric in this case has to be modified to take into account the time dependency of the solution, with the new metric allowing refinement of the whole region through which a feature travels during the time interval, and not just the region in which it ends.
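The structure of this fixed-point loop can be illustrated with a deliberately simplified one-dimensional toy. Here the "solver" advects a step profile analytically, and the sensor and refinement rule are crude stand-ins for the Hessian metric and the metric-based remesher of Alauzet et al. (2003); all names are hypothetical.

```python
def solve_step(mesh, t, dt, speed=1.0):
    """Toy 'solver': a step profile advected at constant speed."""
    front = speed * (t + dt)
    return [1.0 if x < front else 0.0 for x in mesh]

def adapt(mesh, u_old, u_new, h_fine=0.025, h_coarse=0.1):
    """Refine every interval in which either solution jumps, so the whole
    region swept by the feature during [t, t+dt] is refined."""
    new_mesh = [mesh[0]]
    for a, b, ua, ub, va, vb in zip(mesh, mesh[1:], u_old, u_old[1:],
                                    u_new, u_new[1:]):
        h = h_fine if (ua != ub or va != vb) else h_coarse
        x = a
        while x + h < b - 1e-12:
            x += h
            new_mesh.append(x)
        new_mesh.append(b)
    return new_mesh

def fixed_point_step(t, dt, max_outer=10):
    """Recompute the step t -> t+dt from the same initial data on
    successively re-adapted meshes until the mesh no longer changes."""
    mesh = [i / 10 for i in range(11)]          # initial uniform mesh
    u_new = solve_step(mesh, t, dt)
    for _ in range(max_outer):
        u_t = solve_step(mesh, t, 0.0)          # restart from data at t
        u_new = solve_step(mesh, t, dt)         # recompute the step
        new_mesh = adapt(mesh, u_t, u_new)
        if new_mesh == mesh:                    # fixed point reached
            break
        mesh = new_mesh
    return mesh, u_new
```

The essential point is that the adapted mesh resolves the discontinuity both where it starts and where it ends, so the recomputed step loses far less information to interpolation.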
With an unstructured mesh flow solver, and an automatic triangulation capability, it is possible to contemplate the solution of transient flows using these adaptive grid methods. The combined procedures were applied to simulate the blast wave produced by an exploding vessel (Morgan et al. 1991a,b). The problem involves the simulation of the flow field around a confined vessel whose lid is initially at rest, which then moves upwards under the influence of the initial high-pressure region inside the vessel. Various pressure transducers are located on the wall of the chamber. Initially, the air inside the vessel is at a pressure of 40 atm, while the air in the chamber is at atmospheric pressure. The temperature of the air in both regions is 20°C. The variation of lid velocity was prescribed according to experimental observations. An initial mesh that adequately represents the computational domain was generated. The solution was advanced by a prescribed number of time-steps. At each time-step, the coordinates of the points on the moving boundaries were updated and the error indicator was used to examine the solution and compute a new distribution for the mesh parameters. The elements whose size and shape did not meet the requirements of the new distribution were identified and the mesh was adapted according to the new distribution of mesh parameters. The flow variables at the new nodes were interpolated from the old grid. A selection of the sequence of meshes and Mach number contour distributions produced during the computation is presented in figure 1. The computed pressures are compared with the available experimental data (Baum 1991) in figure 2. It can be seen that good agreement is achieved using the adaptation procedure outlined above.
6. Adapting viscous meshes on moving boundary components
The above techniques require special modification before they can be used for the solution of the compressible Navier–Stokes equations in the presence of moving boundary components, in order to handle the highly stretched hybrid elements used to resolve boundary layers. The use of the spring analogy for the mesh movement allows good control over the mesh, by varying the spring constants of the edges. This is required when the algorithm is applied to meshes that contain boundary layer regions, where mesh stretching of up to 1000 may be present (Löhner et al. 1999; Sørensen 2002; Stein et al. 2004). In this case, the moving mesh algorithm can easily produce cells with negative volume. Regenerating such cells offers little benefit, since the regeneration process will result in cells that are, in general, identical to those that would have been obtained if the original cells had been moved rigidly. To avoid the complication of regenerating the boundary layer cells, the spring coefficient for the edges located in the boundary layers is therefore increased, to force a rigid movement of the cells in that region and to ensure the validity of the moved cells; this increase is applied uniformly to all cells in each layer. The application of the mesh movement algorithm to hybrid viscous meshes can be accelerated by moving the nodes in the first 50% of the boundary layer in the same prescribed manner as the boundary points (Sørensen 2002). This normally results in a 50% saving in the CPU time required for the mesh movement procedure.
The application of the remeshing procedure also has to take into account the requirements of the boundary layers. Although the scheme adopted for mesh movement avoids the need to regenerate existing boundary layer cells, in the case where the geometry contains components that are moving towards each other, some boundary layer cells will need to be removed. In addition, if the original configuration contains small gaps, which restrict the initial number of layers generated in the boundary region, the remeshing procedure should allow further layers to be added if the gaps between the moving components increase (Hassan et al. 2007). To achieve this, at the initial mesh generation stage, the layer number of each point in the boundary layer is stored. During the remeshing of the holes, the boundary faces at the interface between the isotropic cells and the boundary layer cells are identified. If any of these faces intersects any of the boundary layer cells, the cell containing the face is added to the list of removed cells and the boundary face list is updated. This guarantees that no intersecting cells will remain in the case of a gap that narrows due to the movement of the boundary components. If the layer number of the nodes of a face at the interface is less than the desired number of layers, further layers are grown from the face and the boundary face list is updated. Using the final boundary face list, the remainder of the hole is remeshed using an isotropic Delaunay triangulation. For any added boundary layers, tetrahedral elements are merged to form hexahedral, prismatic and pyramidal elements.
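The layer bookkeeping can be sketched as follows. The names are hypothetical and the geometric growth of layer heights is an assumption for illustration; the cited work defines its own growth law.

```python
def faces_needing_growth(interface_faces, layer_no, desired):
    """Return (face, layers_to_add) for each interface face whose nodes
    carry a stored layer number lower than the desired number of layers."""
    out = []
    for face in interface_faces:
        have = min(layer_no[n] for n in face)
        if have < desired:
            out.append((face, desired - have))
    return out

def grow_layers(face_pts, normals, n_new, h0, ratio=1.2):
    """Grow n_new extra layers from a face: each node is offset along its
    normal with geometrically increasing heights (h0, h0*ratio, ...)."""
    layers, pts = [], list(face_pts)
    h = h0
    for _ in range(n_new):
        pts = [tuple(p[i] + h * nv[i] for i in range(3))
               for p, nv in zip(pts, normals)]
        layers.append(pts)
        h *= ratio
    return layers
```

The new layer nodes define prisms between successive offsets of each face; the remaining void is then handed to the isotropic Delaunay generator.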
This procedure has been used to simulate the transient separation of a generic shuttle vehicle and a rocket booster. An initial turbulent steady-state simulation is performed, at a free stream Mach number of 0.85 and zero degrees angle of attack, with a Reynolds number of 1.0×107. The mesh employed for this portion of the simulation contains 2.9 million cells. The two bodies then begin to separate according to a prescribed translation in the symmetry plane, together with a prescribed rotation of the shuttle about a point on its tail. The computation proceeds until the shuttle rotates through 15 degrees. This requires 20 local remeshing steps and 300 multigrid cycles per step. High-quality meshes with the appropriate number of layers to resolve the boundary layer region after the widening of the gap between the shuttle and the booster were obtained using the modified adaptation scheme. Views of the surface triangulation on the symmetry plane and the pressure contours on the surface of the shuttle and booster configuration at various stages of the simulation are shown in figure 3.
(a) Solver parallelization
A parallel implementation of the flow solver is achieved by using the single program multiple data concept, in conjunction with standard MPI routines for message passing. In the parallel implementation, at the start of each time-step, the interface nodes of the subdomain with the lower number obtain contributions from the interface edges. These partially updated nodal contributions are then broadcast to the corresponding interface nodes in the neighbouring higher numbered domains. A loop over the interior edges is followed by the receipt of the interface nodal contributions and the subsequent updating of all nodal values. The procedure is completed by sending the updated interface nodal values from the higher domain back to the corresponding interface nodes of the lower domain. The process is implemented in such a way that it allows computation and communication to take place concurrently. Two parallelization approaches can be used in conjunction with the multigrid procedure. The first approach, which only allows the merging of control volumes lying within the same domain during the agglomeration stage, can be termed local agglomeration. The second approach is a global procedure, in which the agglomeration is performed across the parallel domain boundaries. Global agglomeration ensures equivalence with the sequential approach for any number of domains, but increases the communication load in the inter-grid mapping (Sørensen 2002; Sørensen et al. 2003).
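The interface accumulation pattern can be illustrated, stripped of the actual MPI calls, by a single-process sketch. The data layout and names are assumptions; in the real solver the exchanges are non-blocking and overlap the interior-edge loop.

```python
def assemble_interface(domains, interface):
    """domains: list of {node: partial contribution accumulated from each
    domain's own edges}; interface: [(lo, hi, node)] for each node shared
    between a lower- and a higher-numbered domain.  Completes the nodal
    sums so that both copies of a shared node hold the assembled value."""
    # phase 1: the lower domain sends its partial interface contribution
    # to the higher domain (overlapped with the interior-edge loop)
    for lo, hi, n in interface:
        domains[hi][n] += domains[lo][n]
    # phase 2: the higher domain returns the completed value
    for lo, hi, n in interface:
        domains[lo][n] = domains[hi][n]
    return domains
```

Ordering the exchange by domain number ensures each shared node is summed exactly once, so the parallel result matches a sequential edge loop.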
(b) Parallel volume remeshing
In the case of a partitioned grid distributed across multiple domains, the regions selected to be remeshed will, in general, also be split over several domains. Unfortunately, this case cannot be handled by the sequential volume remeshing algorithms presented above. One option would be the application of a parallel volume mesh generation algorithm, which fills the holes with elements after the surface parts have been remeshed in parallel. The alternative is to repartition the mesh, with the constraint that only local holes are obtained in the new domains, and to process these local holes with the fast and robust sequential algorithms within each new domain (Tremel 2005). If only a single large region is to be remeshed, problems may arise concerning memory requirements and time consumption, because only one processor will be in charge of this task. In that case, distributed meshing is the preferable approach, as it avoids memory bottlenecks and reduces the total time required. The other extreme is the case of many small holes, for which the repartitioning approach will be the most efficient solution. Firstly, the amount of time required to repartition the grid to obtain purely local holes is small compared with the meshing time. Secondly, the meshing work is approximately balanced, due to the distribution of the holes across all processes. Finally, no communication or synchronization is required during the local remeshing of each hole in parallel. Between these two extremes, which of the two approaches performs better in terms of speed and robustness depends strongly on the application and on the implementation itself.
The final example presented here is chosen to demonstrate the capability of the fully parallel system for the solution of a realistic industrial problem. The simulation of the separation of a JDAM store from an FA-18C is considered (Tremel 2005; figure 4). This test case was selected as a benchmark in the Applied CFD Challenge II (Cenko et al. 2000). The free stream Mach number is 0.96, the angle of attack is 0.46 degrees and the dive angle is 43 degrees. The initial grid consists of 2.1 million points and 12.1 million tetrahedra. The free motion of the JDAM is computed from the pressure field, treating the store as a rigid body with six degrees of freedom and using subiteration to converge to a fixed position at every physical time-step. The movement of the JDAM is incorporated into the unstructured mesh by a combination of the parallel mesh deformation technique, followed by parallel local remeshing. The simulation is performed on 24 processors of a Linux distributed-memory cluster, with Intel XEON processors with a clock frequency of 2.67 GHz and 2 GB RAM per node, and 40 physical time-steps are performed, with four subiterations each. An overview of the computed trajectory is shown in figure 4. The time-accurate simulation requires 10.4 h. Figure 5 shows good agreement between the computed trajectory and the flight test measurements. The deviation of the store displacements may be caused by the neglect of viscous effects, which are necessary to capture flow separation correctly.
A review of the use of unstructured mesh techniques for the solution of the Navier–Stokes equations has been presented. Enhancements to explicit schemes, based on domain decomposition, were described. This type of algorithm has been shown to improve efficiency when explicit time-stepping is required to maintain time accuracy, in particular for high-frequency phenomena. When dealing with many low-frequency phenomena, such as flutter, first- and second-order implicit schemes are more suitable and more efficient. For such schemes, the geometric conservation laws are required to ensure that the appropriate accuracy is achieved with moving control volumes. For the efficient capturing of moving flow features, mesh enrichment and adaptive remeshing have been shown to be viable options. A combination of mesh movement techniques and a local adaptive remeshing procedure is widely used in the case of moving boundary components. Parallelization of the solver and the adaptation algorithms has enabled the procedure to be used by the aerospace industry to simulate unsteady inviscid flow around complex industrial geometries. Recent efforts to extend the above algorithms to handle hybrid meshes, suitable for resolving viscous boundary layers, were highlighted. Further work is required to assess the usability of such techniques in industrial environments. Feature-based adaptation using anisotropic meshes, coupled with a posteriori error analysis in three dimensions, is another very demanding topic that requires a major research effort.
The authors would like to thank Dr Jane Probert, Dr Kaare Sørensen and Dr Udo Tremel for their valuable contributions to the work presented in this paper.
One contribution of 9 to a Theme Issue ‘Computational fluid dynamics in aerospace engineering’.
- © 2007 The Royal Society