## Abstract

We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.

## 1. Introduction

We give an overview of a series of recent studies related to some *random* multi-scale problems motivated by practical questions in mechanics. For the sake of simplicity, we argue on a linear elliptic scalar equation in divergence form
$$-\operatorname{div}\big(A(x/\varepsilon)\,\nabla u^{\varepsilon}\big)=f \quad\text{in } \mathcal{D}, \qquad u^{\varepsilon}=0 \ \text{on } \partial\mathcal{D},\tag{1.1}$$
although the scope of the techniques we describe goes beyond this simple setting. The matrix coefficient *A* is assumed random stationary. The purpose is to compute the homogenized matrix coefficient *A*^{⋆} present in the homogenized equation
$$-\operatorname{div}\big(A^{\star}\,\nabla u^{\star}\big)=f \quad\text{in } \mathcal{D}, \qquad u^{\star}=0 \ \text{on } \partial\mathcal{D},\tag{1.2}$$
which captures the average behaviour of the solution *u*^{ε} to (1.1).

We begin by recalling in §2 the basics of homogenization theory, both in the deterministic (periodic) context and in the random context, which are useful for our exposition. Next, we successively present three different variance reduction techniques for the problem considered. We begin in §3 with the classical, general purpose technique of *antithetic variables*. The efficiency of that technique is substantial, but is also limited in particular because the technique does not fully exploit the specifics of the problem considered. We present in §4 the technique of *control variates*, which requires a better knowledge of the problem at hand. A problem that is simpler to simulate and close to the original problem, in a sense that is made precise below, has to be considered and concurrently solved. The technique uses that knowledge to, effectively, obtain a much better reduction of the variance. In §5, we expose a slightly different approach, imported from solid-state physics, namely that of *special quasi-random structures*. It consists of selecting (somewhat in the spirit of another well-known technique, *stratified sampling*) some configurations of the random environment that are more suitable than generic configurations to compute the empirical averages, so as to again minimize the variance. Finally, §6 presents some further research directions.

Before we proceed, note that we will assume throughout our text that the reader is reasonably familiar with homogenization theory. We refer to the classical textbooks [1,2] for this topic. We also mention [3–5] for general presentations and overviews of the issues examined here, along with some related issues.

## 2. Brief overview of classical homogenization settings

### (a) Periodic homogenization

To begin with, we recall some well-known, basic ingredients of elliptic homogenization theory in the periodic setting. We consider (1.1) in a regular, bounded domain 𝒟 ⊂ ℝ^{d}, and choose the matrix coefficient *A*=*A*_{per} to be symmetric and ℤ^{d}-periodic. Note that, throughout our text, we manipulate for simplicity *symmetric* matrices, but our discussion in §§3–5 carries over to non-symmetric matrices up to slight modifications.

The corrector problem associated with (1.1) in the periodic case reads, for *p* fixed in ℝ^{d},
$$-\operatorname{div}\big(A_{\mathrm{per}}(y)\,(p+\nabla w_{p}(y))\big)=0 \quad\text{in } \mathbb{R}^{d}, \qquad w_{p}\ \mathbb{Z}^{d}\text{-periodic}.\tag{2.1}$$
It has a unique solution up to the addition of a constant. Then, the homogenized coefficient in (1.2) reads
$$A^{\star}e_{i}=\int_{Q}A_{\mathrm{per}}(y)\,\big(e_{i}+\nabla w_{e_{i}}(y)\big)\,\mathrm{d}y,$$
where *Q*=(0,1)^{d} is the unit cube and (*e*_{i})_{1≤i≤d} is the canonical basis of ℝ^{d}. Equivalently, *A*^{⋆} satisfies
$$e_{j}^{T}A^{\star}e_{i}=\int_{Q}\big(e_{j}+\nabla w_{e_{j}}(y)\big)^{T}A_{\mathrm{per}}(y)\,\big(e_{i}+\nabla w_{e_{i}}(y)\big)\,\mathrm{d}y.$$
The main result of periodic homogenization theory is that, as *ε* goes to zero, the solution *u*^{ε} to (1.1) converges to the solution *u*^{⋆} to (1.2). The convergence holds strongly in *L*^{2}(𝒟) and weakly in *H*^{1}(𝒟). The correctors *w*_{e_i} may also be used to ‘correct’ *u*^{⋆} in order to identify the behaviour of *u*^{ε} in the strong topology of *H*^{1}(𝒟).

Practically, at the price of only solving *d* periodic problems (2.1), the solution to problem (1.1) can therefore be efficiently approximated for *ε* small.

### (b) Stochastic homogenization

Because this is well known, and for the sake of brevity, we skip all technicalities related to the definition of the probabilistic setting (we refer, for example, to [3] for all details). We assume that *A* is stationary in the sense
$$\forall k\in\mathbb{Z}^{d},\qquad A(x+k,\omega)=A(x,\tau_{k}\omega)\quad\text{a.e. in } x,\ \text{a.s.}\tag{2.2}$$
(where *τ* is an ergodic group action). We consider the boundary value problem (1.1) for *A*=*A*(⋅,*ω*). Standard results of stochastic homogenization [1,2] apply and allow us to find the homogenized problem. These results generalize the periodic results recalled in §2a. The solution *u*^{ε}(⋅,*ω*) to (1.1) converges to the solution to (1.2) where the homogenized matrix is now defined as
$$A^{\star}p=\mathbb{E}\left[\int_{Q}A(y,\cdot)\,\big(p+\nabla w_{p}(y,\cdot)\big)\,\mathrm{d}y\right],$$
where, for any *p* ∈ ℝ^{d}, *w*_{p} is the solution (unique up to the addition of a random constant) to
$$-\operatorname{div}\big(A(y,\omega)\,(p+\nabla w_{p}(y,\omega))\big)=0 \quad\text{in } \mathbb{R}^{d},\qquad \nabla w_{p}\ \text{stationary},\qquad \mathbb{E}\left[\int_{Q}\nabla w_{p}(y,\cdot)\,\mathrm{d}y\right]=0.\tag{2.3}$$
Note that *u*^{ε} is a random function, whereas its homogenized limit *u*^{⋆} is deterministic, because *A*^{⋆} is deterministic.

A striking difference between the stochastic setting and the periodic setting can be observed by comparing (2.1) and (2.3). In the periodic case, the corrector problem is posed on a bounded domain (namely, the periodic cell *Q*), because the corrector *w*_{p} is periodic. In sharp contrast, the corrector problem (2.3) of the random case is posed on the whole space ℝ^{d}, and cannot be reduced to a problem posed on a bounded domain. The fact that the random corrector problem is posed on the entire space has far-reaching consequences for numerical practice. Truncations of problem (2.3) have to be considered. The actual homogenized coefficients are only captured in the asymptotic regime.

More precisely, the deterministic matrix *A*^{⋆} is usually approximated by the random matrix $A^{\star}_{N}(\omega)$ defined by
$$A^{\star}_{N}(\omega)\,e_{i}=\frac{1}{|Q_{N}|}\int_{Q_{N}}A(y,\omega)\,\big(e_{i}+\nabla w^{N}_{e_{i}}(y,\omega)\big)\,\mathrm{d}y,\tag{2.4}$$
which is obtained by solving the corrector problem on a *truncated* domain, say the cube *Q*_{N}=(0,*N*)^{d},
$$-\operatorname{div}\big(A(y,\omega)\,(p+\nabla w^{N}_{p}(y,\omega))\big)=0 \quad\text{in } Q_{N},\qquad w^{N}_{p}\ Q_{N}\text{-periodic}.\tag{2.5}$$
Although *A*^{⋆} itself is a deterministic quantity, its practical approximation $A^{\star}_{N}(\omega)$ is random. It is only in the limit of infinitely large domains *Q*_{N} that the deterministic value is attained. As shown in [6], we have
$$\lim_{N\to\infty}A^{\star}_{N}(\omega)=A^{\star}\quad\text{a.s.}$$
As usual in the random context, the error may be expanded as
$$A^{\star}_{N}(\omega)-A^{\star}=\big(\mathbb{E}\big[A^{\star}_{N}\big]-A^{\star}\big)+\big(A^{\star}_{N}(\omega)-\mathbb{E}\big[A^{\star}_{N}\big]\big),\tag{2.6}$$
that is, the sum of a *systematic* error and of a *statistical* error (the first and second terms in the above right-hand side, respectively).

A standard technique to compute an approximation of $\mathbb{E}\big[A^{\star}_{N}\big]$ is to consider *M* independent and identically distributed realizations of the field *A*, solve for each of them the corrector problem (2.5) (thereby obtaining from (2.4) i.i.d. realizations $A^{\star,m}_{N}(\omega)$, 1≤*m*≤*M*) and compute the Monte Carlo approximation
$$\mathbb{E}\big[A^{\star}_{N}\big]\approx\frac{1}{M}\sum_{m=1}^{M}A^{\star,m}_{N}(\omega).$$
In view of the central limit theorem, we know that, asymptotically, each entry of $\mathbb{E}\big[A^{\star}_{N}\big]$ lies within the confidence interval
$$\left[\frac{1}{M}\sum_{m=1}^{M}\big(A^{\star,m}_{N}(\omega)\big)_{ij}\pm\frac{1.96}{\sqrt{M}}\sqrt{\mathbb{V}\mathrm{ar}\big[(A^{\star}_{N})_{ij}\big]}\,\right]$$
with a probability equal to 95%.
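To make the procedure concrete, the following minimal Python sketch runs this Monte Carlo loop and forms the 95% confidence interval. It assumes, purely for illustration, a one-dimensional piecewise-constant Bernoulli coefficient, for which the truncated corrector problem is solvable in closed form and the approximation of the homogenized coefficient on *Q*_{N} reduces to the harmonic mean of the coefficient; the values `alpha`, `beta`, `eta`, `N` and `M` are arbitrary choices, not taken from the studies reviewed here.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, eta = 3.0, 20.0, 0.5   # two-phase Bernoulli field (illustrative values)
N, M = 50, 1000                     # cells per realization, number of Monte Carlo draws

def a_star_N(rng):
    # 1D surrogate: with a piecewise-constant coefficient and periodic boundary
    # conditions, the truncated homogenized coefficient is the harmonic mean over Q_N
    a = np.where(rng.random(N) < eta, beta, alpha)
    return 1.0 / np.mean(1.0 / a)

samples = np.array([a_star_N(rng) for _ in range(M)])
mean = samples.mean()
# 95% confidence interval from the central limit theorem
half_width = 1.96 * samples.std(ddof=1) / np.sqrt(M)
ci = (mean - half_width, mean + half_width)
```

The same loop structure applies verbatim in higher dimensions, with `a_star_N` replaced by an actual corrector solve on *Q*_{N}.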

For the sake of simplicity, and because this is overwhelmingly the case in numerical practice, we have considered in (2.5) *periodic* boundary conditions. These will be the conditions we adopt throughout our study. Other boundary conditions, or approximations, may be employed. The specific choice of approximation technique is motivated by considerations about the reduction of the systematic error in (2.6). Several recent mathematical studies by Gloria & Otto [7] have clarified this issue. The variance reduction techniques we present in this paper can be applied to all types of boundary conditions.

## 3. Variance reduction using antithetic variables

We present here a first attempt [8–10] to reduce the variance in stochastic homogenization. The technique used for variance reduction is that of *antithetic variables*.

The variance reduction technique using antithetic variables consists of concurrently considering two sets of configurations for the random material instead of only one set. The two sets of configurations are deduced from each other. Indeed, fix *M* ∈ ℕ^{⋆}. Suppose that we give ourselves *M* i.i.d. copies *A*^{m}(*x*,*ω*), 1≤*m*≤*M*, of *A*(*x*,*ω*). Construct next *M* i.i.d. *antithetic* random fields
$$B^{m}(x,\omega)=T\big(A^{m}(x,\omega)\big),\qquad 1\leq m\leq M,$$
from the *A*^{m}. The map *T* transforms the random field *A*^{m} into another, so-called *antithetic*, field *B*^{m}. The transformation is performed in such a way that, for each *m*, *B*^{m} has the same law as *A*^{m}, namely the law of the matrix *A*. Somewhat vaguely stated, if *A* were obtained in a coin-tossing game (using a fair coin), *B*^{m} would be heads each time *A*^{m} is tails and vice versa. Then, for each 1≤*m*≤*M*, we solve two corrector problems. One is associated with the original field *A*^{m}, the other one is associated with the antithetic field *B*^{m}. Using its solution, we define the *antithetic homogenized matrix* $B^{m,\star}_{N}(\omega)$, the elements of which read, for any 1≤*i*,*j*≤*d*, in analogy with (2.4),
$$\big(B^{m,\star}_{N}(\omega)\big)_{ij}=\frac{1}{|Q_{N}|}\int_{Q_{N}}e_{i}^{T}\,B^{m}(y,\omega)\,\big(e_{j}+\nabla w^{B^{m},N}_{e_{j}}(y,\omega)\big)\,\mathrm{d}y.$$
We finally set, for any 1≤*m*≤*M*,
$$\widetilde{A}^{m,\star}_{N}(\omega)=\frac{1}{2}\Big(A^{m,\star}_{N}(\omega)+B^{m,\star}_{N}(\omega)\Big).$$
Because *A*^{m} and *B*^{m} are identically distributed, so are $A^{m,\star}_{N}$ and $B^{m,\star}_{N}$. Thus, $\widetilde{A}^{m,\star}_{N}$ is unbiased (i.e. $\mathbb{E}\big[\widetilde{A}^{m,\star}_{N}\big]=\mathbb{E}\big[A^{\star}_{N}\big]$). In addition, it satisfies
$$\lim_{N\to\infty}\widetilde{A}^{m,\star}_{N}(\omega)=A^{\star}\quad\text{a.s.},$$
because *A*^{m} and *B*^{m} are ergodic. The hope is that the new approximation, $\widetilde{A}^{m,\star}_{N}$, has less variance than the original one, $A^{\star}_{N}$. This is indeed the case under appropriate assumptions.
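The construction can be sketched in Python in the same illustrative one-dimensional Bernoulli surrogate as before, where the truncated homogenized coefficient is the harmonic mean of the coefficient over *Q*_{N}. Here the antithetic map *T* is realized by replacing the underlying uniform variables *u* by 1−*u*, and the equal-cost comparison (2*M* classical draws against *M* antithetic pairs) mimics the numerical experiment described in this section; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, eta = 3.0, 20.0, 0.5   # illustrative two-phase field
N, M = 50, 500

def a_star_from_uniforms(u):
    # coefficient field built from uniform variables, then 1D harmonic-mean surrogate
    a = np.where(u < eta, beta, alpha)
    return 1.0 / np.mean(1.0 / a)

# classical estimator: 2M independent draws
classical = np.array([a_star_from_uniforms(rng.random(N)) for _ in range(2 * M)])

# antithetic estimator: M pairs (u, 1-u), same total number of corrector solves
pairs = []
for _ in range(M):
    u = rng.random(N)
    pairs.append(0.5 * (a_star_from_uniforms(u) + a_star_from_uniforms(1.0 - u)))
antithetic = np.array(pairs)

var_classical = classical.var(ddof=1) / (2 * M)   # variance of the 2M-draw mean
var_antithetic = antithetic.var(ddof=1) / M       # variance of the M-pair mean
```

Since the harmonic mean is monotone in each underlying uniform variable, the two members of a pair are negatively correlated and the pairwise average has reduced variance, which is the mechanism exploited above.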

The approach has been studied theoretically in [8–10], in the one-dimensional setting and in some specific higher-dimensional cases. The approach is shown to qualitatively reduce the variance. A quantitative assessment of the reduction is however out of reach. Only numerical tests can provide some information in this direction.

The tests we performed in [8,10] concern various ‘input’ random fields *A*(⋅,*ω*), some i.i.d., some correlated, with various correlation lengths. In these settings, we have investigated variance reduction on a typical diagonal, or off-diagonal, entry of the approximate homogenized matrix $A^{\star}_{N}$, as well as on the eigenvalues of that matrix and the eigenvalues of the associated differential operator (supplied with homogeneous Dirichlet boundary conditions on ∂𝒟).

Let us give one such example. Consider, in dimension 2, the matrix *A*(*x*,*ω*) defined by
$$A(x,\omega)=\sum_{k\in\mathbb{Z}^{2}}\mathbf{1}_{Q+k}(x)\,a_{k}(\omega)\,\mathrm{Id}_{2},\tag{3.1}$$
where *Q*=(0,1)^{2} and (*a*_{k}(*ω*))_{k∈ℤ²} is an i.i.d. sequence of Bernoulli random variables taking the values *α*=3 and *β*=20. A realization of each of the fields *A*(*x*,*ω*) and *B*(*x*,*ω*) is shown in figure 1 (black, the value *α*; pink, the value *β*).

We then compare two computations with identical cost. For this purpose, we first use a classical Monte Carlo method with 2*M* draws (with here 2*M*=100). Second, we apply the antithetic variable technique using only *M* draws. Because we solve two corrector problems for each of the draws (one for *A*^{m} and one for *B*^{m}), the numerical cost is equal to the cost of the classical computation. The results are shown in figure 2, where we can see that the (numerically estimated) variance is reduced.

A more precise estimate of the efficiency of the approach is given in figure 3, in which we have plotted the variance ratio with respect to the size of the computational domain. We see that the gain is not very sensitive to this size, and is at least approximately 6 in this example. This means that, for a given computational cost, the approach improves the accuracy by a factor √6 ≈ 2.5. Equivalently, for a given accuracy, the computational cost is reduced by a factor of 6.

Our numerical results (see [8,10] for comprehensive details) show that the technique may be applied to a large variety of situations and has proved efficient whatever the output considered. In addition, we have shown in [11] that this technique carries over to nonlinear stochastic homogenization problems, when the problem at hand is formulated as a variational convex problem. In all the test cases we have considered, variance is systematically reduced. We observed however that the ratio of reduction is not spectacular. This has motivated the consideration of alternative techniques, expected to be (and indeed observed to be) more efficient than the antithetic variables technique.

## 4. Control variate technique

The *control variate* approach is a variance reduction technique known to be potentially much more efficient than the antithetic variable technique. It however requires better information beforehand on the random quantity of interest that is being simulated. In the context of homogenization, previous studies [12,13] present a first possible investigation of the efficiency of this technique.

The specific setting we consider for the control variate is a periodic material slightly perturbed by random defects modelled by Bernoulli variables. We briefly describe this setting, before turning to the variance reduction technique itself.

### (a) Our specific choice of control variate: a perturbation approach

One approach, described in full in [14–16], addressing the random material as a small perturbation of a periodic material consists of considering
$$A_{\eta}(x,\omega)=A_{\mathrm{per}}(x)+b_{\eta}(x,\omega)\,C_{\mathrm{per}}(x),\tag{4.1}$$
where, with evident notation, *A*_{per} is a ℤ^{d}-periodic matrix modelling the unperturbed material and *C*_{per} is a ℤ^{d}-periodic matrix modelling the perturbation. We take
$$b_{\eta}(x,\omega)=\sum_{k\in\mathbb{Z}^{d}}\mathbf{1}_{Q+k}(x)\,B^{\eta}_{k}(\omega),$$
where the $B^{\eta}_{k}$ are, say, independent identically distributed scalar random variables. One particularly interesting case (see [14–16] for other cases) is when the common law of the $B^{\eta}_{k}$ is assumed to be a Bernoulli law of (presumably small) parameter *η*,
$$\mathbb{P}\big(B^{\eta}_{k}=1\big)=\eta,\qquad \mathbb{P}\big(B^{\eta}_{k}=0\big)=1-\eta.$$
A formal approach introduced in the above work (which has subsequently been studied and proved correct in [17,18]) to efficiently perform homogenization in that context starts with observing that, in the corrector problem,
$$-\operatorname{div}\big(A_{\eta}(y,\omega)\,(p+\nabla w_{p}(y,\omega))\big)=0 \quad\text{in } \mathbb{R}^{d},\tag{4.2}$$
the only source of randomness comes from the coefficient *A*_{η}(*y*,*ω*). Therefore, if one knows the law of this coefficient, then one knows the law of the corrector function *w*_{p}(*y*,*ω*), and therefore may compute the homogenized coefficient, the latter being a function of this law. When the law of *A*_{η} has an expansion in terms of a small coefficient, so has the law of *w*_{p}. Consequently, the homogenized coefficient can be obtained as an expansion. Heuristically, on the cube *Q*_{N}=[0,*N*]^{d} and at order 1 in *η*, the probability of obtaining the perfect periodic material (entirely modelled by the matrix *A*_{per}) is (1−*η*)^{N^{d}} ≈ 1−*N*^{d}*η*+*O*(*η*^{2}), whereas the probability of obtaining the unperturbed material on all cells except one (where *A*_{η}=*A*_{per}+*C*_{per}) is *N*^{d}(1−*η*)^{N^{d}−1}*η* ≈ *N*^{d}*η*+*O*(*η*^{2}). All other configurations, with two or more cells perturbed, yield contributions of order higher than or equal to *η*^{2}. Intuitively (and this intuition can be turned into a mathematical proof), the first-order correction indeed comes from the difference between the material that is perfectly periodic except on one cell and the perfect material itself,
$$A^{\star}_{\eta}=A^{\star}_{\mathrm{per}}+\eta\,\bar{A}^{\star}_{1}+O(\eta^{2}),\tag{4.3}$$
where $A^{\star}_{\mathrm{per}}$ is the homogenized matrix for the unperturbed periodic material and
$$\bar{A}^{\star}_{1}=\lim_{N\to\infty}A_{1,\star,N},$$
with
$$A_{1,\star,N}\,e_{i}=\int_{Q_{N}}\big(A_{\mathrm{per}}+\mathbf{1}_{Q}C_{\mathrm{per}}\big)\big(e_{i}+\nabla w^{1,N}_{e_{i}}\big)-A_{\mathrm{per}}\big(e_{i}+\nabla w^{0}_{e_{i}}\big),\tag{4.4}$$
where $w^{0}_{e_{i}}$ is the corrector for *A*_{per} (i.e. the solution to (2.1)) and $w^{1,N}_{e_{i}}$ solves
$$-\operatorname{div}\Big(\big(A_{\mathrm{per}}+\mathbf{1}_{Q}C_{\mathrm{per}}\big)\big(e_{i}+\nabla w^{1,N}_{e_{i}}\big)\Big)=0 \quad\text{in } Q_{N},\qquad w^{1,N}_{e_{i}}\ Q_{N}\text{-periodic}.$$

The approach has been extensively tested. It is observed that, using the perturbative approach, the large *N* limit is already very well approached for small values of *N*. The computational efficiency of the approach is clear: solving the two *periodic* problems with coefficients *A*_{per} and *A*_{per}+**1**_{Q}*C*_{per} on a domain of limited size is much less expensive than solving the original, *random* corrector problem, which requires a much larger domain.

When the second-order term is needed, configurations with two defects have to be computed. These configurations can all be seen as a family of PDEs, parametrized by the geometrical location of the defects. Reduced basis techniques have been shown in [19] to allow for a definite speed-up in this computation.

### (b) Variance reduction

We consider again the setting defined by (4.1), except that, now, the parameter *η* of the Bernoulli law is *not* taken as small. The expansion technique employed in §4a is therefore inaccurate. It can however serve for the construction of a control variate, which is useful to reduce the variance.

Determining the field *A*(*x*,*ω*), given by (4.1), on the truncated domain *Q*_{N} amounts to drawing the Bernoulli variables $B^{\eta}_{k}(\omega)$ in each cell *Q*+*k* in *Q*_{N}. This allows us to compute the associated (approximate) homogenized coefficient $A^{\star}_{N}(\omega)$ from the solution to the corrector problem (4.2) truncated on *Q*_{N}. In parallel with this task, we *reconstruct*, from the specific realization of the set of variables $B^{\eta}_{k}(\omega)$, a field that is used as a control variate. More precisely, we set
$$\widetilde{A}^{\star}_{N}(\omega)=A^{\star}_{N}(\omega)-\rho\,\Big(Y_{N}(\omega)-\mathbb{E}\big[Y_{N}\big]\Big).\tag{4.5}$$
In this formula,
$$Y_{N}(\omega)=A^{\star}_{\mathrm{per}}+\sum_{k\in\mathbb{Z}^{d}\cap Q_{N}}B^{\eta}_{k}(\omega)\,\big(A^{k}_{1,N}-A^{\star}_{\mathrm{per}}\big),$$
where $A^{k}_{1,N}$ is the deterministic coefficient corresponding to the case of *one* defect located at position *k* in *Q*_{N} (it is actually independent of *k* and equal to *A*_{1,⋆,N} defined by (4.4)). The parameter *ρ* in (4.5) is a deterministic parameter, a classical ingredient of control variate techniques, which is optimized in terms of the estimated variances of the objects at play. It is crucial to note that the expectation of *Y*_{N} is analytically computable. Because, by construction, $\mathbb{E}\big[\widetilde{A}^{\star}_{N}\big]=\mathbb{E}\big[A^{\star}_{N}\big]$, the technique then consists of approximating the former (thus the latter) by an empirical mean. The theoretical study and the numerical tests in [12] show that the variance of $\widetilde{A}^{\star}_{N}$ is smaller than that of $A^{\star}_{N}$, and hence that the quality of the approximation is improved.
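The mechanics of (4.5) can be sketched in Python in the same one-dimensional harmonic-mean surrogate as in §3. Note that, for simplicity, the sketch does *not* use the one-defect expansion of [12] as control variate: as a cruder stand-in, it uses the spatial average of the coefficient, whose expectation is known in closed form, and estimates the variance-minimizing parameter by the classical formula *ρ* = Cov(*X*,*Y*)/Var(*Y*). All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, eta = 3.0, 20.0, 0.4
N, M = 50, 2000

# the surrogate's expectation is analytically computable, as required of a control variate
EY = eta * beta + (1 - eta) * alpha

X = np.empty(M)   # quantity of interest: 1D truncated homogenized coefficient
Y = np.empty(M)   # cheap correlated surrogate with known expectation
for m in range(M):
    a = np.where(rng.random(N) < eta, beta, alpha)
    X[m] = 1.0 / np.mean(1.0 / a)   # harmonic mean over Q_N
    Y[m] = np.mean(a)               # arithmetic mean over Q_N

# variance-minimizing deterministic parameter rho = Cov(X, Y) / Var(Y)
rho = np.cov(X, Y)[0, 1] / np.var(Y, ddof=1)
# controlled estimator: unbiased by construction, since E[Y - EY] = 0
Z = X - rho * (Y - EY)
```

The empirical mean of `Z` then replaces that of `X`; since `X` and `Y` are strongly correlated, the residual variance of `Z` is much smaller.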

As an illustration, we use a case similar to that in §3, namely (3.1) with *α*=3 and *β*=23. This case falls within the framework (4.1), with a Bernoulli parameter *η* that is not small. This is hence not a perturbative setting. Applying the above strategy based on (4.5) provides the results shown in figure 4, where the variance is reduced by a factor close to 6, that is, a factor comparable to that of the technique of antithetic variables.

It is also possible to use a second-order expansion with respect to *η* in (4.3), and include in the control variate both terms, namely the deterministic coefficients corresponding to the case of one and two defects in *Q*_{N}. Here, one needs additional parameters playing the role of *ρ* above, in order to ensure substantial variance reduction (see the details in [12]). The variance reduction of such a case, of the order of 40, is represented in figure 5.

## 5. Special quasi-random structures

The variance reduction approach of which we now give an overview was originally introduced, for a slightly different purpose, in atomistic solid-state science [20–22]. It carries the name SQS, the abbreviation of *special quasi-random structures*. The approach was adapted to the homogenization context in [13,23], to which we refer the reader for a more detailed presentation.

### (a) Motivation and formal derivation of special quasi-random structure conditions

In order to convey to the reader the intuition of the original approach, we first consider here a simple one-dimensional setting, which illustrates the difficulties of a generic problem. We consider a linear chain of atomistic sites occupied by two species *A* and *B* that interact via nearest-neighbour interaction potentials *V*_{AA}, *V*_{AB} and *V*_{BB}.

In order to compute the energy per unit particle of that atomistic system, one has to consider all possible such infinite sequences and, for each of them, its normalized energy
$$E\big((X_{i})_{i\in\mathbb{Z}}\big)=\lim_{N\to\infty}\frac{1}{2N+1}\sum_{i=-N}^{N-1}V_{X_{i}X_{i+1}},\tag{5.1}$$
where *X*_{i} denotes the species present at the *i*th site for that particular configuration. The ‘energy’ of the system is then defined as the *expectation* of (5.1) over all possible configurations. The approach introduced in [20–22] consists of *selecting* specific truncated configurations (*X*_{i})_{−N≤i≤N} of atomic sites that satisfy statistical properties usually obtained only in the limit of infinitely large *N*.

The first such statistical property is the volume fraction, namely the proportion of species (*A*,*B*) present on average: one only considers truncated sequences (*X*_{i})_{−N≤i≤N} that *exactly* reproduce that volume fraction. Similarly, one may only consider truncated sequences (*X*_{i})_{−N≤i≤N} that, in addition to exhibiting the exact volume fraction, have an average energy *equal* to the exact expected energy of the system. And so on and so forth for other quantities of interest.

Mathematically, this *selection* of suitable configurations among all possible configurations amounts to replacing the computation of an expectation by that of a *conditional* expectation.
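A toy Python version of this selection step for the *A*/*B* chain might read as follows; the pair energies in `V` are hypothetical values chosen only for illustration, the chain is finite, and the selection criterion enforced is the exact 50/50 volume fraction.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20   # number of sites in the finite (truncated) chain
# hypothetical nearest-neighbour pair energies V_AA, V_AB, V_BB
# (species A is encoded as 0, species B as 1)
V = {(0, 0): -1.0, (0, 1): -0.3, (1, 1): -0.5}

def energy_per_bond(X):
    # normalized nearest-neighbour energy of the finite chain, in the spirit of (5.1)
    return sum(V[tuple(sorted((X[i], X[i + 1])))] for i in range(len(X) - 1)) / (len(X) - 1)

# draw configurations, then keep only those matching the exact 50/50 volume fraction
draws = [rng.integers(0, 2, n) for _ in range(500)]
selected = [X for X in draws if X.sum() == n // 2]
energies = [energy_per_bond(X) for X in selected]
```

Further SQS conditions (matching the expected energy, pair correlations, and so on) would simply add more predicates to the selection filter.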

The above simplistic model can of course be replaced by more elaborate models, with more sophisticated quantities to compute, and more demanding statistical quantities to condition the computations with. The bottom line of the approach remains the same, and we now describe its adaptation so as to construct a variance reduction approach for numerical random homogenization.

To start with, we assume that the matrix-valued random coefficient *A* present in (1.1) reads as
$$A_{\eta}(x,\omega)=C_{0}(x,\omega)+\eta\,\chi(x,\omega)\,C_{1}(x,\omega),\tag{5.2}$$
for some presumably small scalar coefficient *η*, and where we assume that *C*_{0} and *C*_{1} are two stationary, coercive, uniformly bounded matrix fields, that *C*_{0}−*C*_{1} is coercive, and that *χ* is a stationary scalar field with values in [−1,1]. Under these assumptions, the matrix *A*_{η} is stationary, bounded and coercive, uniformly with respect to *ω*. Because *η* is small, *A*_{η} is intuitively a perturbation of the matrix-valued field *C*_{0}(*x*,*ω*).

As already performed above, we may expand all quantities of homogenization theory in powers of the small parameter *η*. In particular, the approximations of the corrector gradient and of the homogenized matrix on the truncated domain *Q*_{N} can be expanded in powers of *η*,
$$\nabla w^{N}_{\eta,p}=\nabla w^{N}_{0,p}+\eta\,\nabla w^{N}_{1,p}+\eta^{2}\,\nabla w^{N}_{2,p}+\cdots\qquad\text{and}\qquad A^{\star}_{\eta,N}(\omega)=A^{\star}_{0,N}+\eta\,A^{\star}_{1,N}(\omega)+\eta^{2}\,A^{\star}_{2,N}(\omega)+\cdots.\tag{5.3}$$
Inserting these two expansions in (2.5) and (2.4), one can easily see that the zeroth-order term $A^{\star}_{0,N}$ is deterministic, and that the random variables $A^{\star}_{1,N}(\omega)$ and $A^{\star}_{2,N}(\omega)$ are given by explicit formulae involving *χ*, *C*_{0}, *C*_{1} and the successive corrector terms $w^{N}_{j,p}$ (see [13,23]).

In line with the motivation we mentioned above in the context of solid-state science, we are now in a position to introduce the conditions that we use to select particular configurations of the environment within *Q*_{N}.

For finite fixed *N*, we say that a configuration *ω* satisfies the SQS conditions of order up to *k* if, for any 0≤*j*≤*k*, the coefficient $A^{\star}_{j,N}(\omega)$ of the expansion (5.3) exactly matches the corresponding coefficient $A^{\star}_{j}$ of the analogous expansion of the exact homogenized matrix $A^{\star}_{\eta}$. More explicitly, we speak about the SQS condition of

— order 0 if $A^{\star}_{0,N}=A^{\star}_{0}$, that is to say, for any *p* ∈ ℝ^{d},
$$p^{T}A^{\star}_{0,N}\,p=p^{T}A^{\star}_{0}\,p;\tag{5.4}$$

— order 1 if $A^{\star}_{1,N}(\omega)=A^{\star}_{1}$, that is to say, for any *p* ∈ ℝ^{d},
$$p^{T}A^{\star}_{1,N}(\omega)\,p=p^{T}A^{\star}_{1}\,p;\tag{5.5}$$

— order 2 if $A^{\star}_{2,N}(\omega)=A^{\star}_{2}$, that is to say, for any *p* ∈ ℝ^{d},
$$p^{T}A^{\star}_{2,N}(\omega)\,p=p^{T}A^{\star}_{2}\,p.\tag{5.6}$$

It is easily observed that, using such particular configurations that satisfy the SQS conditions of order up to *k*, we have, in the perturbative setting considered here,
$$A^{\star}_{\eta,N}(\omega)=A^{\star}_{\eta}+O\big(\eta^{k+1}\big).\tag{5.7}$$
Taking the expectation over such configurations therefore formally provides a more accurate approximation of . Of course, the purpose is to apply the approach *beyond* the perturbative setting. A property such as (5.7) cannot be expected any longer, because the homogenized matrix *A*^{⋆} is no longer a series in a small coefficient that encodes a perturbation. Nevertheless, it can be expected that selecting the configurations using these conditions may improve the approximation, in particular by reducing the variance.

To make the computation of the right-hand sides of the above conditions practical (because, in theory, they can only be determined using an asymptotic limit, and are therefore as challenging to compute in practice as *A*^{⋆} itself), we restrict the generality of our setting. We assume that, in (5.2), *C*_{0}(*x*,*ω*)=*C*_{0} is a deterministic *constant* matrix, *C*_{1}(*x*,*ω*)=*C*_{1}(*x*) is a deterministic ℤ^{d}-periodic matrix, and that $\chi(x,\omega)=\sum_{k\in\mathbb{Z}^{d}}\mathbf{1}_{Q+k}(x)\,X_{k}(\omega)$, where the *X*_{k}(*ω*) are identically distributed, not necessarily independent, bounded random variables. For the sake of simplicity, we also make an additional structural assumption on the law of the *X*_{k},
and refer to [13,23] for more general cases. After a tedious but not complicated calculation (the detail of which is provided in [13,23]), we obtain that the two conditions (5.5) and (5.6) can be rewritten as
5.8
and
5.9
respectively. In these expressions, *ϕ*_{1} is the (unique up to the addition of a constant) periodic solution to
an auxiliary cell problem, whereas the second auxiliary function appearing in (5.9) is the (unique up to the addition of a constant) solution to
The conditions (5.8) and (5.9) are called the SQS 1 and SQS 2 conditions. On the other hand, in the particular setting chosen, condition (5.4) (SQS 0, in some sense) is easily seen to be systematically satisfied when *N* is an integer and the truncated approximation of (2.3) that is chosen is the periodic approximation (2.5).

### (b) Selection Monte Carlo sampling

The classical Monte Carlo sampling consists of successively generating a random configuration *ω*_{m}, solving the truncated corrector problem (2.5) for that configuration, computing $A^{\star}_{N}(\omega_{m})$, and finally computing the empirical mean $\frac{1}{M}\sum_{m=1}^{M}A^{\star}_{N}(\omega_{m})$ as an approximation of *A*^{⋆}.

In our selection Monte Carlo sampling, we systematically test whether the generated configuration satisfies the required SQS conditions, up to a certain tolerance, and reject it if it does not, *before* solving the corrector problem (2.5) for that configuration and letting it contribute to the empirical mean.

In full generality, the cost of Monte Carlo approaches is usually dominated by the cost of draws, and, therefore, selection algorithms are targeted to reject as few draws as possible. In contrast, in the present context where boundary value problems such as (2.5) are to be solved repeatedly, the cost of draws for the configuration is negligible compared with the cost of the solution procedure for such boundary value problems. Likewise, evaluating the quantities present in (5.8) and (5.9) is inexpensive. Therefore, the purpose of the selection mechanism is to limit the number of boundary value problems to be solved, even though this comes at the (tiny) price of rejecting many configurations. We also note that, as for any selection procedure, our selection may introduce a bias (i.e. a modification of the systematic error in (2.6)). The point is to ensure that the gain in variance dominates the bias introduced by the selection approach.
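The selection step can be sketched in Python, once again in the illustrative one-dimensional Bernoulli surrogate where the truncated homogenized coefficient is the harmonic mean over *Q*_{N}. Here the condition enforced is of SQS 1 type: the exact volume fraction of the two phases. In one dimension this single condition already fixes the harmonic mean, hence the truncated coefficient, exactly, so the variance of the selected estimator collapses; this is a dimension-one artefact, but it illustrates the mechanism, and it also shows why rejected draws are cheap compared with corrector solves.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, eta = 3.0, 20.0, 0.5   # illustrative values; N * eta is an integer
N, M = 50, 200

def a_star(a):
    # 1D surrogate for the truncated homogenized coefficient (harmonic mean)
    return 1.0 / np.mean(1.0 / a)

def draw_plain(rng):
    return np.where(rng.random(N) < eta, beta, alpha)

def draw_selected(rng):
    # rejection step: draws are inexpensive, corrector solves are not, so only
    # configurations with the exact empirical volume fraction are accepted
    while True:
        a = draw_plain(rng)
        if np.count_nonzero(a == beta) == round(N * eta):
            return a

plain = np.array([a_star(draw_plain(rng)) for _ in range(M)])
selected = np.array([a_star(draw_selected(rng)) for _ in range(M)])
```

In higher dimensions the selected estimator keeps a genuine residual variance, and further conditions such as (5.9) are enforced on top of the volume fraction.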

We have studied the approach theoretically in [13,23]. It is shown therein that the estimator provided by (at least the simplest variant of) our approach converges towards the homogenized coefficient *A*^{⋆} when the truncated domain converges to the whole space. The efficiency of the approach is also theoretically demonstrated in some particular, simple situations (such as the one-dimensional setting). A comprehensive experimental study of the approach has been completed. In particular, because it is often necessary to enforce the desired conditions only up to some tolerance, we have investigated in [13,23] how this tolerance affects the quality of the approximation and the efficiency of the approach. We have observed that the approach is robust in this respect.

We include here a typical illustration of the efficiency of the approach. We again use a case similar to that in §3, namely (3.1), for another choice of the values *α* and *β*. Considering only configurations that exactly satisfy (5.8), we obtain the results shown in figure 6. It is also possible, among the configurations that exactly satisfy (5.8), to select configurations that satisfy the condition (5.9) as closely as possible. In practice, we generate 2000 configurations that exactly satisfy (5.8) and select among them the 100 configurations for which the difference between the left- and right-hand sides of (5.9) is the smallest. We then obtain the results shown in figure 7.

In the case considered here (for which the contrast in the field *A* is equal to 3), the variance is reduced by a factor of 20 when using configurations that exactly satisfy (5.8), and by a factor of 300 if (5.9) is also enforced. To compare this variance reduction approach with the two previous ones, it is however necessary to consider a case for which the contrast in *A* is similar to that of §§3 and 4. In that case, the variance is reduced by a factor of 9 when using configurations that exactly satisfy (5.8), and by a factor of 60 if (5.9) is also enforced.

In all the test cases we have considered (see [13,23] for details), we have observed that the systematic error is kept approximately constant by the approach (it might even be reduced), whereas the variance is reduced by several orders of magnitude. Such an efficiency is achieved at almost no additional cost with respect to the classical Monte Carlo algorithm.

## 6. Related issues and further research

The studies we have reviewed above on different variance reduction approaches definitely show that such approaches may be very beneficial in the context of random homogenization, improving the accuracy while essentially preserving the computational cost. Their efficiency, measured as the actual ratio between the variance of a quantity computed with a direct Monte Carlo approach and that of the same quantity computed using the variance reduction approaches, varies depending upon the amount of information that one has on the problem and that one inserts into the specific variance reduction approach. The antithetic variable approach, a quite generic approach that can be put into action almost without any prior knowledge of the problem being considered, already reduces the variance by one order of magnitude, say, in the best case scenarios. Control variate and special quasi-random structures, both approaches that require exploiting some information on the problem, perform much better. Their efficiency may typically be one order of magnitude larger.

Of course, the efficiency of *all* approaches is sensitive to the contrast present in the original multi-scale problem. In a schematic manner, one may say that the efficiency is, approximately, inversely proportional to the contrast. This is an issue, because practically relevant multi-scale problems may present a high contrast. Fortunately, there is room for improvement in the approaches, and several ideas, some of which have already been explored in other contexts in the engineering sciences and some of which have not, remain to be pursued.

Among possible tracks for further research, we wish to cite a couple of alternative control variate approaches.

A first possible track consists of considering *nonlinear* convex stochastic homogenization problems (such as those considered in [11]), and using a corresponding *linear* problem either as a control variate (in the spirit of the approaches presented in §4) or as a way to select particular configurations (as in §5). We do not detail here the precise construction of this linear model, but rather focus on how to use it in practice. Let *W*^{⋆} be the homogenized energy density of the nonlinear stochastic homogenization problem, and $W^{\star}_{N}(\omega)$ be its approximation computed by considering the nonlinear cell problem on the bounded domain *Q*_{N}. Let $A^{\star}_{N}(\omega)$ be the truncated homogenized matrix of the corresponding linear problem. Our aim is to use $A^{\star}_{N}$ as a control variate for $W^{\star}_{N}$. Note however that we do not know the expectation of $A^{\star}_{N}$, and hence we cannot directly use a Monte Carlo algorithm on the random variable
$$W^{\star}_{N}(\omega)-\rho\,\Big(A^{\star}_{N}(\omega)-\mathbb{E}\big[A^{\star}_{N}\big]\Big).$$
However, computing $A^{\star}_{N}$ is expected to be less expensive than computing $W^{\star}_{N}$, because the corrector problem in the former case is linear, whereas it is nonlinear in the latter case. A natural idea is thus to replace, in the above relation, the expectation $\mathbb{E}\big[A^{\star}_{N}\big]$ by an empirical mean. This leads us to approximate $\mathbb{E}\big[W^{\star}_{N}\big]$ by a mean of the form
$$\frac{1}{M}\sum_{m=1}^{M}\left[W^{\star}_{N}(\omega_{m})-\rho\left(A^{\star}_{N}(\omega_{m})-\frac{1}{\widetilde{M}}\sum_{\widetilde{m}=1}^{\widetilde{M}}A^{\star}_{N}\big(\widetilde{\omega}_{\widetilde{m}}\big)\right)\right],$$
where *M*, $\widetilde{M}$ (which we expect to be much larger than *M*) and *ρ* are chosen to minimize the variance of the approximation for a given computational cost.

A second track for further research is to use the so-called *bounds*, which are routinely employed in mechanics, in order to build a control variate approach. Given the computational cost for obtaining approximations of *A*^{⋆}, practitioners indeed sometimes choose to avoid computing the actual homogenized coefficients (by solving (2.4) and (2.5)) and concentrate on *bounds* (namely the Reuss, Voigt, Hashin–Shtrikman bounds, …) on the homogenized matrix *A*^{⋆}.

For the sake of illustration, let us briefly review the derivation of the so-called Voigt bound. We assume that the random coefficient *A* is a symmetric matrix. This assumption is critically used in what follows, and more generally in the derivation of many bounds. Under this assumption, the matrix $A^{\star}_{N}(\omega)$, defined by (2.4), satisfies, for any *p*,
$$p^{T}A^{\star}_{N}(\omega)\,p=\inf\left\{\frac{1}{|Q_{N}|}\int_{Q_{N}}\big(p+\nabla v(y)\big)^{T}A(y,\omega)\,\big(p+\nabla v(y)\big)\,\mathrm{d}y,\quad v\in H^{1}_{\mathrm{per}}(Q_{N})\right\},$$
and hence, by choosing *v*=0 in the above minimization problem, we obtain that
$$p^{T}A^{\star}_{N}(\omega)\,p\leq\frac{1}{|Q_{N}|}\int_{Q_{N}}p^{T}A(y,\omega)\,p\,\mathrm{d}y.$$
The average of *A*(⋅,*ω*) over *Q*_{N} hence provides an upper bound (in the sense of quadratic forms) on $A^{\star}_{N}(\omega)$, which is the so-called Voigt bound.
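This bound is easy to check numerically in the one-dimensional Bernoulli surrogate used in the previous sections: there, the exact truncated homogenized coefficient is the harmonic mean of the coefficient over *Q*_{N}, which is precisely the Reuss (harmonic-average) lower bound, while the Voigt bound is the arithmetic average. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, eta = 3.0, 20.0, 0.3   # illustrative two-phase field
N = 200

a = np.where(rng.random(N) < eta, beta, alpha)
a_star_N = 1.0 / np.mean(1.0 / a)   # exact truncated homogenized coefficient in 1D
voigt = np.mean(a)                  # Voigt bound: spatial (arithmetic) average over Q_N
reuss = 1.0 / np.mean(1.0 / a)      # Reuss bound: harmonic average, attained in 1D
```

The classical chain of inequalities Reuss ≤ *A*^{⋆}_{N} ≤ Voigt thus degenerates, in one dimension, to an equality on the left.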

In the specific case of two-phase composite materials (made of two phases denoted 1 and 2), where the random coefficient is given, with obvious notation, by
$$A(x,\omega)=\chi(x,\omega)\,A_{1}+\big(1-\chi(x,\omega)\big)\,A_{2},$$
where *χ* is the characteristic function of phase 1, more elaborate bounds have been proposed, including the so-called Hashin–Shtrikman bounds. We refer, for example, to [3] for more details. The idea we are currently pursuing is to use these bounds not as an approximation of $A^{\star}_{N}$, but as a control variate.

## Authors' contributions

All authors have contributed equally to the work.

## Competing interests

The authors declare that they have no competing interests.

## Funding

The work of C.L.B. and F.L. is partially supported by EOARD under grant no. FA8655-13-1-3061 and by ONR under grant no. N00014-12-1-0383.

## Acknowledgements

The authors thank all their collaborators on the issues presented here and related issues, in particular W. Minvielle (Ecole des Ponts and INRIA).

## Footnotes

One contribution of 11 to a theme issue ‘Trends and challenges in the mechanics of complex materials’.

- Accepted October 14, 2015.

- © 2016 The Author(s)

Published by the Royal Society. All rights reserved.