## Abstract

We study the perturbative power series expansions of the eigenvalues and eigenvectors of a general tridiagonal (Jacobi) matrix of dimension *d*. The (small) expansion parameters are the entries of the two diagonals of length *d*−1 sandwiching the principal diagonal that gives the unperturbed spectrum.

The solution is found explicitly in terms of multivariable (Horn-type) hypergeometric series in 3*d*−5 variables in the generic case. To derive the result, we first rewrite the spectral problem for the Jacobi matrix as an equivalent system of algebraic equations, which are then solved by the application of the multivariable Lagrange inversion formula. The corresponding Jacobi determinant is calculated explicitly. Explicit formulae are also found for any monomial composed of the eigenvector's components.

## 1. Introduction

The problem of solving algebraic equations by series expansions has a long history. In the case of a single polynomial equation, a major cornerstone is the work of Birkeland (1927), where a solution is given through an application of the Lagrange inversion formula (Andrews *et al*. 2000). Later on, an alternative approach was suggested by Mayr (1937), who derived the relevant hypergeometric series as solutions of some PDEs satisfied by the zeros as functions of the coefficients of the polynomial. A modern interpretation in terms of A-hypergeometric functions (Gel'fand *et al*. 1989, 1990, 1994) can be found in Sturmfels (2000). The latter approach can also be applied to general systems of algebraic equations.

The main goal of this paper is to derive a complete power series solution of the spectral problem for a finite (*d*×*d*) Jacobi matrix *M*. We consider the off-diagonal matrix elements to be small, so that the whole problem looks like a perturbation of a diagonal matrix. After fixing a normalization of the eigenvector, the problem is reduced to solving a system of *d quadratic* equations for *d* unknowns. We then transform it into an equivalent *larger* system of 3*d*−5 special (Lagrange-form) *cubic* equations, which are then inverted by the application of the multivariable Lagrange inversion formula. The expansion of an arbitrary monomial in the components of the eigenvector is given explicitly in terms of multivariable (Horn-type) hypergeometric series in 3*d*−5 variables. In the special case of an eigenvalue growing from a corner matrix element, the number of expansion variables drops to 2*d*−3.

Consider a tridiagonal (Jacobi) matrix of order *d*(1.1)and the corresponding eigenproblem *MV*=*ΛV* for the eigenvector **V** and the eigenvalue *Λ*.

Assume that the off-diagonal elements *β*_{k} and *γ*_{k} are the small parameters of the power expansion and that *α*_{k} are distinct. In the zeroth approximation *β*_{k}=*γ*_{k}=0, the matrix *M* is diagonal, its eigenvalues and eigenvectors being *α*_{k} and *V*^{(k)}, respectively, where the components of *V*^{(k)} can be chosen as . By a continuity argument, for small values of *β*_{k} and *γ*_{k}, the eigenvalues *Λ*_{k}, *k*=1, …, *d*, are distinct and can be numbered in such a way that(1.2)We also choose to normalize the eigenvector *V*^{(k)} (1.3) by imposing the condition(1.4)on its *k*th component. Therefore, the remaining components must vanish in the zeroth approximation(1.5)
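As a quick numerical illustration of (1.2)–(1.5), one can check that for small off-diagonal entries the eigenvalues stay close to the *α*_{k} and each eigenvector admits the normalization (1.4). This is a sketch with hypothetical matrix entries, assuming the standard Jacobi layout with *β* on the superdiagonal and *γ* on the subdiagonal (the display (1.1) is not reproduced above):

```python
import numpy as np

# Hypothetical example: d = 4, distinct diagonal alpha, off-diagonals of size eps.
d = 4
alpha = np.array([1.0, 2.5, 4.0, 7.0])
rng = np.random.default_rng(0)
beta = rng.uniform(0.5, 1.0, d - 1)   # superdiagonal (assumed layout)
gamma = rng.uniform(0.5, 1.0, d - 1)  # subdiagonal (assumed layout)
eps = 1e-3

M = np.diag(alpha) + eps * (np.diag(beta, 1) + np.diag(gamma, -1))

eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(eigvals.real)      # alpha is increasing, so sorting realizes (1.2)
eigvals, eigvecs = eigvals[order].real, eigvecs[:, order].real

# (1.2): Lambda_k -> alpha_k as eps -> 0
assert np.allclose(eigvals, alpha, atol=1e-5)

# (1.4)/(1.5): after normalizing V^{(k)}_k = 1, the remaining components are O(eps)
for k in range(d):
    V = eigvecs[:, k] / eigvecs[k, k]
    assert np.all(np.abs(np.delete(V, k)) < 10 * eps)
print("eigenvalues:", eigvals)
```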

The eigenvalue problem (1.3) together with the normalization condition (1.4) produces a system of *d* algebraic (quadratic) equations for the eigenvalue *Λ*_{k} and the free components , *j*≠*k*, of the eigenvector *V*^{(k)}, defining them as algebraic functions of the parameters *α*, *β* and *γ*. The conditions (1.2) and (1.5) fix uniquely the branches of the multivalued algebraic functions for small values of *β* and *γ*.

The problem we solve in the present paper is to find an effective way to construct explicit expressions for the coefficients of the power series expansions for the eigenvalues *Λ*_{k} and the components of the eigenvectors . The available literature on solving systems of algebraic equations by multivariate hypergeometric series (see a review in Sturmfels 2000) focuses mainly on solving the generic algebraic system. We are not aware of any detailed analysis of the particular systems arising from the Jacobi matrix spectral problem. The importance of the latter problem for numerous applications in mathematical physics and, in particular, the theory of quantum integrable systems has been the main motivation of our study. Rather than using more modern approaches, in the present paper we follow the original idea of Birkeland (1927) and use a variant of the Lagrange inversion formula.

We shall use the following variant of the multivariable Lagrange inversion theorem (Good 1960; Gessel 1987). Let boldface letters denote vectors , multi-indices  and monomials . The inequality **q**≥**p** is understood component-wise: *q*_{i}≥*p*_{i}, ∀*i*. Let  denote the coefficient at *ξ*^{**q**} in the power series *h*(**ξ**).

*ξ**Let* . *Let* , *be formal power series in η, such that ϕ*

_{i}(

**0**)≠0∀

*i. Then the system of D equations*(1.6)

*defines uniquely*,

*as formal power series in*.

**ξ***In addition, let Χ*(** η**)

*be a multiple Laurent series, i.e*.

*is a power series for some*.

*Then the Laurent series expansion for Χ*(

**(**

*η***)),(1.7)**

*ξ**is given by the formula*(1.8)

*where J is the Jacobian*(1.9)

The analytic version of the Lagrange theorem (Good 1960) guarantees that if  and  are analytic at 0, then the Laurent series (1.7) converges in a punctured neighbourhood around **ξ**=**0**.

When applying the Lagrange formula (1.8), the major complication comes from the Jacobian *J*, which may be difficult to compute. Fortunately, for our particular problem, the Jacobian can be calculated explicitly in a relatively compact form.
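The structure of (1.6)–(1.8) can be tested in the simplest one-variable case *D*=1, where the Jacobian reduces to a scalar. A classical instance (our own illustration, not taken from the paper) is *φ*(*η*)=e^{*η*}: the solution of *η*=*ξ*e^{*η*} is the tree function, with [*ξ*^{n}]*η* = *n*^{n−1}/*n*!, which the fixed-point iteration below recovers with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

# One-variable sanity check of the Lagrange inversion setup (1.6)-(1.8):
# solve eta = xi * exp(eta) as a formal power series by fixed-point iteration
# and compare with the classical coefficients [xi^n] eta = n^(n-1)/n!
# (phi(eta) = exp(eta), so phi(0) = 1 != 0 as the theorem requires).
N = 8  # truncation order

def mul(a, b):
    """Product of two truncated power series (lists of coefficients)."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def exp_series(a):
    """exp of a truncated series with a[0] = 0."""
    result = [Fraction(0)] * N
    result[0] = Fraction(1)
    term = [Fraction(0)] * N
    term[0] = Fraction(1)
    for k in range(1, N):
        term = mul(term, a)  # a^k, lowest degree k, so k < N suffices
        for i in range(N):
            result[i] += term[i] / factorial(k)
    return result

# each iteration eta -> xi * exp(eta) fixes one more order of the series
eta = [Fraction(0)] * N
for _ in range(N):
    e = exp_series(eta)
    eta = [Fraction(0)] + e[:N - 1]  # multiplication by xi is a shift

for n in range(1, N):
    assert eta[n] == Fraction(n ** (n - 1), factorial(n))
print([str(c) for c in eta[1:]])
```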

The paper is organized as follows. In §2 we write down the set of quadratic equations defining the eigenvalue *Λ* and the components *V*_{j} of the eigenvector ** V**, transform them into the form that is convenient for study, identify the combinations of the small parameters which serve as the expansion variables and rewrite the equations again in the form that allows us to apply Lagrange's inversion formula. In §3 we calculate an important component of Lagrange's formula: the Jacobian

*J*.

In §4 we put together all the ingredients of the Lagrange formula and produce explicit expressions for all *Λ*_{k}s and s as finite sums of power series. The number of terms in the sum equals to the number of terms in the Jacobian *J*. All the power series are particular cases of a single universal Horn-type multivariable hypergeometric series, which we denote by *Φ*. In generic situation, the series *Φ* depends on 3*d*−5 expansion variables and *d*−1 integer parameters. In §5 we describe the simplification of our results for the special case *k*=1 (or *k*=*d*) when the eigenvalue *Λ* stems from a corner of the matrix *M*. In §6 we examine a few low-dimensional examples illustrating the general results. Section 7 contains a discussion of possible applications and extensions of our result.

## 2. Lagrange equations

From now on we shall concentrate on studying a single eigenvalue *Λ*_{k} and the corresponding eigenvector *V*^{(k)}, for some fixed value of the index *k* (1.2). We shall change our notation accordingly, to simplify the calculations. Set *r*≡*d*−*k* and , so that(2.1)

Let , and , for , and and , for , so that and . Respectively, let and , for , and and , for . Without loss of generality, we can set . The normalization condition (1.4) implies that . Since *α*_{j} are assumed to be distinct, we have *a*_{j}≠0 and , for *i*≠0.

As a result, the eigenvalue problem (1.3) takes the following form:(2.2)The numbers  measure the distances of the selected diagonal element from the corners of the matrix. The cases  are slightly special; we shall comment on them in due course. All other cases, with , are generic.

Expressing *λ* from the ‘central’ (‘zeroth’) row as(2.3)and substituting it into the remaining rows, we get the set of  (see (2.1)) quadratic equations(2.4a)(2.4b)where we assume that , *b*_{r}=0 and . Formula (2.3) eliminates the eigenvalue *λ* by expressing it in terms of two components of the eigenvector: *v*_{1} and . From now on, the variables *v*_{i} and  will be our only *d*−1 unknowns.
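The elimination step (2.3) is easy to verify numerically: with the eigenvector normalized so that the selected ‘central’ component equals 1, the corresponding row of *MV*=*ΛV* expresses the eigenvalue through the two neighbouring components alone. A sketch with hypothetical entries (the sub/superdiagonal naming is an assumption of this sketch):

```python
import numpy as np

# Check the analogue of (2.3): row k of M V = Lambda V, with V[k] normalized
# to 1, gives Lambda = alpha_k + (subdiag)*V[k-1] + (superdiag)*V[k+1].
d, k = 5, 2  # 0-based index of the selected diagonal entry
alpha = np.array([1.0, 3.0, 5.0, 8.0, 12.0])
b = 0.01 * np.array([1.0, 2.0, 1.5, 0.5])  # superdiagonal (assumed layout)
c = 0.01 * np.array([0.7, 1.1, 0.9, 1.3])  # subdiagonal (assumed layout)
M = np.diag(alpha) + np.diag(b, 1) + np.diag(c, -1)

w, U = np.linalg.eig(M)
j = np.argmin(np.abs(w - alpha[k]))       # branch with Lambda -> alpha_k
Lam, V = w[j].real, (U[:, j] / U[k, j]).real

# row k of the eigenproblem, solved for Lambda
Lam_from_row = alpha[k] + c[k - 1] * V[k - 1] + b[k] * V[k + 1]
assert abs(Lam - Lam_from_row) < 1e-12
print(Lam, Lam_from_row)
```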

Note that equations (2.4*a*) and (2.4*b*) are invariant with respect to the ‘involution’ (rotation by 180° around the central element)(2.5)which we refer to as the *tilde-symmetry*.

The next step is to rescale the variables and in order to identify the convenient combinations of the expansion parameters. Setting and and denoting the corresponding values of and as, respectively, and , we get the equations(2.6a)(2.6b)which can be solved recursively, yielding(2.7a)

(2.7b)

Let us rescale and by the formulae(2.8a)(2.8b)and rewrite equations (2.4*a*) and (2.4*b*) in terms of , as(2.9a)(2.9b)where we assume and .

Equations (2.9*a*) and (2.9*b*) contain the parameters in specific combinations, which are convenient to use as the expansion parameters. Note that and enter only through the product (same for ). Introduce the variables , and ,(2.10a)(2.10b)(2.10c)(altogether variables) as well as their tilde-analogues(2.11a)(2.11b)(2.11c)(altogether variables). The total number of variables is then .

Equations (2.9*a*) and (2.9*b*) are simplified now to the form(2.12a)(2.12b)where we assume and . Since are small parameters, so are . In the zeroth approximation, we have a set of binomial equations and . The iteration of equations (2.12*a*) and (2.12*b*) with the initial values produces formal power series expansions of *u*_{i} and in the variables .
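The order-by-order stabilization of such an iteration can be seen in a one-variable model. The example below (our illustration, not the actual system (2.12)) iterates a generic binomial-type equation from its zeroth-approximation value and recovers the Catalan numbers:

```python
from fractions import Fraction

# Iterating a Lagrange-type equation from the zeroth-approximation value
# builds the power series order by order. One-variable model: u = 1 + xi*u^2,
# iterated from u = 1; the stabilized coefficients are the Catalan numbers.
N = 8
u = [Fraction(1)] + [Fraction(0)] * (N - 1)
for _ in range(N):
    # truncated square of the series u
    sq = [sum(u[i] * u[n - i] for i in range(n + 1)) for n in range(N)]
    u = [Fraction(1)] + sq[:N - 1]  # 1 + xi * u^2, truncated at order N

catalan = [1, 1, 2, 5, 14, 42, 132, 429]
assert [int(coef) for coef in u] == catalan
print(u)
```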

The tilde-symmetry (2.5) for equations (2.12*a*) and (2.12*b*) takes the form(2.13)

*Important remark*. The variables **ξ** are, generally speaking, not independent. Indeed, the 3*d*−5 variables are bound by the relations(2.14)(altogether  independent relations), which leaves only 2*d*−3 independent variables. Only in the special case *r*=0 (or ) (see the end of this section) do the variables **ξ** become independent. Nevertheless, when solving equations (2.12*a*) and (2.12*b*) by power series, it is convenient to treat **ξ** as a set of 3*d*−5 independent variables. The expressions (2.10*a*), (2.10*b*), (2.10*c*), (2.11*a*), (2.11*b*) and (2.11*c*) for **ξ** in terms of the original parameters can then be substituted into the resulting expansions at the very final stage.

In order to find explicitly the coefficients of the power series for *u*_{i} and , we shall use the Lagrange inversion formula. In the notation of theorem 1.1, the expansion variable vector **ξ** is composed of the six sets of variables . To meet the theorem's premises, we also have to introduce the vector **η** composed, respectively, of the following six matching sets of expandable quantities(2.15a)(2.15b)(2.15c)

Note that the variables  cannot be used directly as **η** because they have non-zero limits for **ξ**→**0**; therefore, they have to be completed by small factors. Besides, the number of these variables is only , so we need 2*d*−4 extra variables to match the number *D*=3*d*−5 of expansion variables. Actually, it is more convenient to split the number *D* differently: , with the first set of 2*d*−2 variables for the larger system coming only from the two variables *u*_{1} and . Define *s*_{i}, *t*_{i},  and  as(2.16a)(2.16b)The second set of *d*−3 remaining variables is defined as(2.17a)(2.17b)

It might seem easier to define *w*_{i} to be proportional to *u*_{i} rather than to the ratio of two *u*s with the step 2. However, a little experimenting shows that our choice of *w*_{i} leads to simpler, factorized expressions for the functions *ϕ*_{i}(** η**) in (1.6).

From , it follows that(2.18a)(2.18b)Solving equations (2.17*a*), (2.17*b*), (2.18*a*) and (2.18*b*) recursively, we obtain the following expressions for in terms of *w*_{j} and for in terms of (2.19a)

(2.19b)

**Proposition 2.1.** *The set of equations* (2.12*a*), (2.12*b*), (2.16*a*), (2.16*b*), (2.17*a*) *and* (2.17*b*) *is equivalent to the set of Lagrange-type equations* (1.6),(2.20a)(2.20b)*where*(2.21a)(2.21b)(2.22a)(2.22b)

In the notation of theorem 1.1, we have(2.23)

Consider equation (2.12*a*) for *u*_{i+1},(2.24)Note that  by virtue of (2.17*a*). Having rearranged the terms, we get(2.25)From (2.16*a*), it follows that  and . Therefore,(2.26)Multiplying the equalities (2.26) for the indices *i* and *i*+1, we get(2.27)It remains to replace  with  from the equality (2.17*a*), and we obtain an equality of the form  (see (2.20*a*)), where(2.28)and it is assumed that .

The special case of equation (2.12*a*) for *i*=1,(2.29)has to be treated separately. Similarly to the general case, we replace with using (2.18*a*), and substitute from (2.16*a*) and from (2.16*b*). Then we replace with , using both (2.16*a*) and (2.16*b*). As a result, the equation takes the form(2.30)or, equivalently, the form (see (2.20*a*) for *i*=0), where(2.31)From (2.16*a*) and (2.16*b*), it also follows that(2.32)The remaining half of the equations is obtained by the tilde-symmetry (2.13). ▪

As was said before, the cases of a corner eigenvalue, i.e. when *r*=0 or , and the next one, i.e. when *r*=1 or , are slightly special. When one applies the formulae from the above theorem describing the generic case, i.e. when both , to such cases, one has to remember that

- for a corner eigenvalue, say , there are no tilded variables and there are also *no* variables , so that in this case we have only  variables; therefore, one must disregard (2.22*a*) and (2.22*b*) entirely and set , , in (2.21*a*) and (2.21*b*); and
- for the immediately next eigenvalue, say , all variables are present but *w*, so that one must set  in (2.22*a*) and disregard (2.22*b*) entirely; there are  variables in this case.

All these modifications are easily seen from the expressions (2.10*a*), (2.10*b*), (2.10*c*), (2.11*a*), (2.11*b*) and (2.11*c*) of the small parameters in terms of the initial small parameters . For a more detailed study of the special cases, see §5.

## 3. Jacobian

In this section only, we use an ordering of the variables different from the one used above: ,  and .

**Theorem 3.1.** *The Jacobian J defined by* (1.9) *can be expressed as*(3.1)*where*(3.2)*and*(3.3)

Here, we adopt the following convention: whenever a sum has its upper limit smaller than the lower one, its value is taken to be 0; similarly, any such product is taken to be 1. The tildes in (3.1) refer to replacing *s*_{j}, *t*_{j}, *w*_{j} and *r* by their tilded versions. As always, we assume that  and  (see proposition 2.1).

Note that the number of terms in *J* (3.1) is (recall that and ).

Consider the following rows of the matrix (3.4)(3.5)(3.6)(3.7)(3.8)(3.9)The first lines of each of the first four subsets above, which correspond to the variables  (3.4), (3.5), (3.6) and *t*_{0} (3.7), can be replaced by simpler versions (without changing the determinant) by adding a multiple of the row associated with  (3.8) or *w*_{0} (3.9). They become as follows:

After the above replacement, the matrix acquires the following block matrix form, where the matrices *B*, *C* and *D* are(3.10)

Therefore, the calculation of the Jacobian has been reduced to finding an explicit formula for the determinant of a smaller, , matrix *D*−*CB*. Subtracting the product *CB* changes only two rows of the triangular matrix *D*, namely those associated with the variables  and *w*_{0}; they now become  and , respectively. The rest of the rows of the matrix *D*−*CB* are the same as in the matrix *D* above, i.e. they all have only two non-zero elements: one on the diagonal and one to its left. By adding multiples of the columns, without changing the determinant, we can make all those non-diagonal entries in the selected rows vanish, allowing us to compute the determinant by reducing it down to a 2×2 determinant.

Indeed, multiply the last column of the matrix *D*−*CB* (see formula (3.10)) by *w*_{r−2} and add the result to the penultimate column. The last row now has only one (diagonal) element. Hence, we keep the factor 1/(1+*w*_{r−2}) and reduce the determinant to its minor, removing the last column and the last row. By repeating this process, we end up with the expression  where . This is equivalent to the statement of theorem 3.1. ▪
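The determinant reduction used above is an instance of the Schur-complement identity: for a block matrix whose upper-left block is the identity, det [[*I*, *B*], [*C*, *D*]] = det(*D*−*CB*). A random-matrix sketch of this identity:

```python
import numpy as np

# Schur-complement determinant identity with an identity upper-left block:
# det([[I, B], [C, D]]) = det(D - C B),
# the step that reduces the full Jacobian matrix to the smaller matrix D - CB.
rng = np.random.default_rng(1)
n, m = 5, 3
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

full = np.block([[np.eye(n), B], [C, D]])
lhs = np.linalg.det(full)
rhs = np.linalg.det(D - C @ B)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print(lhs, rhs)
```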

## 4. Hypergeometric series

Now we have all the ingredients for the right-hand side of the Lagrange formula (1.8) and can calculate the expansion coefficients. In this section we return to the initial ordering , and .

Let us choose the function *Χ*(** η**) in (1.8) to be a Laurent monomial , .

Define the step function *σ*_{ij} as(4.1)We shall use the binomial, trinomial and quadrinomial coefficients defined for integers *m*, *n*, *p* as(4.2a)(4.2b)Note that the multinomial coefficients are evaluated as(4.3a)(4.3b)for ,  and , and vanish if *m*<0, *n*<0 or *p*<0.
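A minimal sketch of the vanishing convention for these coefficients, assuming the standard factorial form (*m*+*n*+*p*)!/(*m*!*n*!*p*!) (the displayed definitions (4.2)–(4.3) are abbreviated above):

```python
from math import factorial

def trinomial(m, n, p):
    """(m+n+p)! / (m! n! p!), with the convention that the coefficient
    vanishes if m < 0, n < 0 or p < 0."""
    if m < 0 or n < 0 or p < 0:
        return 0
    return factorial(m + n + p) // (factorial(m) * factorial(n) * factorial(p))

def binomial(m, n):
    """(m+n)! / (m! n!) as the p = 0 special case."""
    return trinomial(m, n, 0)

assert trinomial(1, 1, 1) == 6       # 3! / (1! 1! 1!)
assert trinomial(2, -1, 0) == 0      # a negative lower index kills the term
print(binomial(4, 2), trinomial(2, 2, 1))
```

The vanishing convention is what lets sums over shifted indices run over all integers, as used in the proof of theorem 4.2 below.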

Let , etc. and set(4.4)

We shall also assume that . In the rest of the paper we shall frequently present formulae for the untilded quantities only, assuming, unless otherwise stated, that the tilded versions are obtained by tilde-symmetry.

*The expansion of the monomial* *in* *is given by*(4.5)*where*(4.6)(4.7a)(4.7b)(*and respective tilded versions*).

Using the definitions (2.21*a*), (2.21*b*), (2.22*a*) and (2.22*b*), for , we get the following expression, where we assume .

Using the shorthand notation (4.4), we can write down  in compact form as

Using the step function *σ*_{ij} (4.1), we rewrite the expressions (3.2) for the ingredients of the Jacobian *J* in the following equivalent form:

Substituting the above expressions for *Χ*, *ϕ*^{q} and *J* into the right-hand side of the Lagrange formula (1.8), we get(4.8)where(4.9)(4.10)and and are their tilded versions. In the above formulae it is assumed that .

It remains to take the coefficient at the monomial  in the resulting expression for . Using the binomial and trinomial expansions (4.2*a*), we get the expressions (4.7*a*) and (4.7*b*) for the expansion coefficients  and . The final expression (4.6) for  then follows immediately. ▪

Formula (4.6) shows that the monomial  expands in  as a finite sum of Laurent series (the number of terms equals the number of terms in the Jacobian *J*). These Laurent series have a uniform structure and can in fact all be expressed in terms of a single standard series.

Let us introduce the function  of *D*=3*d*−5 complex parameters **ξ**, depending on two integer vectors  and ,  integer parameters altogether,(4.11)

Note that, despite the appearance,  does not factorize into tilded and untilded factors, owing to (4.4).

For , the coefficients of the series (4.11) can be written down in terms of the Pochhammer symbols as(4.12)where , etc.

Formula (4.12) characterizes *Φ* as a generalized Appell–Horn-type series: the ratios of adjacent coefficients of the power series are rational functions of the indices (Erdélyi 1953). Note that *Φ*(0)=1, and the series converges in a neighbourhood of 0 (Erdélyi 1953). The function *Φ* can also be viewed as an A-hypergeometric function in the sense of Gel'fand *et al*. (1989, 1990, 1994)—we plan to elaborate on this remark in a subsequent paper.
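The Horn property—ratios of adjacent coefficients rational in the indices—can be checked in the one-variable prototype, the Gauss series ₂F₁ (our illustration; *Φ* itself is multivariable):

```python
from fractions import Fraction
from math import factorial

# Horn-type characterization in one variable: the Gauss series
# 2F1(a, b; c; xi) has coefficients (a)_n (b)_n / ((c)_n n!), and the ratio of
# adjacent coefficients, (a+n)(b+n) / ((c+n)(1+n)), is rational in n.
def poch(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

def gauss_coeff(a, b, c, n):
    return poch(a, n) * poch(b, n) / (poch(c, n) * factorial(n))

a, b, c = Fraction(1), Fraction(1), Fraction(2)
# sanity check: 2F1(1,1;2;xi) = -log(1-xi)/xi has n-th coefficient 1/(n+1)
for n in range(8):
    assert gauss_coeff(a, b, c, n) == Fraction(1, n + 1)
    assert (gauss_coeff(a, b, c, n + 1) / gauss_coeff(a, b, c, n)
            == (a + n) * (b + n) / ((c + n) * (1 + n)))
print([str(gauss_coeff(a, b, c, n)) for n in range(8)])
```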

**Theorem 4.2.** *The expansion of the monomial*  *in*  *can be expressed in terms of the function*  *as follows*:(4.13)*where*(4.14)(4.15)(4.16)*and respectively for the tilded versions. In* (4.16) *it is assumed that* .

Note that the sum over  in (4.5) can be safely replaced by the sum over , since the multinomial coefficients in (4.7*a*) and (4.7*b*) vanish for negative values of the lower indices and thus automatically select the correct limits of summation. Then from (4.5) and (4.6), we can express  as a finite sum, with each term corresponding to a term in the expansion (3.1) of the Jacobian *J*,(4.17)

(4.18)

Consider a single term of the above sum. From (4.7*a*), it follows that the summation can be restricted to the values of the indices for which the multinomial coefficients do not vanish: ,  and , respectively for . Let us shift the summation indices, , , , etc., so that the summation runs over ,(4.19)

(4.20)

In doing so, we have used formulae (4.7*a*), (4.14) and the identity . Note that shifting *m*_{j} in produces the term , which is consistent with the convention .

Applying the identity(4.21)to (4.20), we get another expression for ,(4.22)

Repeating the same steps for and and collecting together the obtained expressions, we identify the result as (4.13). ▪

The expansions we are ultimately interested in are those of the monomials in the components *v*_{j} and  of the eigenvector **V** (2.2), or of their rescaled versions *u*_{j} and  (2.8*a*) and (2.8*b*).

**Proposition 4.3.** *Let* , , *and respectively for* . *The expansion of the monomial*  *in*  *is given by the formula*(4.23)*where Φ*, ,  *and ν are the same as in theorem 4.2, and*(4.24)

Using formulae (2.19*a*) and (2.19*b*), we can express *u*_{j} as monomials in *x*_{0}, *s*_{0}, *z* and *w*. Thus,(4.25)where(4.26)(4.27)(similarly for the tilded variables). Substituting these values of into the formula (4.16), we get (4.24). Note that . It remains to apply theorem 4.2 to the monomial (4.25). ▪

To obtain the expansion of the monomial  in , one needs to express the components *v*_{i} of the eigenvector **V** in terms of **u** using (2.8*a*) and (2.8*b*), then apply proposition 4.3 and, finally, express **ξ** in terms of  using (2.10*a*), (2.10*b*), (2.10*c*), (2.11*a*), (2.11*b*) and (2.11*c*). To get the expansions involving the eigenvalue *λ*, one has to use (2.3). In particular, the resulting expansions for *u*_{i} or  give an explicit solution for the iterations of equations (2.12*a*) and (2.12*b*).

In this paper we study only formal power series expansions. The analytic version of the Lagrange theorem (Good 1960) guarantees that our power series converge in an open neighbourhood around **ξ**=**0**. The precise description of the convergence domain of a multivariate power series is, however, a difficult task (e.g. Erdélyi 1953; Gel'fand *et al*. 1994), and we leave it for further study.

## 5. Corner eigenvalue

As mentioned at the end of §2, the cases of the corner eigenvalue *r*=0 () have some peculiarities. Let us consider the case  in more detail (the case *r*=0 can be treated by the tilde-symmetry). We have(5.1)

The tilded variables  are absent. Besides, the sequences *y*_{i} and, respectively, *t*_{i} are also absent. The total set of variables **ξ**=(*x*, *z*) has cardinality , and all the expansion variables are independent, in contrast with the generic case. Respectively, **η**=(*s*, *w*) and **φ**=(*f*, *h*), where(5.2)

The expression (3.1) for the Jacobian *J* simplifies to(5.3)and thus contains only *r*+1=*d* terms. The analogue of formula (4.8) is(5.4)where(5.5)

The expression (4.6) for the expansion coefficient becomes(5.6)where(5.7)with *p*_{−1}=|*m*|.

The expansion of the monomial in is given by the analogue of formula (4.23)(5.8)where and are the same as in theorem 4.2; ** π** is given by (4.24); and is defined as the series(5.9)or, for ,(5.10)

## 6. Examples

In this section we illustrate our general results with a few low-dimensional examples.

### (a) The simplest case is *d*=2 (*r*=1, )

The spectral problem is(6.1)

After eliminating the eigenvalue and introducing the rescaled variables , for the single unknown variable *u*, we get the single quadratic equation(6.2)

The branch we are studying is selected by the condition for , and the corresponding solution is(6.3)

In terms of the variable , we get the equivalent Lagrange-type equation(6.4)

The corresponding Jacobian *J* has two terms,(6.5)

The expansion of the monomial , , given by proposition 4.3 is(6.6)where(6.7)

For , the function can be expressed in terms of the Gauss hypergeometric series(6.8)

As a matter of fact, for , the two hypergeometric series in (6.6) can be summed into a single series(6.9)
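The branch condition and the leading term of the *d*=2 expansion can be checked numerically (the matrix layout [[α₁, β₁], [γ₁, α₂]] is an assumption of this sketch): the eigenvalue tending to α₁ satisfies Λ = α₁ + β₁γ₁/(α₁−α₂) + O((β₁γ₁)²), the standard perturbation result for this branch:

```python
import numpy as np

# d = 2 branch check: for M = [[a1, b1], [g1, a2]] (layout assumed), the
# eigenvalue with Lambda -> a1 as b1, g1 -> 0 expands as
# Lambda = a1 + b1*g1/(a1 - a2) + O((b1*g1)^2).
a1, a2 = 1.0, 3.0
eps = 1e-3
b1, g1 = 1.3 * eps, 0.7 * eps

M = np.array([[a1, b1], [g1, a2]])
w = np.linalg.eigvals(M)
Lam = w[np.argmin(np.abs(w - a1))].real   # select the branch near a1

first_order = a1 + b1 * g1 / (a1 - a2)
assert abs(Lam - first_order) < (b1 * g1) ** 2  # error is O((b1*g1)^2)
print(Lam, first_order)
```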

### (b) Three-dimensional case, corner eigenvalue: *d*=3 (*r*=2, )

The spectral problem,(6.10)after the substitutions , and is reduced to the pair of quadratic equations for the variables *u*_{1}, *u*_{2},(6.11a)(6.11b)where the expansion variables are(6.12)and we choose the branch as .

The equivalent Lagrange-type equations for the variables , and are(6.13)where(6.14)

The three-term Jacobian *J* equals(6.15)

By formula (5.8), the monomial expands in *x*_{0}, *x*_{1} and *z*_{0} as(6.16)where(6.17)or, for ,(6.18)

### (c) Three-dimensional case, middle eigenvalue: *d*=3 ()

The spectral problem,(6.19)after the substitutions , and is reduced to the pair of quadratic equations for the variables *u*, ,(6.20a)(6.20b)where the expansion variables are(6.21)and the chosen branch is as . Note that the variables are bound by the single relation

The equivalent Lagrange-type equations for the variables *s*=*xu*, , and are(6.22)where(6.23)

The five-term Jacobian *J* equals(6.24)

By formula (4.23), the monomial expands in as(6.25)(6.26)where, for ,(6.27)or, for ,(6.28)

## 7. Discussion

In the present paper we have solved a problem of much physical interest. The spectra of finite Jacobi matrices appear in many applications: from orthogonal polynomials and nearest-neighbour interaction models to solvable models of quantum mechanics (Lamé polynomials and Bethe ansatz).

What is left for further study are the questions of convergence domains, differential equations and integral representations for the obtained hypergeometric series, as well as their relation to A-hypergeometric functions introduced by Gel'fand *et al*. (1989, 1990, 1994). The approach used in our work can be generalized to multiparameter spectral problems (Sleeman 1978).

Explicit perturbative solutions have also appeared in a different context in the works by Langmann (2004*a*,*b*), notably for the multidimensional spectral problems related to the Calogero–Sutherland and elliptic Calogero–Moser systems.

## Acknowledgments

This work has been partially supported by the European Community through the FP6 Marie Curie RTN *ENIGMA* (contract no. MRTN-CT-2004-5652).

## Footnotes

One contribution of 15 to a Theme Issue ‘30 years of finite-gap integration’.

- © 2007 The Royal Society