## Abstract

A high-speed algorithm for computing fractional differentiations and fractional integrations in fractional differential equations is proposed. In this algorithm, the stored data are not the function to be differentiated or integrated but the weighted integrals of the function. The intervals of integration for the memory can be increased without loss of accuracy as the computing time-step *n* increases. The computing cost varies as *n* log *n*, as opposed to the *n*^{2} of standard algorithms.

## 1. Introduction

Fractional differentiations and integrations are now being widely applied in various fields such as physics, industry and other areas [1,2]. Many systems are described by differential equations that include fractional differentiations and fractional integrations. The Riemann–Liouville definition for the *q*(>0)th order fractional time differentiation of a function *f*(*t*) is given by [1,3,4]
$$ {}_aD_t^{q}f(t)=\frac{1}{\Gamma(n_q-q)}\,\frac{d^{n_q}}{dt^{n_q}}\int_a^t (t-\tau)^{n_q-q-1}f(\tau)\,d\tau, \qquad (1.1) $$

where the integer *n*_{q} satisfies the inequality *n*_{q}−1≤*q*<*n*_{q}, and *Γ*(*z*) is the gamma function. The Caputo derivative is defined by

$$ {}_a^{C}D_t^{q}f(t)=\frac{1}{\Gamma(n_q-q)}\int_a^t (t-\tau)^{n_q-q-1}\,\frac{d^{n_q}f(\tau)}{d\tau^{n_q}}\,d\tau. \qquad (1.2) $$

These fractional derivatives are defined as the derivatives taken after or before a fractional integration, defined by

$$ {}_aD_t^{-p}f(t)=\frac{1}{\Gamma(p)}\int_a^t (t-\tau)^{p-1}f(\tau)\,d\tau, \qquad (1.3) $$

where *p*>0.
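The fractional integral can be checked numerically. The sketch below is a minimal product-trapezoidal evaluation of a fractional integral of order *p* with lower terminal *a*=0 (the kernel power is integrated exactly on each panel, so the weak singularity at *τ*=*t* causes no trouble); the function name and discretization are illustrative assumptions, not the algorithm proposed in this paper.

```python
import math

def frac_integral(f, p, t, n=1000):
    # Riemann-Liouville fractional integral of order p > 0 at time t (a = 0):
    #   (1/Gamma(p)) * Int_0^t (t - tau)^(p-1) f(tau) dtau,  via u = t - tau.
    # f is approximated piecewise linearly; u^(p-1) is integrated exactly.
    h = t / n
    total = 0.0
    for j in range(n):
        u0, u1 = j * h, (j + 1) * h
        f0, f1 = f(t - u0), f(t - u1)
        slope = (f1 - f0) / h
        I0 = (u1 ** p - u0 ** p) / p                        # Int u^(p-1) du
        I1 = (u1 ** (p + 1) - u0 ** (p + 1)) / (p + 1) - u0 * I0
        total += f0 * I0 + slope * I1
    return total / math.gamma(p)
```

For example, the fractional integral of *f*(*t*)=1 is *t*^{p}/Γ(*p*+1), and that of *f*(*t*)=*t* is *t*^{p+1}/Γ(*p*+2); the sketch reproduces both, exactly for these piecewise-linear inputs.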

Some fractional differential equations (FEs) can be solved analytically with the use of integral transform methods such as the Laplace and Fourier transformations. However, in many cases numerical methods are necessary, especially for nonlinear FEs. Here we come to the difficulty inherent in fractional differentiation. The fractional derivative of *f*(*t*) includes the integration of *f*(*τ*) from *a* to *t*, so the entire history of *f*(*τ*) between the time *a* and the current time *t* is needed to calculate it. Owing to this property, the computing cost of solving an FE for *n* time steps by a numerical method increases in proportion to *n*^{2}. This property prevents researchers from solving large-scale FEs, such as those describing the dynamics of continuous media with fractional derivatives.

In order to avoid this difficulty, Yuan &amp; Agrawal [5] and Agrawal [6] proposed a method in which a fractional derivative is decomposed into a set of first-order differential equations. It is well known that fractional differentiation arises as a special limit of the generalized Maxwell model. Thus, they approximated the fractional derivative by a set of simultaneous first-order differential equations with adequately distributed decay constants. Singh &amp; Chatterjee [7] followed this line with the method of Galerkin projection. However, a large number of first-order equations with a wide range of decay constants is necessary in order to solve FEs for large *t*.

There is another method to calculate fractional derivatives. Ford &amp; Simpson [8] proposed a method known as the adapted time step or the nested time step. Based on the scaling property of the integration in a fractional derivative, the intervals of the data points of *f*(*τ*) are increased with the distance from *t*. The adapted time step has been applied to viscoelastic problems [9] and to a fractional diffusion equation [10]. This method has also been applied to fractional space derivatives [11]. Though this method is effective for monotonic functions, its applicability to oscillating functions is limited. For such functions, the intervals of time steps should be sufficiently smaller than the periods of oscillation in order to recover the functional forms in sufficient approximation. Podlubny *et al*. [12] proposed a method of large steps in their paper on variable time steps. However, at present their method seems to be effective only when the function takes a simple form in those time steps.

In this paper, we propose a new method of calculating fractional derivatives and fractional integrals in FEs. It will be shown that if the weighted integrals of *f*(*τ*) are stored as the memory instead of *f*(*τ*) itself, the intervals of data points for integration can be increased even beyond the periods of oscillation without loss of the information on *f*(*τ*). As a consequence, the number of data points necessary for the calculation of fractional derivatives and fractional integrals can be drastically reduced.

In §2, with the use of the Riemann–Liouville derivative, we explain how the fractional derivatives or the fractional integrals are expressed by the weighted integrals of *f*(*t*). The crucial point is to separate the kernel (*t*−*τ*)^{*n*_{q}−*q*−1} in equation (1.1) into the product of functions of *t* and functions of *τ*. An example of calculating the weighted integral of *f*(*τ*) is given in §3. It will be shown that the computing cost for solving FEs for *n* time steps increases in proportion to *n* log *n*, if the new method for calculating equation (1.1) is used. Two FEs are solved in §4 to demonstrate the validity of the new method. The method is explained for the Riemann–Liouville derivative. However, the present method is directly applicable to the Caputo derivative and the fractional integral.

## 2. Possibility of algorithm for fractional differential equations

In this section, we explain the idea of decreasing the memory in calculating fractional derivatives and fractional integrals. This is accomplished by decreasing the number of divisions in calculating the integrals. We show one possibility of enlarging the interval of integration depending on the distance from the current time *t* with the use of equation (1.1). This idea is similar to that of Ford &amp; Simpson [8]. However, the present model is different from their model in the treatment of the scaling property of the kernel of integration. The basic idea is that weighted integrals of *f*(*τ*) are stored as the memory in equation (1.1) instead of *f*(*τ*) itself.

For this purpose, the variable *τ* in the kernel (*t*−*τ*)^{*n*_{q}−*q*−1} has to be separated from *t*. Such separation is possible, as will be shown below. Equation (1.1) is divided into two parts by a point *c* in the interval (*a*,*t*) as

$$ {}_aD_t^{q}f(t)=\frac{1}{\Gamma(n_q-q)}\,\frac{d^{n_q}}{dt^{n_q}}\left[\int_a^c (t-\tau)^{n_q-q-1}f(\tau)\,d\tau+\int_c^t (t-\tau)^{n_q-q-1}f(\tau)\,d\tau\right]. \qquad (2.1) $$

The point *c* varies with *t*. Therefore, the derivative of the integral of the first term on the right-hand side in equation (2.1) with respect to *t* consists of the derivative of the integrand and the derivative of the upper limit of integration. However, the latter is exactly cancelled by the derivative of the lower limit of integration of the second term on the right-hand side of equation (2.1) (see appendix A). Therefore, we may treat *c* as a constant in calculating equation (2.1).

Based on the discussion given above, the first term on the right-hand side of equation (2.1), denoted by *I*_{c}*f*(*t*), can be replaced by the following integral:

$$ I_c f(t)=\frac{1}{\Gamma(-q)}\int_a^c (t-\tau)^{-q-1}f(\tau)\,d\tau. \qquad (2.2) $$

In the classical method of numerical integration, the interval [*a*,*c*] is divided into sufficiently small intervals. The integral over one such small interval is given by (*b*∈(*a*,*c*] and Δ>0; figure 1)

$$ I_{\Delta}f(b)=\int_{b-\Delta}^{b} (t-\tau)^{-q-1}f(\tau)\,d\tau, \qquad (2.3) $$

where *b*−Δ≥0. Then the integrand is replaced by an approximated form with the use of, for example, the trapezoidal rule. However, this method requires a number of stored data points that grows in proportion to the number of time steps.

There is another method of calculating equation (2.3) in FEs. The kernel (*t*−*τ*)^{−*q*−1} in the integral is expanded around *b* as

$$ (t-\tau)^{-q-1}=\sum_{k=0}^{\infty}\binom{-q-1}{k}(t-b)^{-q-k-1}(b-\tau)^{k}. \qquad (2.4) $$

This expression converges if |*τ*−*b*|<*t*−*b*. Substituting equation (2.4) into equation (2.3) and writing *τ*=*b*−*u*, we have

$$ I_{\Delta}f(b)=\int_0^{\Delta}\sum_{k=0}^{\infty}\binom{-q-1}{k}(t-b)^{-q-k-1}u^{k}f(b-u)\,du. \qquad (2.5) $$

Because the sum with respect to *k* in equation (2.5) converges absolutely, the order of summation and integration can be interchanged. It should be noted that the integration of equation (2.5) with respect to *u* has been separated from *t*. Therefore, the factor (*t*−*b*)^{−*q*−*k*−1} can be moved out of the integration as

$$ I_{\Delta}f(b)=\sum_{k=0}^{\infty}\binom{-q-1}{k}(t-b)^{-q-k-1}\int_0^{\Delta}u^{k}f(b-u)\,du. \qquad (2.6) $$

Equation (2.6) shows that the integral *I*_{c}*f*(*t*) in equation (2.2) is calculated through the weighted integrals *I*_{Δ,k}*f*(*b*) defined by

$$ I_{\Delta,k}f(b)=\int_0^{\Delta}u^{k}f(b-u)\,du. \qquad (2.7) $$

It is also noted that *I*_{Δ,k}*f*(*b*) does not include *t*. Therefore, *I*_{Δ,k}*f*(*b*) can be calculated as soon as *t* exceeds *b* (figure 1). The width of the interval Δ is limited by the convergence condition of the series in equation (2.6) as

$$ \frac{\Delta}{t-b}<1. \qquad (2.8) $$

Equation (2.6) is not practical for the numerical calculation of equation (2.2), because it includes an infinite sum. Therefore, the sum in equation (2.6) is truncated at *k*=*K* as

$$ I_{\Delta}^{(K)}f(b)=\sum_{k=0}^{K}\binom{-q-1}{k}(t-b)^{-q-k-1}I_{\Delta,k}f(b). \qquad (2.9) $$

This expression gives an approximation of *I*_{Δ}*f*(*b*). The truncation error is estimated to be

$$ \bigl|I_{\Delta}f(b)-I_{\Delta}^{(K)}f(b)\bigr|=\|f(\tau)\|\,\Delta\,(t-b)^{-q-1}\,O\!\left(\left(\frac{\Delta}{t-b}\right)^{K+1}\right), \qquad (2.10) $$

where ∥*f*(*τ*)∥ is the upper bound of |*f*(*τ*)| and *O*(⋅) is the Landau symbol.
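The separation of *t* from the stored data can be verified numerically: once the weighted integrals (2.7) for a subinterval are stored, the contribution of that subinterval can be reassembled at any later time *t*, and truncating the series at *k*=*K* costs only a small error. The helper names and the trapezoidal quadrature below are illustrative assumptions, not the paper's implementation.

```python
import math

def binom(alpha, k):
    # generalized binomial coefficient C(alpha, k) for real alpha
    c = 1.0
    for i in range(k):
        c *= (alpha - i) / (i + 1)
    return c

def moments(f, b, delta, K, n=2000):
    # weighted integrals (2.7): I_k = Int_0^delta u^k f(b - u) du (trapezoid)
    h = delta / n
    out = []
    for k in range(K + 1):
        g = [(j * h) ** k * f(b - j * h) for j in range(n + 1)]
        out.append(h * (sum(g) - 0.5 * (g[0] + g[-1])))
    return out

def from_moments(I, b, t, q):
    # truncated series (2.9): needs only the stored moments, not f itself
    return sum(binom(-q - 1, k) * (t - b) ** (-q - k - 1) * Ik
               for k, Ik in enumerate(I))

def direct(f, b, delta, t, q, n=2000):
    # reference value: Int_{b-delta}^{b} (t - tau)^(-q-1) f(tau) dtau
    h = delta / n
    g = [(t - (b - delta + j * h)) ** (-q - 1) * f(b - delta + j * h)
         for j in range(n + 1)]
    return h * (sum(g) - 0.5 * (g[0] + g[-1]))

q, b, delta = 0.5, 2.0, 0.5
I = moments(math.cos, b, delta, K=4)       # stored once, at time b
for t in (10.0, 100.0):                    # reused at any later time t
    assert abs(from_moments(I, b, t, q) - direct(math.cos, b, delta, t, q)) < 1e-6
```

Note that the moments are computed once, while *τ*=*b* is still recent, and then serve for every later *t*; this is the property that the algorithm of §3 exploits.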

The width Δ of integration in equation (2.7) can be chosen arbitrarily, provided condition (2.8) is satisfied. Therefore, the width Δ can be larger than the period of oscillation for oscillatory problems without loss of accuracy. Now the number of divisions for integration is estimated. For concreteness, the value Δ/(*t*−*b*)=*ϵ*<1 is fixed. Thus, Δ increases in proportion to *ϵ*(*t*−*b*). The number of divisions per unit length of *τ* is thus ≃1/[*ϵ*(*t*−*b*)]. The total number of divisions is derived by integrating this number per unit length of *τ*, giving ≃(1/*ϵ*) ln[(*t*−*a*)/(*t*−*c*)] for large *t*. Thus, the number of integrations *I*_{Δ,k}*f*(*b*) necessary to calculate *I*_{c}*f*(*t*) increases logarithmically with *t*. In the next section, we will give an algorithm for calculating *I*_{Δ,k}*f*(*b*) in equations (2.7) and (2.9) for solving FEs.
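The logarithmic growth of the number of divisions can be illustrated with a few lines of arithmetic: the loop below walks from *c* back toward *a* using subinterval widths Δ=*ϵ*(*t*−*b*), and compares the count with the estimate (1/*ϵ*) ln[(*t*−*a*)/(*t*−*c*)]. The variable names and parameter values are illustrative.

```python
import math

def count_divisions(a, c, t, eps):
    # number of subintervals covering [a, c] when each width is eps*(t - b),
    # b being the subinterval's right endpoint (so t - b grows geometrically)
    b, count = c, 0
    while b > a:
        b -= eps * (t - b)
        count += 1
    return count

t, a, eps = 1000.0, 0.0, 0.1
c = t - 10.0                 # the recent part (c, t] is handled separately
n_div = count_divisions(a, c, t, eps)
estimate = math.log((t - a) / (t - c)) / eps
# ~ (1/eps) ln[(t-a)/(t-c)] subintervals suffice, instead of ~ (c-a)/h of them
assert abs(n_div - estimate) <= 0.1 * estimate + 1
```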

Expression (2.9) shows that the algorithm described in this section and the next section is applicable to the Caputo derivative by dividing the interval in equation (1.2) as in equation (2.1). Then, equation (2.9) is applied to the integral over the interval [*a*,*c*] with the replacement of −*q* by *n*_{q}−*q*. This algorithm is also applicable to the fractional integral of order *p*, if equation (2.9) is applied to the interval [*a*,*c*] with the replacement of −*q* by *p*.

## 3. Algorithm of differentiation

In this section, we demonstrate a method of calculating equation (2.7), and hence equations (2.6) and (2.2). This method is applicable to calculating the fractional derivative in increasing order of *t*=*nh*∈[0,*T*], where *h*>0 is a constant. We also show that the memory necessary to calculate the fractional differentiation at the *n*th time step is reduced from *n* to *O*(log *n*). The lower terminal of the fractional derivative is fixed to *a*=0.

### (a) Algorithm of storing memory

The interval [0,*t*] in equation (1.1) with *a*=0 is divided into intervals (*c*_{s+1},*c*_{s}], *s*=1,2,…,*S*−1, and (*c*_{1},*t*] for *s*=0, where *c*_{0}=*t* (figure 2). These intervals are labelled by the right endpoint. The point *c*_{1} is identified with *c*. The label *c*_{S} is fixed to *c*_{S}=0. These intervals are further divided into smaller subintervals of width Δ_{s}. The width of the subinterval in the 0th interval (*c*_{1},*t*] is Δ_{0}=*h*. The number of subintervals in the *s*th interval is denoted by *J*_{s} (figure 3).

In the 0th interval, the values of *f*(*τ*) are stored in memory at the points *j*=0,1,2,…,*J*_{0}−1 and *j*=*J*_{0}, where *t*−*J*_{0}*h*=*c*_{1}. As for the *s*th interval, *s*=1,2,…,*S*−1, the integrations on the subintervals given by equation (2.7) are stored as memory, where Δ is replaced by Δ_{s}. Let the width of the subinterval satisfy the following condition:
$$ \frac{\Delta_s}{t-c_s}\le\epsilon<1. \qquad (3.1) $$

Then the integration (2.2) with *a*=0 is given by

$$ I_c f(t)=\frac{1}{\Gamma(-q)}\sum_{s=1}^{S-1}\left\{\sum_{j=0}^{J_s-1}\sum_{k=0}^{\infty}\binom{-q-1}{k}\bigl(t-c_s+j\Delta_s\bigr)^{-q-k-1}I_{s,k}f(c_s-j\Delta_s)\right\}, \qquad (3.2) $$

where *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}) is defined by equation (2.7) with the notation change for the present purpose:

$$ I_{s,k}f(b)=\int_0^{\Delta_s}u^{k}f(b-u)\,du, \qquad b=c_s-j\Delta_s. \qquad (3.3) $$

In equation (3.2), the sum in the braces gives the integral over the interval (*c*_{s+1},*c*_{s}].

If the summation for *k* is truncated at *k*=*K*, the sum

$$ I_c^{(K)}f(t)=\frac{1}{\Gamma(-q)}\sum_{s=1}^{S-1}\sum_{j=0}^{J_s-1}\sum_{k=0}^{K}\binom{-q-1}{k}\bigl(t-c_s+j\Delta_s\bigr)^{-q-k-1}I_{s,k}f(c_s-j\Delta_s) \qquad (3.4) $$

gives an approximation of *I*_{c}*f*(*t*). The error of *I*_{c}^{(K)}*f*(*t*) is estimated to be

$$ \bigl|I_c f(t)-I_c^{(K)}f(t)\bigr|=O\!\left(\epsilon^{K+1}\right), \qquad (3.5) $$

where *ϵ* is defined by equation (3.1).

In the rest of this section, we show a method of calculating the integrals *I*_{s,k}*f*(*b*) given by equation (3.3). We make use of the fact that each time *τ*=*b* was once in the neighbourhood of the current time *t* (figure 1). The method depends on the interval. The integral *I*_{1,k}*f*(*b*) in the 1st interval is calculated using *f*(*τ*) data in the 0th interval, when the current time *t* satisfies the condition (3.1). In the other intervals of *s*>1, *I*_{s,k}*f*(*b*) is calculated from the *I*_{s−1,k}*f*(*b*) data in the (*s*−1)th interval, when *t* satisfies the condition (3.1). The explanation is given for four characteristic conditions for *t*. We define an integer *m*≥2 by Δ_{1}=*mh* and a reference number *M*>0 as the smallest multiple of *m* that satisfies the inequality *m*/*M*<*ϵ*.

(i) In the early stage of solving an FE, in which the current time *t* satisfies the condition Δ_{1}/(*t*−Δ_{1})≥*ϵ* (or *J*_{0}<*M*+*m*), any conventional scheme of fractional differentiation with a constant time-step width may be adopted. Thus, the integral *I*_{s,k}*f*(*b*) is not calculated. At this stage, *c*_{S}=*c*_{1}=0. In the following, the explanation is given for *m*=2, so the condition *J*_{0}<*M*+*m* becomes *J*_{0}<*M*+2.

(ii) Let *t* satisfy the condition Δ_{1}/(*t*−Δ_{1})<*ϵ* (or *J*_{0}=*M*+2) for the first time (figure 4*a*). At this point the following integral is calculated with the use of *f*(*τ*) at *τ*=0, *h* and 2*h* (or *j*_{0}=*J*_{0}, *J*_{0}−1 and *J*_{0}−2):

$$ I_{1,k}f(\Delta_1)=\int_0^{\Delta_1}u^{k}f(\Delta_1-u)\,du. \qquad (3.6) $$

The integral *I*_{1,k}*f*(Δ_{1}), where Δ_{1}=2*h*, is the first integral datum assigned to the *j*_{1}=0 data in the *s*=1 interval. Thus, we replace the labels as

$$ c_1:=\Delta_1. \qquad (3.7) $$

In the above expressions the symbol (:=) means substitution of the right-hand side into the left-hand side. The number *J*_{0} decreases by 2, i.e. *J*_{0}:=*M*, and *J*_{1}:=1.

As will be shown later, the condition Δ_{1}/(*t*−Δ_{1})<*ϵ* is sufficient to ensure the condition (3.1) after creating the new interval of *s*=1 with its endpoint *c*_{1}=Δ_{1} and the width of the subinterval Δ_{1}=2*h*. Similar conditions are imposed on *t* in (iii) and (iv). After *t* exceeds (*M*+2)*h*, the values of *f*(*τ*)=*f*(*t*−*jh*), *j*=0,1,…,*J*_{0}, in the 0th interval are stored in memory. These data are used to calculate the second term on the right-hand side of equation (2.1) in the usual way. They are also used to calculate the integrals *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}), *s*=1,2,…,*S*−1, *j*=0,1,…,*J*_{s}−1. The integral *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}) is calculated in two ways. For the 1st interval the method is similar to that given in (ii), as will be shown in (iii): the integral *I*_{1,k}*f*(*c*_{1}−*jΔ*_{1}) is calculated at *τ*=*c*_{1} with the use of *f*(*τ*) in the 0th interval. As for the intervals of *s*≥2, the integrals *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}) are calculated with the use of the integrals on the (*s*−1)th intervals (see (iv) below).

(iii) Let *t* satisfy the condition Δ_{1}/(*t*−*c*_{1}−Δ_{1})<*ϵ* (or *J*_{0}=*M*+2) (see figure 4*b* with *s*=1). At this point, the following integral is calculated with the use of *f*(*τ*) at *τ*=*c*_{1}, *c*_{1}+*h* and *c*_{1}+2*h* (or at *j*_{0}=*M*+2, *M*+1 and *M*, or at *j*_{0}=*J*_{0}, *J*_{0}−1 and *J*_{0}−2):

$$ I_{1,k}f(c_1+\Delta_1)=\int_0^{\Delta_1}u^{k}f(c_1+\Delta_1-u)\,du. \qquad (3.8) $$

The integral *I*_{1,k}*f*(*c*_{1}+Δ_{1}), where Δ_{1}=2*h*, is a new integral datum in the *s*=1 interval. The labels in the 0th and the 1st intervals are replaced by

$$ c_1:=c_1+\Delta_1, \qquad J_0:=J_0-2, \qquad J_1:=J_1+1. \qquad (3.9) $$

The last two substitutions reflect the change in the number of subintervals *J*_{s} (*s*=0,1) caused by the conversion of the data in the 0th interval to data in the 1st interval.

(iv) As for the *s*th interval (*c*_{s+1},*c*_{s}], *s*=2,3,…,*S*−1, and the point *τ*=0 for *s*=*S*, the integrals *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}), *j*=0,1,…,*J*_{s}−1, can be calculated with the use of the integrals in the (*s*−1)th interval. Let the current time *t* satisfy the condition Δ_{s}/(*t*−*c*_{s}−Δ_{s})<*ϵ* (or *J*_{s−1}=*M*/2+2) (figure 4*b*). At this point the following integral is calculated:

$$ I_{s,k}f(c_s+\Delta_s)=\int_0^{\Delta_s}u^{k}f(c_s+\Delta_s-u)\,du, \qquad (3.10) $$

where Δ_{s}=2Δ_{s−1}. The right-hand side of this expression is calculated with the use of the following integrals in the (*s*−1)th interval (figure 4*b*):

$$ I_{s-1,k}f(c_s+\Delta_{s-1})=\int_0^{\Delta_{s-1}}u^{k}f(c_s+\Delta_{s-1}-u)\,du \qquad (3.11) $$

and

$$ I_{s-1,k}f(c_s+2\Delta_{s-1})=\int_0^{\Delta_{s-1}}u^{k}f(c_s+2\Delta_{s-1}-u)\,du. \qquad (3.12) $$

Equation (3.10) is reduced to

$$ I_{s,k}f(c_s+\Delta_s)=I_{s-1,k}f(c_s+2\Delta_{s-1})+\sum_{k'=0}^{k}\binom{k}{k'}\Delta_{s-1}^{\,k-k'}\,I_{s-1,k'}f(c_s+\Delta_{s-1}). \qquad (3.13) $$

The proof is given in appendix B. When equation (3.10) is calculated, the labels in the *s*th interval are replaced by

$$ c_s:=c_s+\Delta_s, \qquad J_s:=J_s+1, \qquad J_{s-1}:=J_{s-1}-2, \qquad (3.14) $$

where the relation Δ_{s}=2Δ_{s−1} is used. The label *J*_{s} is increased by 1, whereas *J*_{s−1} is decreased by 2 so that *J*_{s−1}=*M*/2. If *s*=*S*, the subscript for the label *τ*=0 is replaced with *S*+1, i.e. *c*_{S+1}=0.

Steps (i)–(iv) complete the calculation of *I*_{s,k}*f*(*c*_{s}−*jΔ*_{s}) in equations (3.2) and (3.4) for *s*=1,2,…,*S*−1 and *j*=0,1,2,…,*J*_{s}−1 with the use of the data *f*(*τ*)=*f*(*t*−*jh*), *j*=0,1,…,*J*_{0}, that are originally in the 0th interval. The width of the subintervals is given by Δ_{s}=2^{s}*h*.
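The merging rule (3.13) can be checked directly: the moments of two adjacent subintervals of width Δ_{s−1} reproduce the moments of the doubled subinterval exactly, with no further reference to *f* itself. The helper names and the trapezoidal quadrature below are illustrative assumptions.

```python
import math

def moments(f, b, delta, K, n=2000):
    # weighted integrals I_k = Int_0^delta u^k f(b - u) du (trapezoidal rule)
    h = delta / n
    out = []
    for k in range(K + 1):
        g = [(j * h) ** k * f(b - j * h) for j in range(n + 1)]
        out.append(h * (sum(g) - 0.5 * (g[0] + g[-1])))
    return out

def merge(left, right, delta, K):
    # Combine the moments of two adjacent subintervals of width delta into the
    # moments of the doubled subinterval [b - 2*delta, b]: `right` holds I_k
    # at b and `left` holds I_k at b - delta (cf. equation (3.13)).
    merged = []
    for k in range(K + 1):
        extra = sum(math.comb(k, j) * delta ** (k - j) * left[j]
                    for j in range(k + 1))
        merged.append(right[k] + extra)
    return merged

K, b, delta = 4, 3.0, 0.25
right = moments(math.sin, b, delta, K)          # covers [b - delta, b]
left = moments(math.sin, b - delta, delta, K)   # covers [b - 2*delta, b - delta]
direct = moments(math.sin, b, 2 * delta, K)     # covers [b - 2*delta, b]
coarse = merge(left, right, delta, K)
assert all(abs(x - y) < 1e-7 for x, y in zip(coarse, direct))
```

The merge touches only the stored moments, which is why older data can be coarsened indefinitely without ever revisiting *f*(*τ*).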

### (b) Precision

In the method given in the previous section, the width Δ_{s} of the subintervals satisfies the condition Δ_{s}/(*t*−*c*_{s})<*ϵ* for *s*=1,2,…,*S*−1. The proof is as follows. As can be seen in steps (ii) and (iii), the width Δ_{1} in the 1st interval satisfies the inequality Δ_{1}/(*Mh*)=2*h*/(*Mh*)<*ϵ*. Therefore, in the 1st interval Δ_{1} satisfies the condition Δ_{1}/(*t*−*τ*)<*ϵ*, because *t*−*τ*≥*Mh*.

The integral *I*_{s,k}*f*(*c*_{s}) of *s*>1 is calculated with the use of the old *I*_{s−1,k}*f*(*c*_{s}) and *I*_{s−1,k}*f*(*c*_{s}+Δ_{s−1}) in step (iv) as *I*_{s,k}*f*(*c*_{s}+Δ_{s}), and then *c*_{s} is shifted to *c*_{s}+Δ_{s} (see equation (3.14)). The value of *c*_{s} varies with *t*=*nh*, where *n* is the time step. However, the minimum number of subintervals in the *s*th interval is fixed to *M*/2 by the assumption, whereas the width of the *s*th subinterval is Δ_{s}=2^{s}*h*. Thus, the minimum of *t*−*c*_{s} is estimated to be

$$ \min(t-c_s)=Mh+\sum_{r=1}^{s-1}\frac{M}{2}\,2^{r}h=M\,2^{s-1}h=\frac{M}{2}\Delta_s. \qquad (3.15) $$

Therefore, the ratio Δ_{s}/(*t*−*c*_{s}) is estimated to be less than or equal to 2/*M*, which is less than *ϵ*.

### (c) Memory for calculation of fractional derivatives

We have demonstrated in the previous sections an algorithm that reduces the data points and the total number of data necessary to calculate fractional derivatives. For simplicity of explanation, the data were combined every two time steps. However, this may not give the most effective timing to combine the data. In this section, we show that the number of data can be further reduced. In step (iv), the integral data at two subintervals in the (*s*−1)th interval are combined into one *I*_{s,k}*f*(*c*_{s}) in the *s*th interval. It will be shown that this is the most memory-saving method, compared with methods in which the integrals of more than two subintervals are combined into *I*_{s,k}*f*(*c*_{s}). In steps (ii) and (iii), every two time steps of *f*(*τ*) are combined into *I*_{1,k}*f*(*c*_{1}) at *c*_{1} with *t*−*c*_{1}=*Mh*. However, for *K*>0, the amount of data is reduced if *c* is identified with a point that gives a larger *t*−*c*_{1} than that given above. In other words, if the integral *I*_{1,k}*f*(*c*_{1}) is calculated on an interval of width *mh* in steps (ii) and (iii) instead of 2*h*, preserving the condition *m*/*M*<*ϵ*, then the minimum of the total data occurs at *m* larger than 2.

In order to prove the statement given above, we replace the conditions adopted in steps (ii) and (iii) with the following. Let *m*≥2 be an integer, and let the integer *M* be a multiple of *m*. When the current time *t* (or *J*_{0}) satisfies the condition *J*_{0}=*M*+*m*, *m*≥2, we calculate *I*_{1,k}*f*(*c*_{1}+Δ_{1}) with the use of *f*(*τ*) at *c*_{1}+(*j*−1)*h*, *j*=0,1,…,*m*. We also replace the condition in step (iv) with the following: if *J*_{s−1} at the (*s*−1)th interval satisfies *J*_{s−1}=*m*_{1}+*M*/*m*, *m*_{1}≥2, then the integrals at *m*_{1} subintervals, *I*_{s−1,k}*f*(*c*_{s}+*jΔ*_{s−1}), *j*=1,2,…,*m*_{1}, *k*=0,1,…,*K*, are combined into an integral *I*_{s,k}*f*(*c*_{s}+Δ_{s}) for the new subinterval in the *s*th interval. The method of combining the integrals is similar to that given in §3*a*, taking into account the number of combined data.

In these procedures, we impose a condition in order to fix the precision of the integral (2.2). The ratio of the width Δ_{s} of the subintervals to the minimum of (*t*−*c*_{s}) has to satisfy the following condition:

$$ \frac{\Delta_s}{\min(t-c_s)}\le\frac{m}{M}. \qquad (3.16) $$

This condition is satisfied if the minimum of *t*−*c*_{s} is given by

$$ \min(t-c_s)=\frac{M}{m}\Delta_s. \qquad (3.17) $$

The proof is as follows. The width Δ_{1} of the subinterval in the 1st interval is given by Δ_{1}=*mh*, because the data of *f*(*τ*) at the ends of *m* intervals are combined into one integral at *c*_{1}. The ratio Δ_{s}/Δ_{s−1} of the neighbouring subintervals is *m*_{1} for *s*>1, because the integrals of *m*_{1} intervals in the (*s*−1)th interval are combined into one integral at *c*_{s}. Thus, the width of the subintervals in the *s*th interval is given by Δ_{s}=*m* *m*_{1}^{s−1}*h* for *s*≥1. Therefore, the equality in equation (3.16) holds, if the condition (3.17) is satisfied.

The necessary data points in the interval [*c*_{s},*t*] are given with the following considerations. If *t* satisfies *t*−*c*_{s}=(*M*/*m*)Δ_{s}, the minimum width of the *r*th interval is obtained from equation (3.17) as (*m*_{1}−1)(*M*/*m*)Δ_{r} for *r*=1,2,…,*s*−1, while the width of the subinterval is Δ_{r}=*m* *m*_{1}^{r−1}*h*. Thus, the minimum number of subintervals of the *r*th interval is (*m*_{1}−1)*M*/*m*, independent of *r*. The maximum number of subintervals is (*m*_{1}−1)*M*/*m*+*m*_{1} for the *r*th interval, 0<*r*<*s*, whereas the maximum number of data in the 0th interval is *M*+*m*+1. The total amount of data, *N*_{mem}, necessary for the *K*th approximation is thus given by

$$ N_{\rm mem}=M+m+1+(K+1)(s-1)\left[\frac{(m_1-1)M}{m}+m_1\right]. \qquad (3.18) $$

If *s* is replaced by *S*, *N*_{mem} is the necessary number of memories prepared for calculation in the interval [0,*T*].

First, we show that, for fixed *m*, *N*_{mem} is minimized at *m*_{1}=2. The total number of time steps *n* in the interval [*c*_{s},*c*_{1}] is approximately given by *n*≃*M* *m*_{1}^{s−1} from equation (3.17). This value gives an approximation of the number of time steps in the interval [*c*_{s},*t*] for large *n*. The exponent *s*−1,

$$ s-1=\frac{\ln(n/M)}{\ln m_1}, \qquad (3.19) $$

is introduced into equation (3.18) to give

$$ N_{\rm mem}=M+m+1+(K+1)\left[\frac{m_1-1}{\epsilon_0}+m_1\right]\frac{\ln(n/M)}{\ln m_1}, \qquad (3.20) $$

where *ϵ*_{0}=*m*/*M* is a fixed value. The factor [(*m*_{1}−1)/*ϵ*_{0}+*m*_{1}]/ln *m*_{1} increases monotonically for *m*_{1}≥2, because *ϵ*_{0}<1. Therefore, the total number of data is minimum at *m*_{1}=2 for fixed *m*.

Substituting *m*_{1}=2 in equation (3.20), we have

$$ N_{\rm mem}=M+m+1+\frac{K+1}{\ln 2}\left(\frac{1}{\epsilon_0}+2\right)\ln\frac{n}{M}. \qquad (3.21) $$

In order to find the *m* that minimizes *N*_{mem}, this expression is differentiated with respect to *m* under the assumption that *m* is real (with *ϵ*_{0}=*m*/*M* fixed, so that *M*=*m*/*ϵ*_{0}):

$$ \frac{dN_{\rm mem}}{dm}=\frac{1}{\epsilon_0}+1-\frac{K+1}{m\ln 2}\left(\frac{1}{\epsilon_0}+2\right), \qquad (3.22) $$

and vanishes at

$$ m_{\rm min}=\frac{K+1}{\ln 2}\,\frac{1+2\epsilon_0}{1+\epsilon_0}. \qquad (3.23) $$

Thus, *N*_{mem} is minimum when the integer *m* is nearest to *m*_{min}.

The amount of saved data is estimated as follows. We write *m*=2^{ν+1} and denote by *M*_{2} the value of *M* for *m*=2, where *ν* has the meaning of the reduced number of intervals obtained by adopting *m* subintervals for the integration of *f*(*τ*) at *c*_{1} instead of *m*=2. It is natural to assume that the equality *M*/*m*=*M*_{2}/2 is satisfied. This equality ensures the same accuracy of integration among different *m*. Thus, we have *M*=*M*_{2}⋅2^{ν}. For *m*=2, the total number of data is given by

$$ N_{\rm mem}^{(2)}=M_2+3+(K+1)(s-1)\left(\frac{M_2}{2}+2\right) \qquad (3.24) $$

from equation (3.18) with *m*_{1}=2. The total number of data for arbitrary *m* necessary to calculate fractional derivatives is given by

$$ N_{\rm mem}=M+m+1+(K+1)(s-\nu-1)\left(\frac{M}{m}+2\right). \qquad (3.25) $$

The difference between the two expressions, using *M*/*m*=*M*_{2}/2, is given by

$$ N_{\rm mem}^{(2)}-N_{\rm mem}=\nu(K+1)\left(\frac{M_2}{2}+2\right)-(2^{\nu}-1)(M_2+2). \qquad (3.26) $$

The difference depends on *K*, *m* and *M*_{2}/2, while it is independent of the time steps.

Finally, we estimate the total number of data for fixed precision. Let the precision *ϵ*_{T} be fixed as

$$ \epsilon_0^{\,K+1}\le\epsilon_T. \qquad (3.27) $$

Let *K*_{ϵT} be the smallest integer that satisfies this condition, i.e.

$$ K_{\epsilon_T}=\left\lceil\frac{\ln\epsilon_T}{\ln\epsilon_0}\right\rceil-1. \qquad (3.28) $$

When the right-hand side of this expression is substituted into equation (3.25), the least number of data for fixed precision is estimated to be

$$ N_{\rm mem}\simeq M+m+1+\frac{\ln\epsilon_T}{\ln\epsilon_0}\,(s-\nu-1)\left(\frac{M}{m}+2\right) \qquad (3.29) $$

at *t*=*c*_{s}.

Equation (3.20) with *m*_{1}=2 is rearranged to show that the total number of data increases logarithmically with *n*=2^{s−1}*M* for *n*/*M*≫1:

$$ N_{\rm mem}=M+m+1+(K+1)\left(\frac{M}{m}+2\right)\frac{\ln(n/M)}{\ln 2}. \qquad (3.30) $$

The total data necessary to calculate the derivative are reduced to a small fraction of *n* at the 1280th time step (*s*=5), further at the 10 240th time step (*s*=8), and still further at the 160 000th time step (*s*=12), for *K*=2, *m*=2 and *M*=40.
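The order of magnitude of the saving can be sketched in a few lines. The counting below is a rough (3.18)-type estimate for the doubling scheme (*m*=*m*_{1}=2), assuming each coarse level holds at most *M*/2+2 cells of *K*+1 weighted integrals and the 0th interval holds at most *M*+3 function values; it is an illustrative assumption, not the paper's exact bookkeeping.

```python
import math

def memory_estimate(n, M=40, K=2):
    # rough data count after n time steps for the doubling scheme:
    # 0th interval (function values) + coarse levels of moment cells
    levels = max(0, math.ceil(math.log2(n / M)))   # number of coarse levels
    return (M + 3) + (K + 1) * (M // 2 + 2) * levels

n = 10240
# a few hundred stored values, against the n values a naive scheme stores
assert memory_estimate(n) < n // 10
```

With *M*=40 and *K*=2 the estimate gives 571 stored values at the 10 240th time step, i.e. roughly 1/18 of the naive storage.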

## 4. Numerical examples of new algorithm

In this section two examples of FEs are solved by means of the new algorithm of fractional differentiation. The approximated equation (3.4) is adopted for the first term in equation (2.1). The second term in equation (2.1) is written as

$$ \frac{1}{\Gamma(n_q-q)}\,\frac{d^{n_q}}{dt^{n_q}}\int_c^t (t-\tau)^{n_q-q-1}f(\tau)\,d\tau=\frac{d^{n_q}}{dt^{n_q}}\,{}_cI_t^{\,n_q-q}f(t), \qquad (4.1) $$

where ${}_cI_t^{\,n_q-q}f(t)$ is the (*n*_{q}−*q*)th order fractional integration of *f*(*t*) defined by

$$ {}_cI_t^{\,n_q-q}f(t)=\frac{1}{\Gamma(n_q-q)}\int_c^t (t-\tau)^{n_q-q-1}f(\tau)\,d\tau. \qquad (4.2) $$

For 0<*q*<1, the fractional differentiation by the central difference method is given by

$$ {}_cD_t^{q}f(t)\simeq\frac{{}_cI_t^{\,1-q}f(t)-{}_cI_{t-h}^{\,1-q}f(t-h)}{h}. \qquad (4.3) $$

The numerical integration of the fractional integral (4.2) is calculated by the trapezoidal rule. Equation (4.3) gives the derivative at *t*−*h*/2. Thus, interpolation or extrapolation is necessary to find the derivatives at *t*−*h* or at *t*. We adopt *K*=2 for equation (3.4), and *M*=40 and *m*=2 for the division of the second term of equation (2.1).
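The midpoint property of the central-difference formula (4.3) can be checked on a function whose fractional integral is known in closed form. For *f*(*t*)=*t* with *c*=0, the (1−*q*)th integral is *t*^{2−q}/Γ(3−q) and the *q*th derivative is *t*^{1−q}/Γ(2−q); the sketch below (with illustrative parameter values) confirms that the difference quotient matches the exact derivative at *t*−*h*/2.

```python
import math

# f(t) = t, lower terminal c = 0:
#   I^(1-q) f(t) = t^(2-q) / Gamma(3-q),   D^q f(t) = t^(1-q) / Gamma(2-q)
q, t, h = 0.5, 2.0, 0.01
I = lambda s: s ** (2 - q) / math.gamma(3 - q)
approx = (I(t) - I(t - h)) / h                     # eq. (4.3)
exact = (t - h / 2) ** (1 - q) / math.gamma(2 - q) # derivative at t - h/2
assert abs(approx - exact) / exact < 1e-4
```

The agreement is second order in *h* at the midpoint, which is why the interpolation or extrapolation mentioned above is needed to recover values at *t*−*h* or *t*.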

### (a) The fractional Voigt equation

First, we solve the fractional Voigt equation (the fractional relaxation equation) defined by

$$ {}_0D_t^{q}x(t)+kx(t)=f(t) \qquad (4.4) $$

for 0<*q*<1 with the initial condition

$$ x(0)=0. \qquad (4.5) $$

The lower terminal of the fractional derivative is fixed to *a*=0, owing to the initial condition. The Riemann–Liouville definition (1.1) can be replaced by the Caputo definition, also owing to the initial condition. Thus, equation (4.4) is rewritten as

$$ {}_0^{C}D_t^{q}x(t)+kx(t)=f(t). \qquad (4.6) $$

Differentiating both sides of the equation, we have

$$ \frac{dx}{dt}={}_0D_t^{1-q}\bigl(f(t)-kx(t)\bigr), \qquad (4.7) $$

where the right-hand side is the (1−*q*)th order Riemann–Liouville derivative of (*f*−*kx*). Equation (4.7) is solved for a fixed *q* and *k*=*f*(*t*)=1 with *h*=0.01. The numerical simulation was performed in C^{++} on an iMac. Figures were created using Mathematica.

In figure 5, the solution by the numerical method is compared with the analytic solution, which is given by

$$ x(t)=\frac{f}{k}\left[1-E_q\!\left(-kt^{q}\right)\right], \qquad (4.8) $$

where *E*_{q}(*z*) is the Mittag–Leffler function, over 2^{17}∼130 000 time steps. The numerical solution (the dotted line) does not reproduce the analytic solution (the solid curve) well for the initial several time steps. The discrepancy comes from the discontinuity of the derivative of *x*(*t*) at *t*=0. The numerical method assumes continuity of derivatives. Therefore, the solution does not coincide with the analytic solution at discontinuous points. After several time steps, the numerical solution follows the analytic solution. The numerical error is less than 0.5 per cent after the initial several time steps.

In order to reduce the numerical error in the early stage, we tried another method, in which the analytic solution is used for the initial two time steps. After these time steps the equation is solved by the numerical method. The open circles in figure 5 show the solution by this method. The solution coincides well with the analytic solution, including in the early stage.
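For reference, the fractional relaxation problem with *k*=*f*=1 can be reproduced with the standard Grünwald–Letnikov scheme (one of the comparison methods of this section, not the new algorithm). For *q*=1/2 the Mittag–Leffler solution reduces to the known identity *E*_{1/2}(−√*t*)=e^{t} erfc(√*t*); the sketch below, with illustrative step parameters, checks the numerical solution at *t*=1 against it.

```python
import math

# D^q x + x = 1, x(0) = 0, solved by the Grunwald-Letnikov scheme; for
# q = 1/2 the analytic solution is x(t) = 1 - exp(t) * erfc(sqrt(t)).
q, h, steps = 0.5, 0.001, 1000
w = [1.0]                      # GL weights w_j = (-1)^j C(q, j)
for j in range(1, steps + 1):
    w.append(w[-1] * (1.0 - (q + 1.0) / j))
x = [0.0]
hq = h ** (-q)
for n in range(1, steps + 1):
    hist = sum(w[j] * x[n - j] for j in range(1, n + 1))
    x.append((1.0 - hq * hist) / (hq + 1.0))
t = steps * h                  # t = 1
exact = 1.0 - math.exp(t) * math.erfc(math.sqrt(t))
assert abs(x[-1] - exact) < 0.01
```

The inner history sum is exactly the O(*n*^{2}) burden discussed in §1, which the new algorithm removes.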

### (b) The Bagley–Torvik equation

We give another example, the Bagley–Torvik equation,

$$ \frac{d^{2}x}{dt^{2}}+\mu\,{}_0D_t^{3/2}x(t)+kx(t)=f_0\sin\omega t, \qquad (4.9) $$

with the initial conditions

$$ x(0)=0, \qquad \left.\frac{dx}{dt}\right|_{t=0}=0. \qquad (4.10) $$

The numerical solution is obtained for *μ*=*k*=*f*_{0}=*ω*=1 with *h*=0.05. In figure 6, the numerical solution is compared with the analytic solution for two periods of oscillation around *t*=8190, after 1300 oscillations. The asymptotic form of the analytic solution is derived from the Fourier transformation, which is a good approximation for large *t*. Assuming that *f*(*t*)=*f*_{0}e^{iωt}, a solution of the type *x*=*x*_{0} e^{iωt} is obtained:

$$ x_0=\frac{f_0}{-\omega^{2}+\mu(i\omega)^{3/2}+k}. \qquad (4.11) $$

The solution of equation (4.9) is obtained from the imaginary part of equation (4.11) as the asymptotic solution. The numerical solution coincides well with the analytic solution. The numerical error remains ∼2×10^{−4} throughout the computation.
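The asymptotic amplitude (4.11) is a one-line complex computation. Substituting *x*=*x*_{0}e^{iωt} into the Bagley–Torvik equation gives the denominator −ω^{2}+μ(iω)^{3/2}+*k* used below; with *μ*=*k*=*f*_{0}=*ω*=1 this denominator is (i)^{3/2}=e^{3πi/4}, so the asymptotic amplitude |*x*_{0}| is exactly 1. The variable names are illustrative.

```python
# Complex amplitude of the asymptotic Bagley-Torvik response to f0*exp(i*w*t)
mu = k = f0 = w = 1.0
x0 = f0 / (-w ** 2 + mu * (1j * w) ** 1.5 + k)   # eq. (4.11) denominator
# |denominator| = |i^(3/2)| = 1, hence unit asymptotic amplitude
assert abs(abs(x0) - 1.0) < 1e-9
```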

In table 1, the computing times for calculating equation (4.9) are compared among algorithms. In the table, ‘New Algo.’ means the new algorithm, ‘Trapez.’ means the use of the trapezoidal rule in calculating the integral in equation (1.1) and ‘G.L.’ means the numerical scheme of the Grünwald–Letnikov derivative. The use of the trapezoidal rule is time consuming compared with the Grünwald–Letnikov method, because it needs much computing time to calculate the power functions in equation (1.1). The new algorithm also includes the calculation of power functions (equation (3.3)). Nevertheless, the speed of computation of the new algorithm is faster than that of the Grünwald–Letnikov method for time steps greater than 2500. The former is six times faster than the latter at the 20 000th time step. The speed of computation is increased by ≃1.4 times if a set of parameters *M*=20 and *K*=3 is adopted instead of *M*=40 and *K*=2, preserving the accuracy.

The speed of computation is increased further if the values of the power function (*hi*)^{−q}, *i*=0,1,2,… (up to the maximum time step), are tabulated. In table 2, the computing times are compared among the algorithms. In the Grünwald–Letnikov scheme, the coefficients of the difference quotient are tabulated. The speed of computation is increased in every algorithm. Furthermore, even at 2500 time steps, the computing speed of the new algorithm is about 2.7 times faster than that of the Grünwald–Letnikov scheme, and four times faster than that of the trapezoidal scheme. In fact, the speed of computation exceeds that of the Grünwald–Letnikov scheme before 1000 time steps. The new algorithm is effective for repeated computations and for the simultaneous calculation of fractional derivatives of many elements, such as in the finite element method.

## 5. Concluding remarks

We have developed a new algorithm for fractional differentiations and fractional integrations. This method is especially effective in solving FEs. In this method, it is not the function to be differentiated (or integrated) that is stored as the memory but rather the weighted integrals of the function. The computational time increases as *n* log *n* with the computing time step *n*.

The explanation of the algorithm is given with the use of the Riemann–Liouville derivative. However, the algorithm is directly applicable to fractional integrations, the Caputo derivative, and other types of fractional derivatives. In applying the method to the fractional integration, the order −*q* is replaced by *p* in equation (2.2) for the interval [*a*,*c*]. As for application to the Caputo derivative, −*q* is replaced by *n*_{q}−*q* and *f*(*τ*) is replaced by its *n*_{q}th derivative.

The algorithm proposed in this paper is based on the scale-free property of the power-law function. In fact, the accuracy of the Taylor expansion of a power-law function truncated at a finite order (or of another type of expansion) is unchanged if the width of the expansion Δ, the centre of expansion *b* and the current time *t* satisfy the relation Δ/(*t*−*b*)=constant<1. Thus, for a fixed point *b*, the width of the Taylor expansion may become indefinitely large as time elapses under the condition of constant Δ/(*t*−*b*)<1. The present algorithm may also be applicable to integrals with other types of kernels. In this case, the interval of expansion of the kernel has to be small compared with the characteristic scale of the kernel.

## Appendix A. Treatment of the upper and lower limits of equation (2.1)

The time derivative of the first term on the right-hand side of equation (2.1) is given as follows. The time derivative is given by

$$ \frac{d}{dt}\int_a^{c}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau=\int_a^{c}\frac{\partial}{\partial t}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau+(t-c)^{n_q-q-1}f(c)\,\frac{dc}{dt}. \qquad ({\rm A\,1}) $$

In order to calculate the contribution of the lower limit of integration of the second term on the right-hand side of equation (2.1), the integration is separated into two parts by a fixed point *b*:

$$ \int_c^{t}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau=\int_c^{b}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau+\int_b^{t}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau. \qquad ({\rm A\,2}) $$

The time derivative of the first term on the right-hand side of this expression is given by

$$ \frac{d}{dt}\int_c^{b}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau=\int_c^{b}\frac{\partial}{\partial t}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau-(t-c)^{n_q-q-1}f(c)\,\frac{dc}{dt}. \qquad ({\rm A\,3}) $$

The second term on the right-hand side of equation (A1) is cancelled by the second term on the right-hand side of equation (A3).

The higher order derivatives of the integrals of equations (A1) and (A3) are also composed of the derivatives of the integrands and the derivatives of the upper and lower limits of the integrals in equations (A1) and (A3), respectively. However, all the derivatives arising from the upper limit of equation (A1) and the lower limit of equation (A3) cancel out. Thus, the remaining contributions are given by

$$ \int_a^{c}\frac{\partial^{n_q}}{\partial t^{n_q}}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau \qquad ({\rm A\,4}) $$

for the first term on the right-hand side of equation (2.1), and

$$ \int_c^{b}\frac{\partial^{n_q}}{\partial t^{n_q}}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau \qquad ({\rm A\,5}) $$

for the first term on the right-hand side of equation (A2), respectively. Equation (A4) is reduced to *I*_{c}*f*(*t*) given by equation (2.2). Equation (A5) is equated to the first term on the right-hand side of equation (A2), if *c* is constant. Therefore, the derivative ${}_aD_t^{q}f(t)$ is written as

$$ {}_aD_t^{q}f(t)=\frac{1}{\Gamma(n_q-q)}\left[\int_a^{c}\frac{\partial^{n_q}}{\partial t^{n_q}}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau+\frac{d^{n_q}}{dt^{n_q}}\int_c^{t}(t-\tau)^{n_q-q-1}f(\tau)\,d\tau\right] \qquad ({\rm A\,6}) $$

for variable *c*, as if *c* were constant.

## Appendix B. Derivation of equation (3.13)

In this appendix, the new weighted integral data of the *s*th interval are shown to be obtained from the weighted integrals in the last *m*_{1} subintervals of the (*s*−1)th interval. Equation (3.13) is derived as the special case *m*_{1}=2 of the general formula. The new weighted integral data of the *s*th interval are given as the integral over the interval [*c*_{s},*c*_{s}+Δ_{s}]:

$$ I_{s,k}f(c_s+\Delta_s)=\int_0^{\Delta_s}u^{k}f(c_s+\Delta_s-u)\,du. \qquad ({\rm B\,1}) $$

Thus, we show that the right-hand side of equation (B1) is given by the sum of the weighted integrals *I*_{s−1,k}*f*(*c*_{s}+*jΔ*_{s−1})=*I*_{s−1,k}*f*(*c*_{s−1}+(*J*_{s−1}−*j*)Δ_{s−1}), *k*=0,1,…,*K*, *j*=1,2,…,*m*_{1} (figure 4*b*). For this purpose the integral on the right-hand side of equation (B1) is divided into *m*_{1} subintervals as

$$ I_{s,k}f(c_s+\Delta_s)=\sum_{\nu=1}^{m_1}A_{\nu}, \qquad ({\rm B\,2}) $$

where

$$ A_{\nu}=\int_{(\nu-1)\Delta_{s-1}}^{\nu\Delta_{s-1}}u^{k}f(c_s+\Delta_s-u)\,du. \qquad ({\rm B\,3}) $$

The variable *u* is replaced by *v* with the use of the following relation:

$$ u=(\nu-1)\Delta_{s-1}+v, \qquad v\in[0,\Delta_{s-1}]. \qquad ({\rm B\,4}) $$

Then for *ν*>1, equation (B3) is written as

$$ A_{\nu}=\int_0^{\Delta_{s-1}}\bigl((\nu-1)\Delta_{s-1}+v\bigr)^{k}\,f\bigl(c_s+(m_1-\nu+1)\Delta_{s-1}-v\bigr)\,dv. \qquad ({\rm B\,5}) $$

The binomial rule

$$ (x+y)^{k}=\sum_{k'=0}^{k}\binom{k}{k'}x^{k-k'}y^{k'} $$

is introduced into equation (B5) to obtain

$$ A_{\nu}=\sum_{k'=0}^{k}\binom{k}{k'}\bigl((\nu-1)\Delta_{s-1}\bigr)^{k-k'}\int_0^{\Delta_{s-1}}v^{k'}f\bigl(c_s+(m_1-\nu+1)\Delta_{s-1}-v\bigr)\,dv. \qquad ({\rm B\,6}) $$

Because *c*_{s}=*c*_{s−1}−*J*_{s−1}Δ_{s−1}, the last expression is reduced to

$$ A_{\nu}=\sum_{k'=0}^{k}\binom{k}{k'}\bigl((\nu-1)\Delta_{s-1}\bigr)^{k-k'}\,I_{s-1,k'}f\bigl(c_s+(m_1-\nu+1)\Delta_{s-1}\bigr). \qquad ({\rm B\,7}) $$

For *ν*=1, equation (B3) is written as

$$ A_{1}=I_{s-1,k}f(c_s+m_1\Delta_{s-1}). \qquad ({\rm B\,8}) $$

Combining equations (B7) and (B8), we have the following formula for the weighted integrals, for *k*=0,1,2,…:

$$ I_{s,k}f(c_s+\Delta_s)=I_{s-1,k}f(c_s+m_1\Delta_{s-1})+\sum_{\nu=2}^{m_1}\sum_{k'=0}^{k}\binom{k}{k'}\bigl((\nu-1)\Delta_{s-1}\bigr)^{k-k'}\,I_{s-1,k'}f\bigl(c_s+(m_1-\nu+1)\Delta_{s-1}\bigr). \qquad ({\rm B\,9}) $$

Thus, we have the new weighted integral data of the *s*th interval from the weighted integrals in the (*s*−1)th interval. Equation (3.13) is derived from equation (B9) for *m*_{1}=2.

## Footnotes

One contribution of 14 to a Theme Issue ‘Fractional calculus and its applications’.

- © 2013 The Author(s) Published by the Royal Society. All rights reserved.