Dahlquist Barriers and Symplectic Integrators for Time-Stepping the Wave Equation

So it’s holiday time and I have found some time to write. My objective today is to present an idea on how to use the first and second Dahlquist barrier theorems to prove a theoretical limit of symplectic integrators.


In this first section we will focus our attention on numerical methods for the wave equation in one dimension with Cauchy initial conditions; by this we mean that we are searching for a function u:\mathbb{R}\times[0,T) \to \mathbb{R}, at least C^2, such that:

\begin{cases}\partial^2_t u = c^2 \partial_x^2 u + f\\u(x,0)= u_0(x), \quad \partial_t u(x,0)=\dot{u}_0(x) \end{cases}

where f, u_0 and \dot{u}_0 are given and \dot{u}_0 is not necessarily the derivative of u_0. In particular we will call the previous equation with f\equiv 0 the homogeneous wave initial value problem (IVP), and we will begin by studying it.


Before dealing directly with the wave equation it is interesting to briefly introduce the method of characteristics. This is a solution technique for first order hyperbolic partial differential equations (PDEs), but we will see that it can be applied to the wave equation as well. Let’s consider the advection equation:

\partial_t u + c\, \partial_x u = 0

we fix the point (x_0,t_0)\in \mathbb{R}\times[0,T) and define the function z(s)= u(x_0+cs,t_0+s). If we define x = x_0+cs and t = t_0+s we can compute:

\frac{dz}{ds} = \partial_s x \,\partial_x u + \partial_s t \,\partial_t u= c \partial_x u + \partial_t u

this tells us that solving the ordinary differential equation (ODE) \frac{dz}{ds}=0 is equivalent to solving the advection equation. If \frac{dz}{ds}=0 we know that z(s) is constant, therefore u(x_0+cs, t_0+s)=u(x_0,t_0) for every s; choosing s = -t_0 we obtain:

u(x_0,t_0)=u(x_0-ct_0,0)
Now if instead of considering the advection equation alone we consider the following IVP:

\begin{cases}\partial_t u + c\, \partial_x u = 0\\             u(x,0) = u_0(x)\end{cases}

we have that the solution of the advection IVP is given by:

u(x,t)=u_0(x-ct)

this tells us as well that the solution of the advection equation is constant along lines of equation x=ct+\eta, for any \eta \in \mathbb{R}, which we will call characteristic lines.
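The transport structure of this solution is easy to verify numerically. The snippet below is a minimal sketch (the Gaussian initial datum is just an arbitrary smooth choice) that checks by central differences that u(x,t)=u_0(x-ct) satisfies the advection equation:

```python
import numpy as np

# Arbitrary smooth initial datum and wave speed (illustrative choices).
c = 2.0
u0 = lambda x: np.exp(-x**2)

# Exact solution transported along the characteristics x = c*t + eta.
u = lambda x, t: u0(x - c * t)

# Check that u_t + c*u_x vanishes at a few sample points,
# using central finite differences for the partial derivatives.
eps = 1e-6
for (x, t) in [(0.3, 0.1), (-1.0, 0.5), (2.0, 1.3)]:
    u_t = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
    u_x = (u(x + eps, t) - u(x - eps, t)) / (2 * eps)
    assert abs(u_t + c * u_x) < 1e-6
```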

Now let’s go back to the homogeneous wave IVP: we can observe that the one dimensional wave operator can be factored as follows:

\partial^2_t u - c^2 \partial_x^2 u = \Big(\partial_t + c\, \partial_x\Big) \Big(\partial_t - c\, \partial_x\Big)u=0

and therefore the solution of the wave equation also solves both the forward and the backward advection equation. The characteristic lines of the wave equation are therefore both x=ct+\eta and x=-ct+\xi, i.e. \eta = x-ct and \xi = x+ct. Our aim will now be to compute \partial_\xi \partial_\eta u and to prove that it is identically null:

\partial_\eta u = \partial_\eta x\, \partial_x u + \partial_\eta t \,\partial_t u

and since x = \frac{\xi+\eta}{2} and t = \frac{\xi-\eta}{2c}, we have \partial_\eta x = \frac{1}{2} and \partial_\eta t = -\frac{1}{2c}, hence:

\partial_\eta u = \frac{1}{2}\partial_x u - \frac{1}{2c}\partial_t u

\partial_\xi \Big (\partial_\eta u\Big ) = \partial_\xi \Big(\frac{1}{2}\partial_x u - \frac{1}{2c}\partial_t u\Big)=\frac{1}{4}\partial^2_x u + \frac{1}{4c}\partial_t\partial_x u - \frac{1}{4c}\partial_x\partial_t u - \frac{1}{4c^2}\partial^2_t u=\frac{1}{4}\Big(\partial_x^2 u - \frac{1}{c^2}\partial_t^2 u\Big)

and if we substitute \partial^2_t u = c^2 \partial_x^2 u in the last equation we get that \partial_\xi \partial_\eta u\equiv 0. Now we can integrate to obtain a nice expression for u:

since \partial_\eta(\partial_\xi u)=0, the derivative \partial_\xi u depends on \xi alone, say \partial_\xi u = g(\xi); integrating in \xi we obtain

u(\xi,\eta)= G(\xi)+F(\eta)

where G(\xi) (a primitive of g) and F(\eta) are two C^1 functions. Now let’s observe that since u(x,0)= u_0(x), and at t=0 both \xi=x+ct and \eta=x-ct reduce to x, we have u_0(x)= G(x)+F(x); it is as well possible to write down:

\partial_t u(x,t)= cG'(x+ct)-cF'(x-ct)
\dot{u}_0(x)  = cG'(x)-cF'(x)

and integrating the last relation between 0 and x:

\int_{0}^{x} \dot{u}_0(s) \;ds = c\Big(G(x)-F(x)\Big)+K

therefore, combining the last relation (evaluated at x+ct and at x-ct) with u_0 = G+F, we get the following result:

u(x,t)= \frac{1}{2} \Big(u_0(x+ct)+ u_0(x-ct)\Big)+\frac{1}{2c}\int_{x-ct}^{x+ct} \dot{u}_0(s) \;ds

this last equation gives us an analytical solution of the wave equation and is known as d’Alembert’s formula. Analogously a formula for the inhomogeneous wave IVP can be obtained, provided that f is at least C^2:

u(x,t)= \frac{1}{2} \Big(u_0(x+ct)+ u_0(x-ct)\Big)+\frac{1}{2c}\int_{x-ct}^{x+ct} \dot{u}_0(s) \;ds+\frac{1}{2c}\int_{0}^{t}\int_{x-c(t-\tau)}^{x+c(t-\tau)} f(s,\tau) \;ds\;d\tau

Let us focus our attention on d’Alembert’s formula: when u_0 and \dot{u}_0 are “nice” functions the analytical solution can be computed easily by hand, but when they are not we can use any of a wide number of numerical integration techniques and quadrature formulas to compute the integral in d’Alembert’s formula. For lack of a better name we will call this class of solvers d’Alembert solvers.
We will show here the result of implementing a d’Alembert solver that computes the integral in d’Alembert’s formula using equi-spaced nodes and the trapezoid quadrature rule, but it is important to notice that far better results can be obtained using higher order Newton-Cotes methods, Gaussian quadrature, etc.
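A minimal sketch of such a d’Alembert solver (function names are illustrative) that computes the integral with the composite trapezoid rule on equi-spaced nodes:

```python
import numpy as np

def dalembert(u0, du0, x, t, c=1.0, n=200):
    """Evaluate d'Alembert's formula
       u(x,t) = (u0(x+ct) + u0(x-ct))/2 + 1/(2c) * int_{x-ct}^{x+ct} du0(s) ds
    approximating the integral with the composite trapezoid rule
    on n equi-spaced nodes."""
    s = np.linspace(x - c * t, x + c * t, n)              # quadrature nodes
    w = du0(s)
    integral = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(s))  # trapezoid rule
    return 0.5 * (u0(x + c * t) + u0(x - c * t)) + integral / (2.0 * c)

# Example: u0 = sin, du0 = 0 gives the standing wave sin(x) cos(ct).
u = dalembert(np.sin, lambda s: np.zeros_like(s), x=0.7, t=0.3)
```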

Time Stepping

We will now present a different approach, known as time stepping. Let’s define an equi-spaced spatial mesh \{x_j\}; the time stepping method consists in approximating \partial^2_x u numerically using a stiffness matrix K, i.e.:

\partial^2_x \vec{U} \approx K \vec{U}

where \vec{U} is a discretisation of u according to the chosen spatial approximation. In this report we will only treat finite differences as the spatial approximation method; more information regarding finite differences can be found in the second part of this report. Once we observe that \vec{U} still depends on time, i.e. \vec{U}=\vec{U}(t), it is easy to see that the IVP becomes:

\begin{cases}\partial_t^2 \vec{U}(t) = K \vec{U}(t)\\\vec{U}(0) = \vec{U}_0\end{cases}

which is a typical second order numerical ODE problem. One might think about splitting the second order ODE system into a coupled system of two first order ODEs to be solved using methods such as Euler, Crank-Nicolson, Runge-Kutta, etc. This certainly is a valid idea, which we will explore in the third part of our report.
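As an illustration, here is a minimal sketch of the semi-discretisation and of its first order block formulation (a second order centered finite difference K with homogeneous Dirichlet boundaries is an assumption here; the report’s K may differ in its boundary treatment):

```python
import numpy as np

def stiffness_1d(n, h_x, c=1.0):
    """Second-order centered finite-difference approximation of
    c^2 * d^2/dx^2 on n interior nodes, homogeneous Dirichlet boundaries."""
    K = np.zeros((n, n))
    for j in range(n):
        K[j, j] = -2.0
        if j > 0:
            K[j, j - 1] = 1.0
        if j < n - 1:
            K[j, j + 1] = 1.0
    return (c / h_x) ** 2 * K

# First-order block form  y' = A y  with  y = [u, v]  and  v = u_t.
n, h_x = 50, 1.0 / 51
K = stiffness_1d(n, h_x)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [K,                np.zeros((n, n))]])
```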

Leapfrog Integration

Our aim here will be to address the problem of solving the IVP using second order ODE integration techniques; in particular this section focuses on Leapfrog integration. To begin with, let’s introduce an equi-spaced time mesh \{t_m\} of time step h_t, and a staggered mesh \{\overset{\sim}{t}_m\} of step \frac{1}{2}h_t such that \overset{\sim}{t}_m = t_m at integer indices. Now we are ready to define the Leapfrog integration scheme:

\begin{cases}\overset{\sim}{a}_{m}= K\overset{\sim}{u}_{m}\\ \overset{\sim}{v}_{m+\frac{1}{2}}= \overset{\sim}{v}_{m-\frac{1}{2}}+\overset{\sim}{a}_{m}h_t\\ \overset{\sim}{u}_{m+1}= \overset{\sim}{u}_{m}+\overset{\sim}{v}_{m+\frac{1}{2}}h_t \end{cases}

We can relocate the Leapfrog integration on the \{t_m\} mesh by observing that, since \overset{\sim}{t}_m = t_m, we have \overset{\sim}{a}_{m}=a_m and \overset{\sim}{u}_{m}=u_m, and that the following Taylor expansions hold:

\overset{\sim}{v}_{m+\frac{1}{2}} = v_m + \frac{1}{2}\overset{\sim}{a}_{m}h_t + \mathcal{O} (h_t^2)

\overset{\sim}{v}_{m+\frac{1}{2}} = v_{m+1} - \frac{1}{2}\overset{\sim}{a}_{m+1}h_t + \mathcal{O} (h_t^2)

\overset{\sim}{v}_{m-\frac{1}{2}} = v_m - \frac{1}{2}\overset{\sim}{a}_{m}h_t + \mathcal{O} (h_t^2)

we get the following equations describing the Leapfrog integration:

\begin{cases}a_m = Ku_m\\ v_{m+1}= v_m+\frac{1}{2}h_t(a_m+a_{m+1})\\u_{m+1} = u_m + v_m h_t + \frac{1}{2} a_m h_t^2\end{cases}
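A minimal sketch of this synchronized form of the scheme, for a generic stiffness matrix K, tested on the scalar problem u'' = -u (i.e. K = [-1]), whose exact solution with u(0)=1, v(0)=0 is \cos t:

```python
import numpy as np

def leapfrog(K, u0, v0, h_t, n_steps):
    """Leapfrog in its synchronized (velocity-Verlet) form:
       a_m     = K u_m
       u_{m+1} = u_m + v_m h_t + 0.5 a_m h_t^2
       v_{m+1} = v_m + 0.5 h_t (a_m + a_{m+1})."""
    u, v = u0.copy(), v0.copy()
    a = K @ u
    for _ in range(n_steps):
        u = u + v * h_t + 0.5 * a * h_t**2
        a_new = K @ u
        v = v + 0.5 * h_t * (a + a_new)
        a = a_new
    return u, v

# Scalar test problem u'' = -u, integrated up to t = 1.
K = np.array([[-1.0]])
u, v = leapfrog(K, np.array([1.0]), np.array([0.0]), h_t=1e-3, n_steps=1000)
```

Since the scheme is second order, the error at t=1 with h_t = 10^{-3} is of size roughly h_t^2.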

Our aim here is to study the stability of the Leapfrog integration using the theory developed in B1. We will view the Leapfrog integration applied to the second order differential equation:

\partial^2_t \vec{U} = K\vec{U}

as a way of solving the system of differential equations:

\partial_t \vec{y} = A \vec{y}\qquad \vec{y}=\begin{bmatrix}\vec{u}\\ \vec{v}\end{bmatrix} \qquad A=\begin{bmatrix}0 & I\\K & 0\end{bmatrix}

where \vec{v}=\partial_t \vec{u}. Looking back at the equations that characterize the Leapfrog method, we can see the second equation (the velocity update) as solving the second equation of the first order formulation, while the third equation (the position update) solves the first one.
In particular it is important to notice that, while the first equation of the first order formulation doesn’t present particular problems, the second one can be reformulated so as to obtain, component by component, a set of differential equations of the form z_i'' = \lambda_j z_i.
The idea behind this is that, if we suppose our stiffness matrix to be diagonalizable, we can rewrite \partial_t^2 \vec{u} = K\vec{u} as:

\partial_t^2 \vec{z} = \Lambda \,\vec{z} \qquad \vec{z}= Q^{-1}\vec{u}

where \Lambda is the diagonalization of K by the matrix Q (i.e. K = Q\Lambda Q^{-1}), and this means that instead of the first order formulation above we can study the following one:

\partial_t \vec{y} = \overline{\Lambda}\,\vec{y}\qquad\vec{y}=\begin{bmatrix}\vec{z}\\ \vec{v}\end{bmatrix}\qquad \overline{\Lambda} = \begin{bmatrix}0 & I \\\Lambda & 0\end{bmatrix}

now we rewrite the Leapfrog scheme equations, component by component, as linear multistep methods (more information can be found in B1):

\begin{cases}u_{m+1}=(1+\frac{1}{2}\lambda_jh_t^2)u_m+ h_tf_m^{(1)}\\v_{m+1}=v_m+ \frac{1}{2}h_t(f_m^{(2)}+f_{m+1}^{(2)})\\\end{cases}\qquad f_m=\begin{bmatrix}v_m\\\lambda_j u_m\end{bmatrix}

supposing that the eigenvalues of K are negative, we get that both linear multistep methods respect the root condition, since they have characteristic polynomials:

p_1(r) = r-\Big(1+\frac{1}{2}\lambda_j h_t^2\Big)
p_2(r)= r-1

and both polynomials have their roots in the unit disk (for p_1, provided h_t is small enough).
Therefore the Leapfrog scheme is 0-stable; what we are interested in checking now is whether it is absolutely stable as well, as defined in B1, which is different from being A-stable as defined by Hairer-Lubich and again in B1.
To study the Leapfrog scheme we will notice that the first of the equations describing it can be obtained by applying the explicit Euler scheme to the problem:

\partial_t u = \gamma u + f(u)\qquad \gamma = \frac{1}{2}h_t\lambda_j

and under suitable hypotheses on f, which we are not going to present in detail here, for the solution not to explode as t\to \infty we need:

h_t\gamma \in \mathbb{C}^{-} \qquad 0 < h_t \leq -\frac{2\,\mathrm{Re}(\gamma)}{|\gamma|^2}

Last but not least, we would like to observe that if we take as stiffness matrix the finite difference matrix in 1D (more information can be found in the second part of this report), we know that its eigenvalues are:

\lambda_j = -\frac{4}{h_x^2} \sin^2 (\theta_j) \qquad \gamma = -2\frac{h_t}{h_x^2} \sin^2 (\theta_j)

it is clear that \gamma h_t lies in the negative portion of the complex plane, and the second requirement above is tightest when \theta_j=\frac{\pi}{2}, which is equivalent to asking:

0 < h_t \leq \frac{h_x^2}{h_t}

i.e. h_t \leq h_x, which turns out to be the Courant–Friedrichs–Lewy condition.
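These claims are easy to check numerically. The sketch below (homogeneous Dirichlet boundaries are an assumption and may differ from the report’s exact setup) verifies the eigenvalue formula for the 1D finite difference matrix, with \theta_j = \frac{j\pi}{2(n+1)}, and computes the resulting bound on h_t:

```python
import numpy as np

# 1D finite-difference Laplacian on n interior nodes, Dirichlet boundaries.
n, h_x = 20, 1.0 / 21
K = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h_x**2

# Known spectrum: lambda_j = -(4/h_x^2) sin^2(theta_j),
# theta_j = j*pi / (2*(n+1)), j = 1..n -- all negative.
j = np.arange(1, n + 1)
lam_exact = -(4.0 / h_x**2) * np.sin(j * np.pi / (2 * (n + 1))) ** 2
assert np.allclose(np.sort(np.linalg.eigvalsh(K)), np.sort(lam_exact))

# The bound h_t <= -2 Re(gamma)/|gamma|^2 with gamma = h_t*lambda_j/2
# gives h_t^2 <= -4/lambda_j; the worst eigenvalue yields h_t ~ h_x (CFL).
h_t_max = np.sqrt(-4.0 / lam_exact.min())
```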

Symplectic Integration

In this section our aim is to present a wide class of methods, among which we can find the Leapfrog integration scheme as well. The class of methods presented here is known as symplectic integrators, and they have the peculiar property of conserving some sort of numerical energy.
If we imagine that \partial^2_t \vec{U} = K\vec{U} describes the motion of particles in space, we can introduce the Hamiltonian \mathcal{H}(p,q) of this system and Hamilton’s equations:

\begin{cases}\partial_t p = -\frac{\partial \mathcal{H}}{\partial q}\\\partial_t q = \frac{\partial \mathcal{H}}{\partial p}\end{cases}

now we would like to integrate the above mentioned system of equations numerically; the remarkable property of the methods below is that the numerical solution is associated to a perturbed Hamiltonian, which it conserves, just as solutions of Hamilton’s equations conserve \mathcal{H}.
We will present here the same approach as in B2, which is based upon the idea that we can split the Hamiltonian as follows:

\mathcal{H}(p,q)= T(p)+V(q)

now we introduce the vector \vec{z}=[q,p]^T and observe that Hamilton equations can be rewritten as:

\partial_t \vec{z} = \{\vec{z},\mathcal{H}(\vec{z})\}

where \{\cdot,\cdot\} are the Poisson brackets. We now introduce a notation that will be useful later:

P_{\mathcal{H}}\;\cdot=\{\cdot,\mathcal{H}\}\qquad P_{T}\;\cdot=\{\cdot,T\}\qquad P_{V}\;\cdot=\{\cdot,V\}

rewriting these equations as \partial_t \vec{z} = P_{\mathcal{H}}\vec{z}, we can easily write the solution of the ODE using the matrix exponential; moreover P_{\mathcal{H}}=P_T+P_V, a property that comes directly from the assumption that \mathcal{H}(p,q)= T(p)+V(q):

\vec{z}(t)= \exp\Big[ t(P_T+P_V)\Big]\vec{z_0}

we now want to approximate the above mentioned operator as follows:

\exp\Big[ h_t(P_T+P_V)\Big] = \prod_{i=1}^k \exp(c_i h_t P_T)\exp(d_i h_t P_V)+\mathcal{O}(h_t^{k+1})

if k=1 the problem is quick and painless to solve: c_1=d_1=1; for k=2 the coefficients we are searching for are c_1=c_2=\frac{1}{2}, d_1=1 and d_2=0. In particular, in B3 solutions up to k=4 are presented, while in B2 the Baker-Campbell-Hausdorff formula is used to compute c_i and d_i for any symmetric symplectic integrator.
If we perform a Taylor expansion of the exponentials involved in the previous equations we get that:

\exp(aP_T)=\sum_{n=0}^\infty \frac{(aP_T)^n}{n!}\qquad\exp(aP_V)=\sum_{n=0}^\infty \frac{(aP_V)^n}{n!}

and since:

P_T^2 (\vec{z})=\Big\{\{\vec{z},T\},T\Big\}=\Big\{[\partial_p T,0]^T,T\Big\}=0\qquad P_V^2 (\vec{z})=\Big\{\{\vec{z},V\},V\Big\}=\Big\{[0,-\partial_q V]^T,V\Big\}=0

we end up with the following equations:

\begin{cases}\exp(aP_T) = I + aP_T\\\exp(aP_V) = I + aP_V\\\end{cases}

now if we substitute these recursively in the previous equations we get:

\begin{cases}p_{i+1}=p_i + d_i\, F(q_i)\,h_t\\q_{i+1}=q_i + c_i\, p_{i+1}\, h_t\end{cases}\qquad F(q)=-\partial_q V(q)

which in Lagrangian coordinates becomes:

\begin{cases}v_{m+1}=v_{m} + a_m h_t\\u_{m+1}=u_m + v_{m+1}h_t\end{cases}

and these equations define recursively a symplectic integrator. For example, if we have as coefficients c_1=d_1=1 we get the symplectic Euler integration, which is defined by the following equations:

\begin{cases}v_{m+1}=v_m+a_mh_t\\u_{m+1}=u_m+v_mh_t + a_mh_t^2\end{cases}

Instead if we choose as coefficients c_1=0,c_2=1,d_1=d_2=\frac{1}{2} we end up with the Verlet integration scheme, which is defined by the following equations:

\begin{cases}v_{m+1}=v_m+\frac{1}{2}a_mh_t+\frac{1}{2}a_{m+1}h_t\\u_{m+1}=u_m+v_mh_t +\frac{1}{2}a_mh_t^2\end{cases}

Last but not least, we recover the Leapfrog integration if we choose c_1=1, c_2=0, d_1=d_2=\frac{1}{2}.

We will here call symplectic integrator any integrator built with the technique presented, that can be found as well in B2 and B3.
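The recursion above is easy to implement for a generic coefficient table. The sketch below assumes a kick-then-drift ordering of each (c_i,d_i) pair and unit masses (ordering conventions vary in the literature, so the coefficient labels may be permuted with respect to the text); with the pairs (c_1,d_1)=(1,\frac{1}{2}), (c_2,d_2)=(0,\frac{1}{2}) it reproduces a Verlet/Leapfrog step. On the harmonic oscillator it conserves a perturbed energy, as claimed:

```python
import numpy as np

def symplectic_step(q, p, F, h_t, coeffs):
    """One step of the splitting integrator: for each pair (c_i, d_i),
       p <- p + d_i * F(q) * h_t   (kick,  exp(d_i h_t P_V))
       q <- q + c_i * p    * h_t   (drift, exp(c_i h_t P_T))."""
    for c_i, d_i in coeffs:
        p = p + d_i * F(q) * h_t
        q = q + c_i * p * h_t
    return q, p

# Coefficient tables (kick-first convention, possibly permuted vs. the text).
euler = [(1.0, 1.0)]
verlet = [(1.0, 0.5), (0.0, 0.5)]

# Harmonic oscillator H = p^2/2 + q^2/2, F(q) = -q, integrated up to t = 10.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_step(q, p, lambda x: -x, 1e-2, verlet)
energy = 0.5 * p**2 + 0.5 * q**2
```

The final energy stays within O(h_t^2) of the initial value 1/2, instead of drifting as it would for explicit Euler.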

Dahlquist Barrier

Using the same argument proposed in the previous section, if we rewrite a symplectic scheme as a linear multistep scheme, it is possible to use Dahlquist’s second barrier to find out that no symplectic method built in this way can be A-stable or \theta-stable.

In particular we now know how to view any such symplectic integrator as a linear multistep method, therefore we can use some of the instruments developed for the stability of linear multistep methods on symplectic integrators. The instruments we are speaking about are the first and the second Dahlquist barrier:

First Dahlquist Barrier There is no zero-stable linear multistep method, consisting of q steps, which has order of convergence greater than q+1 if q is odd, or greater than q+2 if q is even.

Second Dahlquist Barrier There is no explicit linear multistep method which is A-stable or \theta-stable. Furthermore, there is no linear multistep method which is A-stable and has order of convergence greater than 2.

In particular we’d like to formulate the following theorem, that will be based upon the first and the second Dahlquist barrier.

Theorem There is no symplectic integrator obtained by q-1 iterations of the Negri technique that has order of convergence greater than q+1 if q is odd, or greater than q+2 if q is even. Furthermore, there is no symplectic integrator that is A-stable and has order of convergence greater than 2.

Proof We will show here just a sketch of the proof that is based on what we have seen previously:

  1. We reduce a symplectic integrator to a system of equations representing linear multistep methods, as we have done for the Leapfrog integration.
  2. We are justified in studying the stability of the symplectic integrator equation by equation, as we proved in the Leapfrog Integration paragraph.
  3. We apply the first and the second Dahlquist barrier results to each equation representing a linear multistep method, to prove the above mentioned thesis.


B1. Quarteroni, Alfio, Riccardo Sacco, and Fausto Saleri. Matematica numerica. Springer Science & Business Media, 2008.

B2. Yoshida, Haruo. “Construction of higher order symplectic integrators.” Physics letters A 150.5-7 (1990): 262-268.

B3. Negri, F. “Lie algebras and canonical integration.” Department of Physics, University of Maryland, preprint (1988).
