I’d like to share here the report I prepared for my final project for the course of Numerical Analysis of PDE at KAUST. All the code needed to reproduce the images presented here can be found in this dedicated repo. We are interested in numerically solving the PDE system proposed by Alan Turing in 1952 to model pattern formation on the tissue of animals such as zebras, ladybugs, etc. In particular we will focus our attention on the following reaction-diffusion coupled PDE system:

$$
\begin{cases}
\partial_t u = \delta_1\,\Delta u + \alpha u\,(1-\tau_1 v^2) + v\,(1-\tau_2 u),\\[4pt]
\partial_t v = \delta_2\,\Delta v + \beta v\,\Big(1+\dfrac{\alpha\tau_1}{\beta}uv\Big) + u\,(\gamma+\tau_2 v),
\end{cases}
\tag{1}
$$

and we can separate the diffusion terms from the reaction terms, obtaining,

$$
\partial_t u = \delta_1\,\Delta u + f(u,v), \qquad \partial_t v = \delta_2\,\Delta v + g(u,v),
\tag{2}
$$

where the functions $f$ and $g$ are defined as:

$$
f(u,v) = \alpha u\,(1-\tau_1 v^2) + v\,(1-\tau_2 u), \qquad g(u,v) = \beta v\,\Big(1+\frac{\alpha\tau_1}{\beta}uv\Big) + u\,(\gamma+\tau_2 v).
\tag{3}
$$

To begin we focus our attention on solving the above coupled system of autonomous ordinary differential equations representing the reaction part of the Turing model, i.e. $u' = f(u,v)$, $v' = g(u,v)$. In particular we solve it by implementing the 4th order Runge-Kutta method. First we investigate numerically the stability of the method: we keep the initial conditions fixed and vary the number of points in the temporal discretization, using the values of the parameters in the first line of the Parameters Table. In particular, from Figure 1 one observes that the method appears to be stable if we choose sufficiently many points in the temporal discretization. Next we investigate the long term behaviour of the quantities $u$ and $v$ for the different values in the Parameters Table. As we can observe from Figure 2, for different values of the initial data and of the parameters, in the long term all the ODEs engage in an alternating oscillatory behaviour, similar to the one observed in simpler non-linear ODE models such as the Lotka-Volterra equations. Studying the linearization of the system at the equilibrium points, one finds that the Jacobian has only eigenvalues with negative real part; nevertheless, numerically we notice that all the equilibrium points of the system appear unstable.

| $\delta_1$ | $\delta_2$ | $\tau_1$ | $\tau_2$ | $\alpha$ | $\beta$ | $\gamma$ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.00225 | 0.0045 | 0.02 | 0.2 | 0.899 | -0.91 | -0.899 |
| 0.001 | 0.0045 | 0.02 | 0.2 | 0.899 | -0.91 | -0.899 |
| 0.00225 | 0.0045 | 0.02 | 0.2 | 1.9 | -0.91 | -1.9 |
| 0.00225 | 0.0045 | 2.02 | 0.0 | 2.0 | -2.0 | -2.0 |
| 0.00225 | 0.0021 | 3.5 | 0.0 | 0.899 | -0.91 | -0.899 |
| 0.00225 | 0.0045 | 0.02 | 0.2 | 1.9 | -0.85 | -1.9 |
| 0.00225 | 0.0045 | 2.02 | 0.0 | 2.0 | -0.91 | -2.0 |
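The linearization claim can be checked numerically. The following is a minimal sketch, assuming the reaction terms $f,g$ of equation (3) with the parameters of the first table row; it evaluates a finite-difference Jacobian at the equilibrium $(0,0)$ and prints its eigenvalues:

```python
import numpy as np

# Parameters from the first row of the Parameters Table.
a, b, g, t1, t2 = 0.899, -0.91, -0.899, 0.02, 0.2

def f(u, v):
    return a*u*(1 - t1*v**2) + v*(1 - t2*u)

def gg(u, v):
    return b*v*(1 + (a*t1/b)*u*v) + u*(g + t2*v)

# Central finite-difference Jacobian at the equilibrium (0, 0).
eps = 1e-7
J = np.array([[(f(eps, 0) - f(-eps, 0))/(2*eps), (f(0, eps) - f(0, -eps))/(2*eps)],
              [(gg(eps, 0) - gg(-eps, 0))/(2*eps), (gg(0, eps) - gg(0, -eps))/(2*eps)]])
print(np.linalg.eigvals(J))
```

Analytically the Jacobian at the origin is $\begin{pmatrix}\alpha & 1\\ \gamma & \beta\end{pmatrix}$, which the finite-difference computation reproduces.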

The next step is to solve the diffusion equation; in particular we implemented a second order finite difference scheme, based on the three-point approximation of the second derivative (which gives the five-point stencil for the Laplacian in 2D). In particular the matrix:

$$
A = \frac{1}{h^2}\begin{pmatrix}
-2 & 1 & & & 1\\
1 & -2 & 1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 1 & -2 & 1\\
1 & & & 1 & -2
\end{pmatrix}\in\mathbb{R}^{m\times m}, \qquad h = \frac{2}{m+1},
\tag{4}
$$

approximates the one dimensional second derivative with periodic boundary conditions; the approximation of the Laplacian is then obtained via the Kronecker products $\Delta_h = A\otimes I + I\otimes A$, since the same mesh is taken in the horizontal and vertical directions. Now we would like to compute theoretically how small the time marching step has to be chosen in order to obtain a stable integration in time. To answer this question we notice that $A$ is a circulant matrix and therefore its eigenvalues can be easily computed,

$$
\lambda_j = \frac{2}{h^2}\Big(\cos\Big(\frac{2\pi j}{m}\Big)-1\Big) = -\frac{4}{h^2}\sin^2\Big(\frac{\pi j}{m}\Big), \qquad j = 0,\dots,m-1.
\tag{5}
$$
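The eigenvalue formula $\lambda_j = -(4/h^2)\sin^2(\pi j/m)$ can be verified against a dense eigensolver. A small self-contained sketch (the value of $m$ here is an illustrative assumption):

```python
import numpy as np

m = 16
h = 2/(m + 1)  # mesh width on (-1, 1)

# Periodic second-difference matrix, as in equation (4).
A = (np.diag(-2*np.ones(m)) + np.diag(np.ones(m-1), 1) + np.diag(np.ones(m-1), -1))/h**2
A[0, -1] = A[-1, 0] = 1/h**2

computed = np.sort(np.linalg.eigvalsh(A))
predicted = np.sort(-(4/h**2)*np.sin(np.pi*np.arange(m)/m)**2)
assert np.allclose(computed, predicted)
```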

Furthermore, since circulant matrices commute (they are all diagonalized by the discrete Fourier basis), the eigenvalues of the Kronecker sum $A\otimes I + I\otimes A$ are given by:

$$
\mu_{j,k} = \lambda_j + \lambda_k, \qquad j,k = 0,\dots,m-1,
\tag{6}
$$

and this tells us that the eigenvalues of $\Delta_h$ lie on the real line, attaining $0$ as maximum value and $-8/h^2$ as minimum value. Now, for the numerical method to be stable we want $\Delta t$, the marching step in time, to be such that $\Delta t\,\delta_i\,\mu_{j,k}$ lies within the stability region of the 4th order Runge-Kutta method. From Figure 3 we notice that this implies that the following inequality has to be verified,

$$
\Delta t\,\frac{8\max(\delta_1,\delta_2)}{h^2} \le |x^*|, \qquad\text{i.e.}\qquad \Delta t \le \frac{|x^*|\,h^2}{8\max(\delta_1,\delta_2)},
\tag{7}
$$

where $x^*$ is the point where the negative real axis meets the border of the stability region, marked in red in Figure 3. Substituting the values of the first row of the Parameters Table we obtain the time marching step we should take. The simulation of the diffusive phenomenon obtained with this numerical scheme can be observed in Figure 4.
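The bound $\Delta t \le |x^*| h^2/(8\max(\delta_1,\delta_2))$ is easy to evaluate. A quick sketch for the grid used in the code below ($m=100$ points per direction on $(-1,1)$) and the diffusion coefficients of the first table row; the value $|x^*|\approx 2.79$ is the standard extent of the RK4 stability interval on the negative real axis:

```python
m = 100
h = 2/(m + 1)                      # mesh width
delta_max = max(0.00225, 0.0045)   # diffusion coefficients, first table row
x_star = 2.79                      # RK4 stability interval |x*| on the negative real axis
dt_max = x_star*h**2/(8*delta_max)
print(dt_max)
```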

Now it is time to simulate both the reaction and the diffusion terms together; the result of this simulation can be seen in Figure 5. Using the 4th order Runge-Kutta method, the running time is approximately 3 minutes.

To overcome the CFL condition and obtain a faster method we resort to a splitting technique: we first solve the diffusion part of the PDE using an implicit Euler method. The implicit Euler scheme is absolutely stable, therefore we can use much larger time steps. Then we use the solution obtained as initial data for the reaction part of the system, which is solved using the 4th order Runge-Kutta method previously implemented. This is advantageous because, as we have seen in the beginning, the time step required to solve the reaction system \eqref{eq:ReactionODE} in a stable manner is much larger than the one needed to solve the diffusive part of the model in a stable manner. The results of this simulation, obtained using a larger time marching step, are shown in Figure 6. From Figure 6 we observe that even starting from initial data without any sort of pattern, such as the one shown in Figure 4, one obtains solutions with patterns such as the ones present in different animal tissues.
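The structure of one splitting step can be sketched on a generic ODE $u' = \lambda u + f(u)$. This is a minimal sketch with illustrative names; the actual implementation on the full system appears in the code below:

```python
import numpy as np

def split_step(u, h, lam, f):
    """One Lie splitting step: implicit Euler on the stiff linear part,
    then one explicit RK4 step on the remaining part."""
    u_star = u/(1 - h*lam)   # implicit Euler: (1 - h*lam) u* = u
    k1 = f(u_star)
    k2 = f(u_star + 0.5*h*k1)
    k3 = f(u_star + 0.5*h*k2)
    k4 = f(u_star + h*k3)
    return u_star + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
```

On the PDE, $\lambda$ becomes the sparse diffusion matrix and the division becomes a sparse linear solve.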

The code used to obtain all the figures presented in this report follows.

```
import numpy as np
from math import sqrt
import scipy.integrate
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
from scipy import sparse
from scipy.sparse.linalg import eigs
from scikits.umfpack import spsolve #UMFPACK If you work in serial.
from tqdm import trange, tqdm
from nodepy.runge_kutta_method import *
class ODESol:
    def __init__(self, timesteps, timestep, U):
        self.t = timesteps
        self.h = timestep
        self.y = U

def RK4(F, T, U0, arg, N):
    tt = np.linspace(T[0], T[1], N)
    h = tt[1] - tt[0]
    U = np.zeros([len(U0), N])
    U[:, 0] = U0
    for i in trange(0, N-1):
        Y1 = U[:, i]
        Y2 = U[:, i] + 0.5*h*F(tt[i], Y1, arg)
        Y3 = U[:, i] + 0.5*h*F(tt[i]+0.5*h, Y2, arg)
        Y4 = U[:, i] + h*F(tt[i]+0.5*h, Y3, arg)
        U[:, i+1] = U[:, i] + (h/6)*(F(tt[i], Y1, arg) + 2*F(tt[i]+0.5*h, Y2, arg) + 2*F(tt[i]+0.5*h, Y3, arg) + F(tt[i]+h, Y4, arg))
    return ODESol(tt, h, U)
Table = []
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02,"tau2":0.2,"alpha":0.899, "beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.001,"delta2":0.0045,"tau1":0.02,"tau2":0.2,"alpha":0.899, "beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02,"tau2":0.2,"alpha":1.9, "beta":-0.91,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":2.02,"tau2":0.0,"alpha":2.0, "beta":-0.91,"gamma":-2}]
Table = Table + [{"delta1":0.00105,"delta2":0.0021,"tau1":3.5,"tau2":0.0,"alpha":0.899, "beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02,"tau2":0.2,"alpha":1.9, "beta":-0.85,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0005,"tau1":2.02,"tau2":0.0,"alpha":2.0, "beta":-0.91,"gamma":-2}]
def Reaction(t, x, parameters):
    # The ODE is autonomous so we don't really need
    # the dependence on time.
    u = x[0]; v = x[1]  # We grab the useful quantities.
    d1 = parameters["delta1"]; d2 = parameters["delta2"]
    t1 = parameters["tau1"]; t2 = parameters["tau2"]
    a = parameters["alpha"]; b = parameters["beta"]; g = parameters["gamma"]
    du = a*u*(1 - t1*(v**2)) + v*(1 - t2*u)
    dv = b*v*(1 + (a*t1/b)*(u*v)) + u*(g + t2*v)
    return np.array([du, dv])
t0 = 0. # Initial time
u0 = np.array([0.5,0.5])# Initial values
tfinal = 100. # Final time
dt_output=0.1# Interval between output for plotting
N=int(tfinal/dt_output) # Number of output times
print(N)
tt=np.linspace(t0,tfinal,N) # Output times
ODE = RK4(Reaction,[t0,tfinal],u0,Table[6],N);
uu=ODE.y
plt.plot(tt,uu[0,:],tt,uu[1,:])
plt.legend(["u","v"])
plt.show();
```

```
import numpy as np
from math import sqrt
import scipy.integrate
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
from scipy import sparse
from scipy.sparse.linalg import eigs
from scikits.umfpack import spsolve #UMFPACK If you work in serial.
from tqdm import trange, tqdm
from nodepy.runge_kutta_method import *
class ODESol:
    def __init__(self, timesteps, timestep, U):
        self.t = timesteps
        self.h = timestep
        self.y = U

def RK4(F, T, U0, arg, N):
    tt = np.linspace(T[0], T[1], N)
    h = tt[1] - tt[0]
    U = np.zeros([len(U0), N])
    U[:, 0] = U0
    for i in trange(0, N-1):
        Y1 = U[:, i]
        Y2 = U[:, i] + 0.5*h*F(tt[i], Y1, arg)
        Y3 = U[:, i] + 0.5*h*F(tt[i]+0.5*h, Y2, arg)
        Y4 = U[:, i] + h*F(tt[i]+0.5*h, Y3, arg)
        U[:, i+1] = U[:, i] + (h/6)*(F(tt[i], Y1, arg) + 2*F(tt[i]+0.5*h, Y2, arg) + 2*F(tt[i]+0.5*h, Y3, arg) + F(tt[i]+h, Y4, arg))
    return ODESol(tt, h, U)
Table = []
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.001,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":1.9,"beta":-0.91,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":2.02, "tau2":0.0,"alpha":2.0,"beta":-0.91,"gamma":-2}]
Table = Table + [{"delta1":0.00105,"delta2":0.0021,"tau1":3.5, "tau2":0.0,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":1.9,"beta":-0.85,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0005,"tau1":2.02, "tau2":0.0,"alpha":2.0,"beta":-0.91,"gamma":-2}]
def laplacian_1D(m):
    em = np.ones(m)
    e1 = np.ones(m-1)
    h2 = (2/(m+1))**2  # squared mesh width
    A = sparse.lil_matrix((sparse.diags(-2*em, 0) + sparse.diags(e1, -1) + sparse.diags(e1, 1))/h2)
    # Periodic boundary conditions: the lil format allows item assignment.
    A[0, -1] = 1/h2
    A[-1, 0] = 1/h2
    return A.tocsr()

def laplacian_2D(m):
    I = sparse.identity(m)
    A = laplacian_1D(m)
    return sparse.kron(A, I) + sparse.kron(I, A)

def Diffusion(t, x, parameters):
    B = sparse.bmat([[parameters["delta1"]*A, None], [None, parameters["delta2"]*A]])
    return B @ x
m=100
x=np.linspace(-1,1,m+2); x=x[1:-1]
y=np.linspace(-1,1,m+2); y=y[1:-1]
X,Y=np.meshgrid(x,y)
A=laplacian_2D(m)
#Generating the initial data
mu, sigma = 0, 0.5 # mean and standard deviation
u0 = np.random.normal(mu, sigma,m**2);
v0 = np.random.normal(mu, sigma,m**2)
#Plotting the initial data
plt.figure()
U0=u0.reshape([m,m])
plt.pcolor(X,Y,U0)
plt.colorbar();
plt.figure()
V0=v0.reshape([m,m])
plt.pcolor(X,Y,V0)
plt.colorbar();
TIndex = 0;
t0 = 0.0 # Initial time
tfinal = 100 # Final time
dt_output=0.018 # Interval between output for plotting
N=int(tfinal/dt_output) # Number of output times
print("CFL suggested time step",(2.5*(0.0198)**2)/(4*(Table[TIndex]["delta1"]+Table[TIndex]["delta2"])));
#ODE = scipy.integrate.solve_ivp(Diffusion,[t0,tfinal],np.append(u0,v0,axis=0),args=[Table[TIndex]],method='RK45',t_eval=tt,atol=1.e-10,rtol=1.e-10);
ODE = RK4(Diffusion,[t0,tfinal],np.append(u0,v0,axis=0),Table[TIndex],N);
uu=ODE.y
print("Step size of the Method",ODE.h);
B = sparse.bmat([[Table[TIndex]["delta1"]*A, None], [None, Table[TIndex]["delta2"]*A]])
lam = eigs(B, k=10)[0]  # a few extreme eigenvalues of the diffusion operator
print(ODE.h*lam)
RK44 = loadRKM('RK44')
RK44.plot_stability_region(bounds=[-5,1,-5,5])
plt.plot(ODE.h*lam.real, ODE.h*lam.imag, "b*")
ut = uu[0:m**2,-1];
vt = uu[m**2:,-1];
Ut=ut.reshape([m,m])
plt.figure()
plt.pcolor(X,Y,Ut)
plt.colorbar();
plt.figure()
Vt=vt.reshape([m,m])
plt.pcolor(X,Y,Vt)
plt.colorbar();
plt.show();
```

```
import numpy as np
from math import sqrt
import scipy.integrate
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
from scipy import sparse
from scipy.sparse.linalg import eigs
from scikits.umfpack import spsolve #UMFPACK If you work in serial.
from tqdm import trange, tqdm
from nodepy.runge_kutta_method import *
class ODESol:
    def __init__(self, timesteps, timestep, U):
        self.t = timesteps
        self.h = timestep
        self.y = U

def laplacian_1D(m):
    em = np.ones(m)
    e1 = np.ones(m-1)
    h2 = (2/(m+1))**2  # squared mesh width
    A = sparse.lil_matrix((sparse.diags(-2*em, 0) + sparse.diags(e1, -1) + sparse.diags(e1, 1))/h2)
    # Periodic boundary conditions: the lil format allows item assignment.
    A[0, -1] = 1/h2
    A[-1, 0] = 1/h2
    return A.tocsr()

def laplacian_2D(m):
    I = sparse.identity(m)
    A = laplacian_1D(m)
    return sparse.kron(A, I) + sparse.kron(I, A)
def Reaction(t, x, parameters):
    u = x[0:m**2]; v = x[m**2:]  # We grab the useful quantities.
    d1 = parameters["delta1"]; d2 = parameters["delta2"]
    t1 = parameters["tau1"]; t2 = parameters["tau2"]
    a = parameters["alpha"]; b = parameters["beta"]; g = parameters["gamma"]
    # Reaction terms
    du = a*u*(1 - t1*(v**2)) + v*(1 - t2*u)
    dv = b*v*(1 + (a*t1/b)*(u*v)) + u*(g + t2*v)
    return np.append(du, dv, axis=0)
def SplitSolver(F, T, U0, arg, N):
    tt = np.linspace(T[0], T[1], N)
    h = tt[1] - tt[0]
    U = np.zeros([len(U0), N])
    U[:, 0] = U0
    # The diffusion matrix does not change in time, so we assemble it once.
    B = sparse.bmat([[arg["delta1"]*A, None], [None, arg["delta2"]*A]])
    M = (sparse.identity(B.shape[0]) - h*B).tocsc()
    for i in trange(0, N-1):
        UStar = spsolve(M, U[:, i])  # implicit Euler step for the diffusion part
        Y1 = UStar
        Y2 = UStar + 0.5*h*F(tt[i], Y1, arg)
        Y3 = UStar + 0.5*h*F(tt[i]+0.5*h, Y2, arg)
        Y4 = UStar + h*F(tt[i]+0.5*h, Y3, arg)
        U[:, i+1] = UStar + (h/6)*(F(tt[i], Y1, arg) + 2*F(tt[i]+0.5*h, Y2, arg) + 2*F(tt[i]+0.5*h, Y3, arg) + F(tt[i]+h, Y4, arg))
    return ODESol(tt, h, U)
Table = []
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.001,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":1.9,"beta":-0.91,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":2.02, "tau2":0.0,"alpha":2.0,"beta":-0.91,"gamma":-2}]
Table = Table + [{"delta1":0.00105,"delta2":0.0021,"tau1":3.5, "tau2":0.0,"alpha":0.899,"beta":-0.91,"gamma":-0.899}]
Table = Table + [{"delta1":0.00225,"delta2":0.0045,"tau1":0.02, "tau2":0.2,"alpha":1.9,"beta":-0.85,"gamma":-1.9}]
Table = Table + [{"delta1":0.00225,"delta2":0.0005,"tau1":2.02, "tau2":0.0,"alpha":2.0,"beta":-0.91,"gamma":-2}]
TIndex = int(input("What pattern you want to simulate ? "))
m=100
x=np.linspace(0,1,m+2); x=x[1:-1]
y=np.linspace(0,1,m+2); y=y[1:-1]
X,Y=np.meshgrid(x,y)
A=laplacian_2D(m)
#Generating the initial data
mu, sigma = 0, 0.5 # mean and standard deviation
u0 = np.random.normal(mu, sigma,m**2);
v0 = np.random.normal(mu, sigma,m**2)
#Plotting the initial data
plt.figure()
U0=u0.reshape([m,m])
plt.pcolor(X,Y,U0)
plt.colorbar();
plt.title("Initial Data u");
plt.figure()
V0=v0.reshape([m,m])
plt.pcolor(X,Y,V0)
plt.colorbar();
plt.title("Initial Data v");
t0 = 0.0 # Initial time
tfinal = 150 # Final time
dt_output=0.3 # Interval between output for plotting
N=int(tfinal/dt_output) # Number of output times
ODE = SplitSolver(Reaction,[t0,tfinal],np.append(u0,v0,axis=0),Table[TIndex],N);
uu=ODE.y
ut = uu[0:m**2,-1];
vt = uu[m**2:,-1];
Ut=ut.reshape([m,m])
plt.figure()
plt.pcolor(X,Y,Ut)
plt.colorbar();
plt.figure()
Vt=vt.reshape([m,m])
plt.pcolor(X,Y,Vt)
plt.colorbar();
plt.show();
```

During the course of Advanced Numerical Methods for PDE we studied the Boundary Element Method (BEM) for the solution of the Sound-Soft Scattering Problem; this post is a remake of the report I presented as part of the exam.

We fix the notation that we will use from now on in this report: let $\Omega$ be a bounded Lipschitz domain and $\Omega_+$ the complement of its closure in $\mathbb{R}^2$. We require $\Omega$ to be simply connected.

Let us recall the Helmholtz PDE, which will be the focus of our report:

$$\Delta u + k^2 u = 0.$$

We will say that a function $u$ is a radiating Helmholtz solution if for a given $k > 0$:

$$\Delta u + k^2 u = 0 \quad \text{in } \Omega_+,$$

and $\partial_r u - iku = o(r^{-1/2})$ uniformly as $r = |\vec{x}| \to \infty$ (the Sommerfeld radiation condition).

Given $k > 0$ and a function $g$ on $\partial\Omega$, we say that a function $u$ is a solution to the Exterior Dirichlet Problem associated with $k$ and $g$ if,

$$\Delta u + k^2 u = 0 \ \text{in } \Omega_+, \qquad \gamma u = g \ \text{on } \partial\Omega, \qquad u \ \text{radiating},$$

where $\gamma$ is the trace operator from $H^1_{loc}(\Omega_+)$ to $H^{1/2}(\partial\Omega)$, defined on smooth functions and then extended by density.

Last we introduce the definition of the sound-soft scattering problem, which is the problem we aim to solve numerically and analytically in this report. Given an incident field $u^{inc}$ that is a Helmholtz solution on $\mathbb{R}^2$, we say that $u^s$ is a solution to the sound-soft scattering problem (SSSP) if $u^s$ is a radiating Helmholtz solution in $\Omega_+$ and:

$$u^{inc} + u^s = 0 \quad \text{on } \partial\Omega.$$

The focus of this report will be to study the SSSP both analytically and numerically, in particular using a collocation BEM, with the Netgen/NGSolve Python interface as computational toolbox (more info). First of all we have to import all the libraries related to the Netgen/NGSolve environment; we also import some elements from the native math library, numpy and matplotlib. Last but not least we import the NGSolve special_functions library, which has to be compiled separately and exposes all the SLATEC functions as CoefficientFunctions (more info).

```
from ngsolve import * #Importing NGSolve library
import netgen.gui #Importing Netgen User Interface
import netgen.meshing as ms #Importing Netgen meshing library
import ngsolve.special_functions as sp #Importing NGSolve wrapper for SLATEC
from netgen.geom2d import SplineGeometry #Importing 2D geometry tools
#Importing std python library
from math import pi
import numpy as np
import matplotlib.pyplot as plt
```

We want to consider a sound-soft scattering problem where $\Omega$ is a circle of radius $0.25$ centered at the origin and the incident field $u^{inc}$ is a plane wave defined as

$$u^{inc}(\vec{x}) = e^{ik\,\vec{x}\cdot\vec{d}},$$

for a unit direction $\vec{d}$ and wave number $k$.

In the following code we first define the geometry using the Netgen 2D geometry package and then we draw the plane wave $u^{inc}$.

```
geo = SplineGeometry()
geo.AddRectangle((1,1),(-1,-1),bc="rectangle") #Adding a square background to the geometry.
########################
# CIRCULAR SCATTERER #
#######################
# In the following lines we define the circular scatterer
N = 40; #Number of segments in which we split the circle
pnts=[];
#We add points to the mesh, when N grows we get closer to a circle
for i in range(0, N):
    pnts.append((0.25*cos(2*(i/N)*pi), 0.25*sin(2*(i/N)*pi)))  # Segment end points
    # Mid points to define the correct spline
    pnts.append((0.25*cos(2*(i/N)*pi + (pi/N)), 0.25*sin(2*(i/N)*pi + (pi/N))))
#We connect the points to form a circle using spline3 segments.
P = [geo.AppendPoint(*pnt) for pnt in pnts]
Curves = [[["spline3",P[i-1],P[i],P[i+1]],"c"+str(int((i-1)/2))] for i in range(1,2*N-1,2)]
Curves.append([["spline3",P[2*N-2],P[2*N-1],P[0]],"c"+str(N-1)])
[geo.Append(c,bc=bc) for c,bc in Curves]
mesh = Mesh(geo.GenerateMesh(maxh=0.1)) #Meshing ...
#We define the polar coordinates
rho = sqrt(x**2+y**2);
theta = atan2(y,x)
Draw(theta,mesh,'theta')
#We here define a plane wave in the direction d=(0.5*sqrt(3),0.5) with k=30.
# u(x) = exp(i*k*(x.d)) = exp(i*k*rho*|d|*cos(theta-pi/6))
k = 30
lam = 1
def PW(Angle):
    pw = exp(1j*k*sqrt(x**2+y**2)*sqrt((0.5*sqrt(3))**2+0.25)*cos(theta-Angle))
    return pw
Uin = IfPos(rho-0.25,PW(pi/6),0)
Draw(Uin,mesh,'Uin',min=-1,max=1.0,autoscale=False)
```

Now we study a preliminary case: we assume the Dirichlet data to be a circular harmonic on $\partial\Omega$, i.e. $e^{il\theta}$ for a fixed $l$. In this case a solution of the SSSP is given by a linear combination of $H_l^{(1)}(k\rho)e^{il\theta}$ and $H_l^{(2)}(k\rho)e^{il\theta}$. This is because each of these terms verifies the Helmholtz equation, and therefore any linear combination of them verifies the Helmholtz equation as well. Furthermore, on $\partial\Omega$, i.e. for $\rho = R$, each term is proportional to $e^{il\theta}$. Therefore we know that for a circular harmonic the solution has the form:

$$u(\rho,\theta) = \Big(a\,\frac{H_l^{(1)}(k\rho)}{H_l^{(1)}(kR)} + b\,\frac{H_l^{(2)}(k\rho)}{H_l^{(2)}(kR)}\Big)e^{il\theta}.$$

We still have to meet the requirement that $u$ is a radiating Helmholtz solution; we expand the Hankel functions for a large argument as:

$$H_l^{(1,2)}(z) \sim \sqrt{\frac{2}{\pi z}}\,e^{\pm i\left(z - \frac{l\pi}{2} - \frac{\pi}{4}\right)}, \qquad z \to \infty.$$

Only the $H^{(1)}$ term is the radiating component and therefore we take $b = 0$.
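The large-argument behaviour $H_l^{(1)}(z) \approx \sqrt{2/(\pi z)}\,e^{i(z - l\pi/2 - \pi/4)}$ is easy to verify numerically. A quick sketch using scipy.special (an assumption of this check; the report itself uses the SLATEC wrappers):

```python
import numpy as np
from scipy.special import hankel1

z, l = 1000.0, 3
asym = np.sqrt(2/(np.pi*z))*np.exp(1j*(z - l*np.pi/2 - np.pi/4))
rel_err = abs(hankel1(l, z) - asym)/abs(hankel1(l, z))
assert rel_err < 1e-2
```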

Now, since the Helmholtz equation is linear, as are the Dirichlet boundary condition and the radiating condition, we can state that if we can decompose $u^{inc}|_{\partial\Omega}$ into a sum of circular harmonics, i.e. $u^{inc}|_{\partial\Omega} = \sum_l u_l\,e^{il\theta}$, we have a solution of the SSSP given by:

$$u^s(\rho,\theta) = -\sum_l u_l\,\frac{H_l^{(1)}(k\rho)}{H_l^{(1)}(kR)}\,e^{il\theta}.$$

Let us consider a plane wave that propagates in the unit direction $\vec{d}$ and that has wave number $k$; we can write its equation as:

$$u^{inc}(\vec{x}) = e^{ik\,\vec{x}\cdot\vec{d}},$$

and we can rewrite the above equation in polar coordinates as follows:

$$u^{inc}(\rho,\theta) = e^{ik\rho\cos(\theta-\theta_d)},$$

where $\theta_d$ is the angle of the direction $\vec{d}$. Now we can put to good use the Jacobi-Anger expansion to obtain:

$$e^{iz\cos\varphi} = J_0(z) + 2\sum_{l=1}^{\infty} i^l J_l(z)\cos(l\varphi).$$

The usual form of the Jacobi-Anger expansion is $e^{iz\cos\varphi} = \sum_{l=-\infty}^{\infty} i^l J_l(z)\,e^{il\varphi}$, but using the identity $J_{-l}(z) = (-1)^l J_l(z)$ we can write it in the form above. We use this representation because negative index Hankel functions present some problems when wrapped from SLATEC in NGSolve special_functions.
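The truncated cosine form of the expansion converges very fast once $l$ exceeds $z$. A quick check with scipy (an assumption of this snippet; the report itself uses the SLATEC wrappers):

```python
import numpy as np
from scipy.special import jv

z, phi = 7.5, 0.9
series = jv(0, z) + 2*sum((1j**l)*jv(l, z)*np.cos(l*phi) for l in range(1, 50))
exact = np.exp(1j*z*np.cos(phi))
assert abs(series - exact) < 1e-10
```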

We now use the NGSolve toolbox to check whether the series representation of $u^{inc}$ is accurate; we then use the series decomposition of $u^{inc}$ to compute an approximation of $u^s$ and $u^{tot} = u^{inc} + u^s$.

```
#We define the Mie series substituting the coefficients of the Jacobi-Anger expansion
#in the series representation of Ust
def MieSeriesPW(Angle):
    # First term of the expansion
    u = sp.jv(z=k*sqrt(x**2+y**2)*sqrt((0.5*sqrt(3))**2+0.25), v=0)
    for l in range(1, 50):
        # Bessel portion of the expansion
        J = sp.jv(z=k*sqrt(x**2+y**2)*sqrt((0.5*sqrt(3))**2+0.25), v=l)
        a = cos(l*(theta-Angle))  # Jacobi-Anger coefficient
        t = (1j**l)*J*a  # l-th term of the expansion
        u = u + 2*t  # Adding the term to the series
    return u
UMie = IfPos(rho-0.25,MieSeriesPW(pi/6),0)
Draw(UMie,mesh,'UMie',min=-1,max=1,autoscale=False) #Drawing the Mie series.
print("L2 error of Mie-Series representation: ",abs(Integrate((UMie-Uin)**2,mesh)))
```

```
def MieSerieST(Angle):
    # Hankel portion of the first term of the expansion
    h = sp.hankel1(z=k*sqrt(x**2+y**2), v=0)/sp.hankel1(z=k*0.25, v=0)
    a = sp.jv(z=k*0.25, v=0)  # Jacobi-Anger coefficient
    u = -1*a*h
    for l in range(1, 50):
        # Hankel portion of the expansion
        h = sp.hankel1(z=k*sqrt(x**2+y**2), v=l)/sp.hankel1(z=k*0.25, v=l)
        t = -2*(1j**l)*(sp.jv(z=k*0.25, v=l))*cos(l*(theta-Angle))*h  # l-th term
        u = u + t  # Adding a new term to the series
    return u
Ust = IfPos(rho-0.25,MieSerieST(pi/6),0)
Draw(Ust,mesh,'Ust',min=-1,max=1,autoscale=False)
Utot = Uin+Ust;
Draw(Utot,mesh,'Utot',min=-1,max=1,autoscale=False)
```

In this section we focus on the SSSP where $\Omega$ is a generic polygon; in particular we choose $\Omega$ to be a triangle with barycenter at the origin. First we have to define the geometry of the problem using the Netgen 2D geometry library.

```
geo = SplineGeometry()
geo.AddRectangle((1,1),(-1,-1),bc="rectangle")
########################
# TRIANGLE SCATTERER #
########################
# In the following lines we define the triangle scatterer
gamma = 0.6; #Scaling factor of the (0,0)-(0,1)-(1,0) triangle.
def l(x):
    return -x+1
M = 10; #Number of segments in which we split each side of the triangle.
#Adding the points to the mesh
pnts=[];
for i in range(0, M):
    pnts.append((0, i/M))
for i in range(0, M):
    pnts.append((i/M, l(i/M)))
for i in range(0, M):
    pnts.append(((M-i)/M, 0))
#Shifting the points such that (0,0) is the barycenter
TPnts = [(gamma*(p[0]-(1/3)),gamma*(p[1]-(1/3)))for p in pnts]
P = [geo.AppendPoint(*pnt) for pnt in TPnts]
plt.scatter([p[0] for p in TPnts],[p[1] for p in TPnts])
#Connecting the points using straight segments.
Curves = [[["line",P[i],P[i+1]],"l"+str(int(i))] for i in range(0,len(pnts)-1)]
Curves.append([["line",P[len(pnts)-1],P[0]],"l"+str(int(len(pnts)-1))])
#Adding the midpoints of the segments to the mesh.
pnts2=[];
for j in range(0, 3*M):
    if j in range(0, M):
        pnts2.append((pnts[j][0], pnts[j][1]+(0.5/M)))
    elif j in range(2*M, 3*M):
        pnts2.append((pnts[j][0]-(0.5/M), pnts[j][1]))
    else:
        pnts2.append((pnts[j][0]+(0.5/M), pnts[j][1]-(0.5/M)))
#Shifting the points such that (0,0) is the barycenter
TPnts2 = [(gamma*(p[0]-(1/3)),gamma*(p[1]-(1/3)))for p in pnts2]
P2 = [geo.AppendPoint(*pnt) for pnt in TPnts2]
plt.scatter([p[0] for p in TPnts2],[p[1] for p in TPnts2],c="r")
plt.scatter(0,0,c="y")
#Connecting the points using straight segments.
Curves = []
for i in range(0, len(pnts)-1):
    Curves.append([["line", P[i], P2[i]], "m"+str(2*int(i+1)-1)])
    Curves.append([["line", P2[i], P[i+1]], "m"+str(2*int(i+1))])
Curves.append([["line", P[len(pnts)-1], P2[len(pnts)-1]], "m"+str(2*(len(pnts2))-1)])
Curves.append([["line", P2[len(pnts)-1], P[0]], "m"+str(2*(len(pnts2)))])
[geo.Append(c,bc=bc) for c,bc in Curves]
#Plot the mesh points and printing some measures.
plt.scatter(TPnts[0][0],TPnts[0][1],c="g")
plt.scatter(TPnts[M][0],TPnts[M][1],c="g")
plt.scatter(TPnts[2*M][0],TPnts[2*M][1],c="g")
print(TPnts[0][0],TPnts[0][1])
print((TPnts[2*M][1]-TPnts[M][1])/(TPnts[2*M][0]-TPnts[M][0]))
print(sqrt((TPnts[M+1][0]-TPnts[M][0])**2+(TPnts[M+1][1]-TPnts[M][1])**2),(gamma/M)*sqrt(2))
print(sqrt((TPnts[1][0]-TPnts[0][0])**2+(TPnts[1][1]-TPnts[0][1])**2),(gamma/M))
mesh = Mesh(geo.GenerateMesh(maxh=0.5/M)) #Meshing ...
plt.scatter(TPnts2[2*M-1][0],TPnts2[2*M-1][1],c="y")
#Defining polar coordinates
rho = sqrt(x**2+y**2);
theta = atan2(y,x)
Draw(theta,mesh,'theta')
```

To do this we introduce the Helmholtz fundamental solution:

$$\Phi(\vec{x},\vec{y}) = \frac{i}{4}H_0^{(1)}(k|\vec{x}-\vec{y}|).$$

Any linear combination of the form $\sum_j c_j\,\Phi(\vec{x},\vec{y}_j)$, where $\vec{y}_j \in \partial\Omega$, is a radiating Helmholtz solution in $\Omega_+$. We can take a continuous linear combination, concentrated on $\partial\Omega$, to obtain a radiating Helmholtz solution:

$$(\tilde{S}\psi)(\vec{x}) = \int_{\partial\Omega}\Phi(\vec{x},\vec{y})\,\psi(\vec{y})\,ds(\vec{y}), \qquad \vec{x}\in\Omega_+;$$

the operator $\tilde{S}$ is called the Helmholtz single layer potential.

We similarly define the single layer operator $S$, taking now $\vec{x}$ on the boundary:

$$(S\psi)(\vec{x}) = \int_{\partial\Omega}\Phi(\vec{x},\vec{y})\,\psi(\vec{y})\,ds(\vec{y}), \qquad \vec{x}\in\partial\Omega.$$

The sound-soft condition leads to the integral equation $S\psi = u^{inc}$ on $\partial\Omega$, with scattered field $u^s = -\tilde{S}\psi$. To find the density $\psi$ that satisfies the previous equation we use a collocation method. In a collocation method we fix a discrete space $V_N$; in particular, since $\psi$ lives in $H^{-1/2}(\partial\Omega)$, we would like $V_N$ to accommodate discontinuous functions. To build $V_N$ we divide $\partial\Omega$ into a number of elements $\Gamma_1,\dots,\Gamma_N$, then we define $V_N$ as the space of functions that are constant on each element:

$$V_N = \operatorname{span}\{\chi_{\Gamma_1},\dots,\chi_{\Gamma_N}\}.$$

We can rewrite the integral equation assuming that $\psi$ lives in $V_N$, i.e. $\psi = \sum_{j=1}^{N}\psi_j\,\chi_{\Gamma_j}$; we impose the equation at the midpoints of the segments to obtain the following linear system:

$$\sum_{j=1}^{N}\psi_j\int_{\Gamma_j}\Phi(\vec{m}_i,\vec{y})\,ds(\vec{y}) = u^{inc}(\vec{m}_i), \qquad i=1,\dots,N,$$

where $\vec{m}_i$ is the midpoint of the segment $\Gamma_i$. Solving the linear system we obtain the vector $(\psi_1,\dots,\psi_N)$, and we can use the representation formula $u^s = -\tilde{S}\psi$ to compute the scattered wave:
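To make the structure of the linear system explicit, here is a hypothetical plain-numpy version of the collocation assembly on the circular scatterer, with the boundary integrals replaced by a one-point midpoint quadrature and a crude regularization of the singular diagonal (both are assumptions of this sketch; the report's actual assembly uses NGSolve's Integrate):

```python
import numpy as np
from scipy.special import hankel1

k, R, N = 30.0, 0.25, 200
t = 2*np.pi*(np.arange(N) + 0.5)/N                # midpoint angles
mids = R*np.column_stack([np.cos(t), np.sin(t)])  # collocation points m_i
seg = 2*np.pi*R/N                                 # segment length |Gamma_j|

# A[i, j] ~ integral over Gamma_j of Phi(m_i, y), via midpoint quadrature.
d = np.linalg.norm(mids[:, None, :] - mids[None, :, :], axis=2)
np.fill_diagonal(d, seg/4)                        # crude fix of the log singularity
A = 0.25j*hankel1(0, k*d)*seg

uinc = np.exp(1j*k*(mids @ np.array([np.sqrt(3)/2, 0.5])))  # plane wave at midpoints
psi = np.linalg.solve(A, uinc)                    # collocation system: S psi = uinc
```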

```
#We here define a plane wave in the direction d=(0.5*sqrt(3),0.5) with k=30.
# u(x) = exp(i*k*(x.d)) = exp(i*k*rho*|d|*cos(theta-pi/6))
k = 30
def PW(Angle):
    pw = exp(1j*k*sqrt(x**2+y**2)*sqrt((0.5*sqrt(3))**2+0.25)*cos(theta-Angle))
    return pw
N = 3*M; #Full number of segments
A = np.zeros((N,N),complex) #Init of the stiffness matrix
F = np.zeros((N,1),complex) #Init of the vector b in Ax=b.
for j in range(0, N):
    # Single layer potential with the j-th mid point as one of its variables.
    HelmholtzSLP = CoefficientFunction(0.25*1j*sp.hankel1(z=k*sqrt((TPnts2[j][0]-x)**2+(TPnts2[j][1]-y)**2), v=0))
    for i in range(0, N):
        # Integrating the SLP to build the stiffness matrix.
        A[i,j] = Integrate(HelmholtzSLP, mesh, definedon=mesh.Boundaries("m"+str(2*(i+1)-1)), order=20) + Integrate(HelmholtzSLP, mesh, definedon=mesh.Boundaries("m"+str(2*(i+1))), order=20)
        if i in range(M, 2*M):
            A[i,j] = (1/sqrt(2))*A[i,j]  # Correction factor due to NGSolve integration.
    mip = mesh(TPnts2[j][0], TPnts2[j][1])  # Defining the mid point as a mesh point
    # The j-th component of b is the value of the plane wave at the mid point
    F[j] = PW(pi/3)(mip)
print("Matrix Conditioning: ", np.linalg.cond(A))
Psi = np.linalg.solve(A,F) #Computing x
gamma=0.6 #Scaling factor of the (0,0)-(0,1)-(1,0) triangle.
def SLO(Psi):
    # We integrate using a mid point quadrature rule to obtain a function of one variable.
    u = 0*x + 0*y  # Init u
    N = len(Psi)
    M = int(len(Psi)/3)
    for j in range(0, N):
        # We split the quadrature among the three edges of the triangle
        if j in range(0, M):
            plt.scatter(TPnts2[j][0], TPnts2[j][1], c="y")
            h = (gamma)*(1/M)
        elif j in range(2*M, N):
            plt.scatter(TPnts2[j][0], TPnts2[j][1], c="g")
            h = (gamma)*(1/M)
        else:
            plt.scatter(TPnts2[j][0], TPnts2[j][1], c="r")
            dt = 1  # sqrt(2): on the hypotenuse there would be a factor sqrt(2) to correct.
            h = (gamma/M)*dt
        HelmholtzSLP = CoefficientFunction(0.25*1j*sp.hankel1(z=k*sqrt((TPnts2[j][0]-x)**2+(TPnts2[j][1]-y)**2), v=0))
        u -= h*Psi[j][0]*HelmholtzSLP
    return u
Ust = SLO(Psi)
Uin = PW(pi/3)
Utot = Ust+Uin;
Draw(Uin,mesh,'Uin')
Draw(Ust,mesh,'Ust')
Draw(Utot,mesh,'Utot')
triag = SplineGeometry()
Del = (gamma)*(2/M);
triagpnts =[(-0.2+Del,-0.2+Del),(-0.2+Del,0.4-Del),(0.4-Del,-0.2+Del)]
p1,p2,p3= [triag.AppendPoint(*pnt) for pnt in triagpnts]
triag.Append(['line', p1, p3], bc='bottom', leftdomain=1, rightdomain=0)
triag.Append(['line', p3, p2], bc='right', leftdomain=1, rightdomain=0)
triag.Append(['line', p2,p1], bc='left', leftdomain=1, rightdomain=0)
TriagMesh = Mesh(triag.GenerateMesh(maxh=0.02,quad=True))
MIP = TriagMesh(-0.2+Del,-0.2+Del);
Ust = SLO(Psi)
Uin = PW(pi/3)
u = Ust+Uin;
Draw(u,TriagMesh,'u')
print(u(MIP))
```

We now study the behaviour of the quantity $|u^{inc}+u^s|$ inside the scatterer as the discretization is refined, knowing that the total field is null inside the scatterer.

```
err = [] #Init. vector of abs(Ust+Uin)
errI = [] #Init. vector of Integral Ust+Uin
H = [] #Init. of the diameter vector
IT = list(range(4,50,2));
#Computing Ust and Uin for each M.
for M in IT:
    geo = SplineGeometry()
    geo.AddRectangle((1,1), (-1,-1), bc="rectangle")
    gamma = 0.6
    def l(x):
        return -x+1
    pnts = []
    for i in range(0, M):
        pnts.append((0, i/M))
    for i in range(0, M):
        pnts.append((i/M, l(i/M)))
    for i in range(0, M):
        pnts.append(((M-i)/M, 0))
    TPnts = [(gamma*(p[0]-(1/3)), gamma*(p[1]-(1/3))) for p in pnts]
    P = [geo.AppendPoint(*pnt) for pnt in TPnts]
    Curves = [[["line", P[i], P[i+1]], "l"+str(int(i))] for i in range(0, len(pnts)-1)]
    Curves.append([["line", P[len(pnts)-1], P[0]], "l"+str(int(len(pnts)-1))])
    pnts2 = []
    for j in range(0, 3*M):
        if j in range(0, M):
            pnts2.append((pnts[j][0], pnts[j][1]+(0.5/M)))
        elif j in range(2*M, 3*M):
            pnts2.append((pnts[j][0]-(0.5/M), pnts[j][1]))
        else:
            pnts2.append((pnts[j][0]+(0.5/M), pnts[j][1]-(0.5/M)))
    TPnts2 = [(gamma*(p[0]-(1/3)), gamma*(p[1]-(1/3))) for p in pnts2]
    P2 = [geo.AppendPoint(*pnt) for pnt in TPnts2]
    Curves = []
    for i in range(0, len(pnts)-1):
        Curves.append([["line", P[i], P2[i]], "m"+str(2*int(i+1)-1)])
        Curves.append([["line", P2[i], P[i+1]], "m"+str(2*int(i+1))])
    Curves.append([["line", P[len(pnts)-1], P2[len(pnts)-1]], "m"+str(2*(len(pnts2))-1)])
    Curves.append([["line", P2[len(pnts)-1], P[0]], "m"+str(2*(len(pnts2)))])
    [geo.Append(c, bc=bc) for c, bc in Curves]
    mesh = Mesh(geo.GenerateMesh(maxh=0.5/M))
    rho = sqrt(x**2+y**2)
    theta = atan2(y,x)
    N = 3*M
    A = np.zeros((N,N), complex)
    F = np.zeros((N,1), complex)
    for j in range(0, N):
        # Single layer potential kernel centered at the j-th mid point.
        HelmholtzSLP = CoefficientFunction(0.25*1j*sp.hankel1(z=k*sqrt((TPnts2[j][0]-x)**2+(TPnts2[j][1]-y)**2), v=0))
        for i in range(0, N):
            A[i,j] = Integrate(HelmholtzSLP, mesh, definedon=mesh.Boundaries("m"+str(2*(i+1)-1)), order=10) + Integrate(HelmholtzSLP, mesh, definedon=mesh.Boundaries("m"+str(2*(i+1))), order=10)
            if i in range(M, 2*M):
                A[i,j] = (1/sqrt(2))*A[i,j]  # Correction factor due to NGSolve integration.
        mip = mesh(TPnts2[j][0], TPnts2[j][1])
        F[j] = PW(pi/3)(mip)
    print("Matrix Conditioning: ", np.linalg.cond(A))
    Psi = np.linalg.solve(A, F)
    triag = SplineGeometry()
    Del = (gamma)*(2/M)
    triagpnts = [(-0.2,-0.2), (-0.2,0.4), (0.4,-0.2)]
    p1, p2, p3 = [triag.AppendPoint(*pnt) for pnt in triagpnts]
    triag.Append(['line', p1, p3], bc='bottom', leftdomain=1, rightdomain=0)
    triag.Append(['line', p3, p2], bc='right', leftdomain=1, rightdomain=0)
    triag.Append(['line', p2, p1], bc='left', leftdomain=1, rightdomain=0)
    TriagMesh = Mesh(triag.GenerateMesh(maxh=0.02))
    MIP = TriagMesh(-0.2+Del, -0.2+Del)
    Ust = SLO(Psi)
    Uin = PW(pi/3)
    u = Ust + Uin
    print("IT: ", M, " Radius: ", 0.5/M, " Sum of Ust and Uin in (0,0): ", abs(u(MIP)), " Integral: ", abs(Integrate(u, TriagMesh, order=10)))
    err.append(abs(u(MIP)))
    errI.append(abs(Integrate(u, TriagMesh, order=10)))
    H.append(0.5/M)
```

```
#Plot the graph of err vs H
plt.figure()
plt.plot(H,err,"*-")
plt.plot(H,H,"r--")
plt.title("Absolute Value Sum Of Scattered And Incoming Wave")
plt.ylabel("|Ust + Uin| in a point of the scatterer")
plt.xlabel("h")
plt.yscale("log")
plt.xscale("log")
print("Factor of convergence: ", np.log(err[-2]/err[-10])/np.log(H[-2]/H[-10]))
#Plot the graph of err vs H
plt.figure()
plt.plot(H,errI,"*-")
plt.plot(H,H,"r--")
plt.title("Absolute Value Sum Of Scattered And Incoming Wave")
plt.ylabel("Integral of Ust + Uin")
plt.xlabel("h")
plt.yscale("log")
plt.xscale("log")
print("Factor of convergence: ", np.log(errI[-1]/errI[-10])/np.log(H[-1]/H[-10]))
```

We now take a look at a different method, aimed at solving the SSSP using the finite element method (FEM) rather than the boundary element method (BEM). The exterior Dirichlet problem is equivalent to the following PDE on a truncated domain $\Omega_R = B_R\setminus\overline{\Omega}$:

$$\Delta u + k^2 u = 0 \ \text{in } \Omega_R, \qquad u = g \ \text{on } \partial\Omega, \qquad \partial_r u = \mathrm{DtN}(u) \ \text{on } \partial B_R,$$

where DtN is the Dirichlet-to-Neumann operator, defined as follows.

Given a function $g$ on $\partial B_R$ we can decompose it in a circular harmonics series, i.e. a Fourier series:

$$g(\theta) = \sum_{l\in\mathbb{Z}} \hat{g}_l\,e^{il\theta};$$

the DtN operator acts as follows:

$$\mathrm{DtN}(g)(\theta) = \sum_{l\in\mathbb{Z}} k\,\frac{H_l^{(1)\prime}(kR)}{H_l^{(1)}(kR)}\,\hat{g}_l\,e^{il\theta}.$$

This can be written in variational form as follows: find $u$, with $u = g$ on $\partial\Omega$, such that

$$\int_{\Omega_R}\nabla u\cdot\nabla\bar{v} - k^2\int_{\Omega_R}u\,\bar{v} - \int_{\partial B_R}\mathrm{DtN}(u)\,\bar{v} = 0 \qquad \forall v.$$

As $R$ grows to infinity we can replace the previous problem with the following:

$$\Delta u + k^2 u = 0 \ \text{in } \Omega_R, \qquad u = g \ \text{on } \partial\Omega, \qquad \partial_r u - iku = 0 \ \text{on } \partial B_R.$$

Here we have imposed a condition that mimics the radiation condition with an impedance condition, which is equivalent to the radiation condition as the truncation radius grows to infinity.

```
geo = SplineGeometry()
geo.AddCircle((0, 0), 1, bc="outer")
########################
# TRIANGLE SCATTERER
########################
gamma = 0.6
def l(x):
    return -x + 1
M = 10
N = 3*M
pnts = []
for i in range(0, M+1):
    pnts.append((0, i/M))
for i in range(1, M):
    pnts.append((i/M, l(i/M)))
for i in range(0, M):
    pnts.append(((M-i)/M, 0))
TPnts = [(gamma*(p[0]-(1/3)), gamma*(p[1]-(1/3))) for p in pnts]
P = [geo.AppendPoint(*pnt) for pnt in TPnts]
Curves = [[["line", P[i], P[i+1]], "l"+str(i)] for i in range(0, len(pnts)-1)]
Curves.append([["line", P[len(pnts)-1], P[0]], "l"+str(len(pnts)-1)])
pnts2 = []
for j in range(0, 3*M):
    if j in range(0, M):
        pnts2.append((pnts[j][0], pnts[j][1]+(0.5/M)))
    elif j in range(2*M, 3*M):
        pnts2.append((pnts[j][0]-(0.5/M), pnts[j][1]))
    else:
        pnts2.append((pnts[j][0]+(0.5/M), pnts[j][1]-(0.5/M)))
TPnts2 = [(gamma*(p[0]-(1/3)), gamma*(p[1]-(1/3))) for p in pnts2]
P2 = [geo.AppendPoint(*pnt) for pnt in TPnts2]
Curves = []
for i in range(0, len(pnts)-1):
    Curves.append([["line", P[i], P2[i]], "m"+str(2*(i+1)-1)])
    Curves.append([["line", P2[i], P[i+1]], "m"+str(2*(i+1))])
Curves.append([["line", P[len(pnts)-1], P2[len(pnts)-1]], "m"+str(2*len(pnts2)-1)])
Curves.append([["line", P2[len(pnts)-1], P[0]], "m"+str(2*len(pnts2))])
[geo.Append(c, bc=bc) for c, bc in Curves]
mesh = Mesh(geo.GenerateMesh(maxh=0.5/M))  # Meshing ...
# Defining polar coordinates.
rho = sqrt(x**2 + y**2)
theta = atan2(y, x)
```

```
# We here define a plane wave in the direction d=(0.5*sqrt(3),0.5) with k=10.
# u(x) = exp(i*k*(x·d)) = exp(i*k*rho*|d|*cos(theta-pi/6))
k = 10
def PW(Angle):
    pw = exp(1j*k*sqrt(x**2+y**2)*sqrt((0.5*sqrt(3))**2+0.25)*cos(theta-Angle))
    return pw
Uin = PW(pi/3)
Draw(Uin,mesh,'Uin'); #We plot Uin
#We define a H1 complex space with Dirichlet boundary condition on the circle edge.
fes = H1(mesh, order=1, complex=True,dirichlet="|".join(["m"+str(i) for i in range(1,2*N+1)]))
u, v = fes.TnT() #Trial and test functions
#We define the bilinear form.
a = BilinearForm(fes)
a += grad(u)*grad(v)*dx - k**2*u*v*dx
a += -k*1j*u*v * ds("outer") #Impedance condition
a.Assemble()
#Linear application
f = LinearForm(fes)
f += 0 * v * dx
f.Assemble()
uh = GridFunction(fes, name="uh") #Init. Ust
uBND = GridFunction(fes, name="UinBnd") #Init. Uin on the boundary
uh.Set(Uin,BND) #Setting Boundary condition
uBND.Set(Uin,BND) #Setting Boundary condition
Draw(uBND,mesh,"UinBND") #Plotting Uin on the boundary
#Solving for Ust
r = f.vec.CreateVector()
r.data = f.vec - a.mat * uh.vec
uh.vec.data += a.mat.Inverse(freedofs=fes.FreeDofs()) * r
#Drawing Ust and Ust-Uin
Draw(uh,mesh,'uh')
Draw(uh-Uin,mesh,'utot')
```

So it’s holiday time and I found some time to write. My objective today is to present an idea on how to use the first and second Dahlquist barrier theorems to prove some theoretical limits of symplectic integrators.

In this first section we will focus our attention on numerical methods to solve the wave equation in one dimension with Cauchy initial conditions. What we mean by this is that we are searching for a function, at least , such that:

where , and are given and is not necessarily the derivative of . In particular we will call the previous equation with the homogeneous wave initial value problem (IVP), and we will begin by studying it.

**Note**

Before dealing directly with the wave equation it is interesting to briefly introduce the method of characteristics. This is a solution technique for first order hyperbolic partial differential equations (PDEs), but we will see that it can be applied to the wave equation as well. Let’s consider the advection equation:

we fix the point and define the function . If we define and we can compute:

this tells us that solving the ordinary differential equation (ODE) is equivalent to solving the advection equation. If we know that is constant, and therefore , which is equivalent to asking , and therefore:

Now if, instead of considering the advection equation alone, we consider the following IVP:

we have that the solution to the advection equation is given by the characteristic line equation:

this tells us as well that the solution to the advection equation is constant along a line, which we will call the **characteristic line**, whose equation is , for any .
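
As a small illustration (the function names and the choice of initial profile are mine, not from the report), the method of characteristics gives the solution of the advection equation essentially for free: we just evaluate the initial datum along the characteristic line.

```python
import numpy as np

def advect(u0, a, t, x):
    # The solution of u_t + a*u_x = 0 is constant along the
    # characteristic lines x - a*t = const, so u(x, t) = u0(x - a*t).
    return u0(x - a * t)

x = np.linspace(-1.0, 1.0, 201)
u0 = lambda s: np.exp(-s**2)        # a Gaussian initial profile
u = advect(u0, a=0.5, t=1.0, x=x)   # the profile rigidly shifted by a*t
```
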

Now let’s go back to the homogeneous wave IVP, and we can observe that the one dimensional wave equation can be decomposed as follow:

and therefore we know that the solution of the wave equation is also a solution of both the forward and backward advection equations. Therefore the characteristic lines of the wave equation are both and . Now our aim will be to compute , to prove that it is constantly null:

and if we substitute in the last equation we get that . Now we can integrate to obtain a nice expression for :

where and are two functions. Now let’s observe that since we have that , and it is also possible to write:

therefore combining what we have just seen we get the following result:

this last equation gives us an analytical solution to the wave equation and is known as the **d’Alembert formula**. Analogously, a formula for the inhomogeneous wave IVP can be obtained, provided that is at least :

Let us focus our attention on the d’Alembert formula: when and are “nice” functions the analytical solution can be computed easily by hand, but when the functions aren’t nice we can use a wide number of numerical integration techniques and quadrature formulas to compute the integral in the d’Alembert formula. For lack of a better name, we will here call this class of solvers **d’Alembert solvers**.

We will here show the results of implementing a d’Alembert solver that computes the integral in the d’Alembert formula using equi-spaced nodes and the trapezoid quadrature formula, but it is important to notice that far better results can be obtained using higher order Newton-Cotes methods, Gaussian quadrature, etc.
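
A minimal sketch of such a d’Alembert solver is below, assuming the standard form of the formula, u(x,t) = (f(x+ct) + f(x−ct))/2 + 1/(2c) ∫ g over [x−ct, x+ct]; the names are mine, not from the report’s code.

```python
import numpy as np

def dalembert(f, g, c, x, t, n=200):
    # d'Alembert's formula for u_tt = c^2 u_xx, u(x,0) = f, u_t(x,0) = g.
    # The integral of g over [x - c*t, x + c*t] is approximated with the
    # composite trapezoid rule on n equi-spaced nodes.
    s = np.linspace(x - c*t, x + c*t, n)
    h = s[1] - s[0]
    gs = g(s)
    integral = h * (0.5*gs[0] + gs[1:-1].sum() + 0.5*gs[-1])
    return 0.5*(f(x + c*t) + f(x - c*t)) + integral/(2*c)

# Sanity check: f = sin, g = 0 gives the standing wave sin(x)*cos(c*t).
u = dalembert(np.sin, lambda s: np.zeros_like(s), c=1.0, x=0.3, t=0.7)
```
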

We will here try to explain a different approach, known by the name of **time stepping**. Let’s define an equi-spaced spatial mesh ; the time stepping method consists in approximating numerically using a **stiffness matrix** , i.e.:

where is a discretisation of the according to the spatial approximation chosen. In this report we will only treat finite differences as the spatial approximation method; more information regarding finite differences can be found in the second part of this report. Now, once we observe that still depends upon time, i.e. , it is easy to see that the IVP equation becomes:

which is a typical numerical ODE problem of second order. One might think about splitting the second order ODE system into a system of two first order ODEs to be solved coupled, using methods such as Euler, Crank-Nicolson, Runge-Kutta, etc. This certainly is a valid idea that we will explore in the third part of our report.

Our aim here will be to address the problem of solving the IVP using second order ODE integration techniques. In particular this section focuses on Leapfrog integration. Let’s introduce, to begin with, an equi-spaced time mesh of time step and adopt the notation , to define another mesh of step such that . Now we are ready to define the Leapfrog integration scheme:

We can locate the Leapfrog integration on the mesh by observing that since then , and that the following Taylor expansions hold:

we get the following equation to describe the Leapfrog integration:

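In code, the scheme reduces to the familiar three-term recursion u⁽ⁿ⁺¹⁾ = 2u⁽ⁿ⁾ − u⁽ⁿ⁻¹⁾ + Δt² A u⁽ⁿ⁾ for u'' = A u. The sketch below (my own, with a Taylor-expansion start-up step) assumes this standard central-difference form.

```python
import numpy as np

def leapfrog(A, u0, v0, dt, steps):
    # Central-difference (Leapfrog) integration of u'' = A u.
    # Start-up step from a Taylor expansion: u^1 = u^0 + dt*v^0 + 0.5*dt^2*A u^0.
    u_prev = np.asarray(u0, dtype=float)
    v0 = np.asarray(v0, dtype=float)
    u = u_prev + dt*v0 + 0.5*dt**2*(A @ u_prev)
    for _ in range(steps - 1):
        u_prev, u = u, 2*u - u_prev + dt**2*(A @ u)
    return u

# Scalar sanity check: u'' = -u, u(0) = 1, u'(0) = 0 has solution cos(t).
A = np.array([[-1.0]])
uT = leapfrog(A, [1.0], [0.0], dt=1e-3, steps=1000)  # approximates cos(1)
```
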
Our aim here is to study the stability of the Leapfrog integration using the theory developed in B1. In particular we will view the Leapfrog integration, when solving the second order differential equation:

as a way of solving the system of differential equations:

where . We know the equations that characterize the Leapfrog method, and we can see the second equation as solving the second equation of the first order formulation, while the third equation solves the first equation of the first order formulation.

In particular it is important to notice that while the second equation of the first order formulation doesn’t present particular problems, the first one can be reformulated to obtain, component by component, a set of differential equations of the form .

The idea behind this is that if we suppose our stiffness matrix is diagonalizable, we can rewrite as:

where is the diagonalization of by the matrix , and this means that instead of the system of equations describing the first order formulation we can study the following formulation:

now we rewrite the Leapfrog scheme equations as a linear multi step method (more information can be found in B1):

supposing that the eigenvalues of are negative, we get that both linear multi step methods satisfy the root condition, since they have characteristic polynomials:

and both polynomials have roots in the unit disk.

Therefore the Leapfrog scheme is 0-stable, but what we are interested in checking now is whether it is also absolutely stable, as defined in B1, which is different from being -stable, as defined by Hairer-Lubich and also in B1.

To study the Leapfrog scheme we notice that the first equation describing the Leapfrog scheme can be obtained by applying the explicit Euler scheme to the problem:

and under suitable hypotheses on , which we are not going to present in detail here, for the solution not to explode we need that:

**Note**

Last but not least, we would like to observe that if we take as stiffness matrix the finite difference matrix in 1D (more information can be found in the second part of this report), we know that its eigenvalues are:

it is clear that lies in the negative portion of the complex plane, and the second requirement mentioned in the definition of -stability is tight if , which is equivalent to asking:

which turns out to be the Courant–Friedrichs–Lewy condition.
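
The eigenvalue claim can be checked numerically. The sketch below (my own construction, not from the report) compares the spectrum of the 1D Dirichlet finite difference matrix with the classical closed form λⱼ = −(4/h²) sin²(jπh/2); the largest magnitude, close to 4/h², is what drives the CFL restriction on the time step.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# 1D finite difference matrix with Dirichlet boundary conditions.
K = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2
numeric = np.sort(np.linalg.eigvalsh(K))
# Classical closed form for the eigenvalues of this tridiagonal matrix.
exact = np.sort(-(4.0/h**2) * np.sin(np.arange(1, n+1)*np.pi*h/2)**2)
gap = np.max(np.abs(numeric - exact))  # agreement up to round-off
```
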

In this section our aim is to present a wide class of methods, among which we can also find the Leapfrog integration scheme. The class of methods presented here is known as symplectic integrators, and they have the peculiar property of conserving some sort of numerical energy.

If we imagine that describes the motion of particles in space, we introduce the Hamiltonian of this system and Hamilton’s equations:

now we would like to numerically integrate the above mentioned system of equations; this means that the numerical solution of the above system will be associated to a perturbed Hamiltonian, which is conserved since solutions to Hamilton’s equations conserve .

We will here present the same approach as in B2, which is based upon the idea that we can split the Hamiltonian as follows:

now we introduce the vector and observe that Hamilton’s equations can be rewritten as:

where are the Poisson brackets. We now introduce a notation that will be useful later:

rewriting those equations as , we can easily find a solution to the ODE using the matrix exponential, since , which is a property that comes directly from the assumption that :

we now want to approximate the above mentioned operator as follows:

if the problem is quick and painless to solve; for , the coefficients we are searching for are and . In particular in B3 solutions up to are presented, while in B2 the Baker-Campbell-Hausdorff formula is used to compute and for any symmetric symplectic integrator.

If we perform a Taylor expansion of the exponentials involved in the previous equations we get that:

and since:

we end up with the following equations:

now if we substitute this recursively in the previous equations we get:

which in Lagrangian coordinates becomes:

and those equations define recursively a symplectic integrator. For example, if we take as coefficients we obtain the symplectic Euler integration, which is defined by the following equations:

Instead if we choose as coefficients we end up with the Verlet integration scheme, which is defined by the following equations:

Last but not least, we recover the Leapfrog integration if we choose .
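
A minimal sketch of the two schemes just mentioned, for a separable Hamiltonian H(p, q) = T(p) + V(q) (the harmonic-oscillator test case and all names are mine): the point to observe is that the numerical energy stays bounded over long times, rather than drifting.

```python
import numpy as np

def symplectic_euler(p, q, dt, dV, dT):
    # First order symplectic Euler: kick in p, then drift in q.
    p = p - dt * dV(q)
    q = q + dt * dT(p)
    return p, q

def verlet(p, q, dt, dV, dT):
    # Second order Verlet: half kick, full drift, half kick.
    p = p - 0.5*dt * dV(q)
    q = q + dt * dT(p)
    p = p - 0.5*dt * dV(q)
    return p, q

# Harmonic oscillator H(p, q) = p^2/2 + q^2/2, exact energy 0.5.
dV = lambda q: q
dT = lambda p: p
p, q = 0.0, 1.0
for _ in range(10_000):          # integrate up to t = 100 with dt = 0.01
    p, q = verlet(p, q, 0.01, dV, dT)
energy = 0.5*p**2 + 0.5*q**2     # remains close to 0.5 over long times
```
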

We will here call a symplectic integrator any integrator built with the technique presented above, which can also be found in B2 and B3.

Using the same argument proposed in the previous section, if we rewrite a symplectic scheme as a linear multi step scheme, it is possible to use Dahlquist’s second barrier to find out that no symplectic method can be -stable or -stable.

In particular, we now know how to express any symplectic integrator as a multi step linear method, therefore we can use some of the instruments developed for the stability of multi step linear methods with regard to symplectic integrators. The instruments we are speaking about are the first and the second Dahlquist barriers:

**First Dahlquist Barrier** There is no multi step linear method, consisting of steps, which has order of convergence if is odd and if is even.

**Second Dahlquist Barrier** There is no explicit multi step linear method which is -stable or -stable. Furthermore, there is no multi step linear method which is -stable and has order of convergence greater than .

In particular, we’d like to formulate the following theorem, based upon the first and the second Dahlquist barriers.

**Theorem** There is no symplectic integrator obtained by iteration of the Negri technique that has order of convergence if is odd and if is even. Furthermore, there is no symplectic integrator that is -stable and has order of convergence greater than .

**Proof** We will show here just a sketch of the proof, based on what we have seen previously:

- We reduce a symplectic integrator to a system of equations representing a multi step linear method, as we have done for the Leapfrog integration.
- We are justified in studying the stability of the symplectic integrator equation by equation, as we have proven in the Leapfrog integration paragraph.
- We apply the first and the second Dahlquist barrier results to each equation representing a multi step linear method, to prove the thesis mentioned above.

B1. Quarteroni, Alfio, Riccardo Sacco, and Fausto Saleri. *Matematica numerica*. Springer Science & Business Media, 2008.

B2. Yoshida, Haruo. “Construction of higher order symplectic integrators.” *Physics letters A* 150.5-7 (1990): 262-268.

B3. Negri, F. “Lie algebras and canonical integration.” *Department of Physics, University of Maryland, preprint* (1988).

Suppose that we have a symmetric matrix and our aim is to solve the linear problem to find the exact solution . One of the algorithms widely implemented to perform such a task is the conjugate gradient (CG) method. The standard formulation of the CG method is as follows:

- We define the initial step: .
- We compute the length of our next step: .
- We compute the approximated solution: .
- We compute the residual:
- Last but not least we update the search direction:
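
The steps above can be sketched as follows — a textbook implementation with the standard update formulas, since the exact expressions are elided above; the names are mine.

```python
import numpy as np

def cg(A, b, steps):
    # Conjugate gradient iteration for a symmetric positive definite A.
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    for _ in range(steps):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # length of the next step
        x = x + alpha * p              # approximate solution update
        r_new = r - alpha * Ap         # residual update
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new search direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b, steps=2)  # in exact arithmetic CG converges in at most n steps
```
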

The idea behind this method is to find the best approximation in the Krylov subspace of order , , that minimizes the quantity . A couple of interesting results for the CG method are the following:

**Theorem** **1**

The following statements hold when the CG method is applied to a symmetric matrix:

- .
- The residuals are orthogonal: .
- The search directions are conjugate:

**Theorem 2**

If we apply the CG method to a **positive definite** symmetric matrix, the element that minimizes is unique and the convergence with respect to:

is monotonic.

More details on the CG method and the theorems mentioned above can be found in *Numerical Linear Algebra* by Lloyd N. Trefethen. An interesting aspect is that even if Theorem 2 states that the error of the CG method in the norm induced by is monotonically decreasing, we can’t state the same for the residual . In the next sections we will try to use techniques developed in the field of optimal stopping to address the question of when it is convenient to stop our method in order to minimize the residual.

Let’s begin by addressing a very simple problem. Suppose that we can compute steps of the CG method and we have reached step . How can we decide if it would be convenient, in terms of residual, to compute the -th step? A straightforward answer would be that we should compute it if:

where is the random variable whose realization is , and we can use any norm of our choosing without altering much the meaning of the equation. Considering the fact that each step of the CG method is computed considering only the previous one, we can assume that is a Markov process. In light of the fact that is a Markov chain, we can replace the decision criterion with:

The idea of introducing is justified by the fact that the value of is unknown to the user at time , even if it can be exactly computed at the -th step. The idea is for the user to make an “educated guess” of the outcome of the CG method at step , and to decide, thanks to this educated guess, whether to proceed to compute . The “education” of our guess consists in the distribution that we assume has. We know that the CG method moves along a direction of length to minimize , and we need to find a distribution that correctly represents this behavior. To do so we start from a multivariate Gaussian in centered in :

Then we puncture the Gaussian by propagating it using the wave equation:

the solution of the above problem, which we will denote as , is still a probability density function under certain hypotheses that we will investigate later. Furthermore, it has the shape of a punctured Gaussian. Such a shape tells us that we have greater probability of finding around the -dimensional hole produced around . In particular, since we know that in the particular case of Gaussian pulses the wave front propagates as a sphere of radius , the density seems a fair distribution for , given that we have . We will remark later on how the spectral properties of , together with the choice of , can improve the way we build our distribution; in the same section we will remark that we can provide a general formula to compute . Now, if we decide to use the Euclidean norm to evaluate our residual, we have that is equivalent to:

This equation tells us when it is convenient to compute according to our “educated guess”.

Here we want to extend the explanation of why the distribution was chosen, together with some general remarks regarding it. We will start from a two dimensional view to ease our minds, or at least mine. We can imagine the CG method as moving from to along the direction with length . So we can start by assuming that is distributed around with radius . But clearly we want the density function of to be null near , since we know we are moving away from it. To achieve this result we begin from a Gaussian centered in :

then we propagate this distribution using the wave equation. In particular we are interested in finding the solution to the problem:

**Figure 1**, in the figure it is possible to observe how the wave equation propagates a Gaussian pulse.

We can see from Figure 1 that this procedure creates a hole in our normal distribution, and this hole expands with time. Furthermore, let’s consider the Green’s function associated with this problem:

where is the Heaviside function. We can easily see that the radius of the ”hole“ produced by propagating using the wave equation is , since:

We can as well compute explicitly the solution to the wave equation:

we consider only the absolute value of this function to obtain a function that is Lebesgue integrable and non-negative, i.e. it is a probability density function once we normalize it. In particular, we will call the probability density function obtained this way. We can do the same procedure for a Gaussian in , and we can compute analytically the solution to the wave propagation, as in *Wave Equation in Higher Dimensions, Lecture Notes Math 220A, Stanford*, to see that we obtain a Lebesgue integrable function. We will call the pdf obtained by normalizing the absolute value of the previously mentioned solution with , starting from a Gaussian centered in . Furthermore, even in higher dimensions we know that the wave front propagates at a distance from the mean of the Gaussian, and so we will have a hole of radius produced in the center of the Gaussian.

Last but not least, since we know the direction of search , we can choose so that it has eigenvector , and that this eigenvector is associated with the greatest eigenvalue of the matrix. This produces the Gaussian shown in Figure 2.

**Figure 2**, in the figure it is possible to observe a Gaussian that propagates in the direction of the CG search direction.

The last notation we will adopt is to indicate the probability density function obtained using the direction of search as above, of radius , but starting from a Gaussian centered in , propagated until .

We will here address the problem of finding the optimal stopping time for the CG method within a finite horizon .

Let’s consider again the Markov chain ; we can suppose, as we have done in our preliminary example, that . Here we have to deal with a time in-homogeneous Markov chain, therefore, to apply the results developed in the book *Optimal Stopping and Free-Boundary Problems* by Goran Peskir and Albert N. Shiryaev, we need to introduce a time homogeneous Markov chain: . Now, to find the optimal stopping time for the CG method, we will introduce a weak version of the result presented in the same book.

**Theorem 3**

Let’s consider the optimal stopping time problem:

Assuming that , where is our gain function, the following statements hold:

- The Wald-Bellman equation can be used to compute: for .
- The optimal stopping can be computed as: , where .

Then we can define the transition operator as follows:

In this case our gain function will be defined as follows:

which respects the hypothesis of the previous theorem, since the greatest value the gain function can attain is . Our transition operator becomes:

In particular, we can use the Wald-Bellman equation to compute with a technique known as backward induction (this is a common practice in dynamic programming):

and this allows us to build the set , whose infimum is the optimal stopping time we were searching for.
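
As a toy illustration of the backward induction, here is a finite-state stand-in for the CG setting (where the chain actually lives in a continuous state space); the transition matrix and gain vector below are invented for the example.

```python
import numpy as np

def wald_bellman(G, P, N):
    # Finite-horizon Wald-Bellman backward induction on a finite-state chain:
    # V_N = G, then V_n = max(G, P @ V_{n+1}) going backwards in time.
    V = G.copy()
    for _ in range(N):
        V = np.maximum(G, P @ V)
    # It is optimal to stop in the states where the value equals the gain.
    return V, np.isclose(V, G)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # toy transition matrix
G = np.array([1.0, 2.0])     # toy gain function
V, stop_now = wald_bellman(G, P, N=10)
```
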

We have shown here two ideas: the first one is presented in Section 2 and explains how to decide at step whether it is convenient to compute ; the second one provides a technique to compute the optimal stopping time for the CG method. These ideas are just an interesting exercise on optimal stopping times and a probabilistic approach to the stopping criteria of numerical linear algebra. This is because to compute the densities and we need to have and , which are the most complex operations in the CG method, .

An interesting approach to make the second idea computationally worthwhile would be to use a low-rank approximation of order . In this way the complexity of computing the approximations of and is only .

Last but not least, it is important to mention that if we would like to implement the aforementioned ideas, we could use a Monte Carlo integration technique to evaluate the integrals in the equations above, and a multi dimensional root-finding algorithm to compute .

Interesting aspects that are worth investigating are how to build the best possible low-rank approximation, and whether theories such as that of generalized Toeplitz sequences can give us useful information to build our density function in a more efficient way.