I’m following a very interesting course at KAUST titled Applied Mathematics. In this course some beautiful ideas are introduced, such as ODEs, variation of parameters, Green's functions, separation of variables for ODEs and PDEs, eigenvalue problems, etc. I had previously seen some of these concepts in courses I attended at the University of Pavia, in particular Analysis 2, Analysis 3 and Equations of Mathematical Physics, held respectively by Professor Gilardi, Professor Schimperna and Professor Toscani. In each of these courses ideas related to the ones presented in the class I’m now following were introduced, so I thought it would be a nice idea to collect some of the ideas presented in all those courses as a series of posts. In particular, I hope this might be a nice way to review the classwork materials and to get a broader perspective on the subject.
Ordinary Differential Equation
The first concept I’d like to introduce is that of an ordinary differential equation (ODE): given a function $F$, we search for a function $y$ such that the following equation is verified:

$$F\left(t, y(t), y'(t), \dots, y^{(n)}(t)\right) = 0 \tag{1}$$
where $y^{(k)}$ denotes the $k$-th derivative of the function $y$. We say that the ODE is expressed in general form if it is written as in (1), while we say that the ODE is expressed in normal form if it is written as:

$$y^{(n)}(t) = f\left(t, y(t), y'(t), \dots, y^{(n-1)}(t)\right)$$
Remark Let me make some remarks. First of all, functions are usually defined on an open interval $(a,b)$ rather than a closed interval $[a,b]$, since derivatives (in particular when we work with multivariable functions) are defined in a meaningful way on an open set. But we notice that for the derivative of $y$ at a point $t_0$ to exist we only need $t_0$ to be an accumulation point of the domain of $y$, and this is the case for every $t_0 \in [a,b]$, even when $t_0$ is equal to $a$ or $b$. Therefore we can actually work with functions defined on $[a,b]$ rather than $(a,b)$.
Remark My second remark is that since we are performing $n$ differentiations on the function $y$, we need $y$ to be $n$ times differentiable.
The general idea behind an ODE is to determine the function $y$ given how it varies with respect to its independent variable, here called $t$.
We will begin our discussion by studying a very simple ODE, i.e.

$$y'(t) = a\,y(t); \tag{3}$$
this is classified as a first-order autonomous linear differential equation. We say that the equation is of first order since in (3) we only have the first derivative of $y$. Furthermore, we say that (3) is autonomous because there is explicit dependence only on $y$ and its derivative, not on $t$. Last but not least, we say that an ODE is linear if there exist functions $a_0, \dots, a_n$ and $g$ such that:

$$\sum_{k=0}^{n} a_k(t)\,y^{(k)}(t) = g(t)$$
What happens if we try another ansatz, i.e. $y(t) = C\,e^{at}$?
In general we find out that for any $C \in \mathbb{R}$ the function $y(t) = C\,e^{at}$ is a solution of (3). This is the concept of a general solution: “A generic ODE such as (1) has infinitely many solutions, obtained by varying $n$ constants, where $n$ is the order of the ODE.” We are interested in solving (1) for a unique solution; in particular we will focus our attention on first-order ODEs. If we go back to the general solution of (3) and we fix the value of $y$ at $t = 0$, then we can find a value for $C$, i.e. $C = y(0)$.
Definition – Initial Value Problem We will call the couple made by a first-order ODE and the requirement that the solution takes a certain value $y_0$ at a given time an initial value problem (IVP). In the case of (3) the initial value problem has the form:

$$\begin{cases} y'(t) = a\,y(t) \\ y(0) = y_0 \end{cases} \tag{4}$$
This definition of the IVP makes sense since $y$ is defined also at the endpoint $t = 0$, by virtue of one of the previous remarks.
Example – Growth and Decay An initial value problem like the one defined above models phenomena involving population growth or decay. In particular, let $y(t)$ be the size of a bacterial population on a Petri dish at time $t$ and $y_0$ the initial population size; we will work under the assumption that the bacterial population has a constant relative rate of change $k$. Then the IVP becomes,

$$\begin{cases} y'(t) = k\,y(t) \\ y(0) = y_0 \end{cases}$$
in particular, if $k > 0$ then the IVP models the growth of the bacterial population, while if $k < 0$ the IVP models the decay of the bacterial population.
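To make the growth model concrete, here is a minimal Python sketch (the solver and the constants are my own choices for illustration) that compares the exact solution $y(t) = y_0\,e^{kt}$ with a forward Euler approximation of the IVP:

```python
import math

def euler(f, y0, t_end, n):
    """Integrate y' = f(t, y), y(0) = y0 with n forward-Euler steps."""
    t, y, h = 0.0, y0, t_end / n
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Growth: y' = k*y with k > 0; the exact solution is y(t) = y0 * exp(k*t).
k, y0, T = 0.5, 100.0, 4.0
exact = y0 * math.exp(k * T)
approx = euler(lambda t, y: k * y, y0, T, 100_000)
print(exact, approx)  # the two values agree to several significant digits
```

With $k < 0$ the same code shows the exponential decay of the population instead.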
A Bit Of Theory
I’d like to introduce in this section a bit of theory regarding the existence and uniqueness of a solution to an IVP. The usual approach is to introduce a solution operator thanks to Volterra’s lemma and prove that this operator is a contraction; the contraction theorem then produces a “small” solution, and later on small solutions are used to create a maximal solution, which is extended to the whole interval. Now, this approach requires some knowledge of Banach spaces and operators on them. For this reason, I’d like to present here a more constructive proof of the existence and uniqueness result; this is the approach presented by Professor Gilardi in “Analisi matematica di base”. We begin by introducing Volterra’s lemma.
Lemma – Volterra Let $f$ be a continuous function. Then a function $y \in C^1$ is a solution of the IVP

$$\begin{cases} y'(t) = f(t, y(t)) \\ y(0) = y_0 \end{cases}$$

if and only if $y$ is continuous and satisfies the integral equation:

$$y(t) = y_0 + \int_0^t f(s, y(s))\,ds \tag{6}$$
Proof Let $y$ be a solution to the IVP; then, thanks to the fundamental theorem of calculus:

$$y(t) - y(0) = \int_0^t y'(s)\,ds = \int_0^t f(s, y(s))\,ds.$$
Vice versa, let $y$ be defined as in (6); then by the theorem on the derivative of the integral function we have:

$$y'(t) = f(t, y(t));$$
this means that, since $f$ is continuous, $y'$ is continuous and hence $y$ is $C^1$. Furthermore, if we take $t$ equal to $0$ in (6), we obtain $y(0) = y_0$.
Another lemma that we will use to prove the uniqueness of the solution to the IVP is Gronwall’s lemma.

Lemma – Gronwall Let $w \colon [0,T] \to [0,+\infty)$ be a continuous function and $a, b \geq 0$ constants such that:

$$w(t) \leq a + b \int_0^t w(s)\,ds \tag{10}$$

then $w(t) \leq a\,e^{bt}$ for all $t \in [0,T]$.
Proof If $b = 0$ there is nothing to prove, so we only consider the case $b > 0$. We define $\varphi(t) = \int_0^t w(s)\,ds$ and observe that from (10) we obtain:

$$\varphi'(t) = w(t) \leq a + b\,\varphi(t), \qquad \text{i.e.} \qquad \frac{d}{dt}\left(e^{-bt}\varphi(t)\right) \leq a\,e^{-bt}.$$

Therefore, integrating, we get the following inequality,

$$e^{-bt}\varphi(t) \leq \frac{a}{b}\left(1 - e^{-bt}\right);$$

we multiply by $b\,e^{bt}$ to obtain

$$b\,\varphi(t) \leq a\left(e^{bt} - 1\right);$$

this combined with (10) yields $w(t) \leq a + b\,\varphi(t) \leq a\,e^{bt}$ for all $t \in [0,T]$. We are now ready to prove the theorem of existence and uniqueness of a “big” solution.
Theorem – Existence and Uniqueness of “Big” Solution to the IVP
Let $T > 0$ and $f \colon [0,T] \times \mathbb{R} \to \mathbb{R}$ a continuous function such that there exists $L \geq 0$ verifying,

$$|f(t, y_1) - f(t, y_2)| \leq L\,|y_1 - y_2| \qquad \text{for all } t \in [0,T],\ y_1, y_2 \in \mathbb{R};$$
then for all $y_0 \in \mathbb{R}$ there exists one and only one function $y \in C^1([0,T])$ that verifies the following IVP:

$$\begin{cases} y'(t) = f(t, y(t)) \\ y(0) = y_0 \end{cases}$$
Proof Thanks to Volterra’s lemma we just need to prove that there exists a unique solution to the integral equation (6). We begin by proving uniqueness: we assume $y_1, y_2$ are solutions of the integral equation (6), subtract them and use the integral equation,

$$y_1(t) - y_2(t) = \int_0^t \left[f(s, y_1(s)) - f(s, y_2(s))\right] ds;$$
using the Lipschitz continuity in the second variable we get that,

$$|y_1(t) - y_2(t)| \leq L \int_0^t |y_1(s) - y_2(s)|\,ds;$$
now we observe that the function $w$ defined by $w(t) = |y_1(t) - y_2(t)|$ verifies the inequality (10) with $a = 0$ and $b = L$, and therefore using Gronwall’s lemma we have $w \equiv 0$, i.e. $y_1 = y_2$, since $a = 0$.
Now we prove the existence of a solution to the integral equation; to do this we build a sequence of functions $(u_k)$ as follows,

$$u_{k+1}(t) = y_0 + \int_0^t f(s, u_k(s))\,ds,$$
where $u_0(t) = y_0$ for all $t$. Since $u_0$ is well defined, the integral in the above expression is well defined; this implies that $u_1$ is well defined, and therefore the integral defining $u_2$ is well defined too, and so on. Now we prove the following points:
- We prove that the sequence $(u_k)$ is convergent; in particular we define $u(t) = \lim_{k \to \infty} u_k(t)$.
- The function $u$ defined above is continuous.
- Last, the function $u$ defined above is a solution of (6).
To prove point 1, we observe that we can write $u_k$ as a telescoping sum:

$$u_k = u_0 + \sum_{j=1}^{k} v_j,$$
where $v_j = u_j - u_{j-1}$. This means that we can limit ourselves to proving the convergence of the series $\sum_j v_j$. We use the Lipschitz continuity in the second variable and the definition of $u_j$ to show that,

$$|v_{j+1}(t)| \leq L \int_0^t |v_j(s)|\,ds. \tag{25}$$
We fix $T' \leq T$ and we limit our proof to the interval $[0,T']$, which is not restrictive since $T'$ can be chosen freely. Now $v_1$ is a continuous function on a compact set, therefore $|v_1(t)| \leq M$ for all $t \in [0,T']$; substituting it in (25) we obtain:

$$|v_{j+1}(t)| \leq M\,\frac{(Lt)^j}{j!} \leq M\,\frac{(LT')^j}{j!}. \tag{26}$$
We observe that, thanks to the criterion of absolute convergence, if $\sum_j \sup_{[0,T']} |v_j|$ converges then $\sum_j v_j$ converges as well, and since:

$$\sum_{j=0}^{\infty} M\,\frac{(LT')^j}{j!} = M\,e^{LT'} < +\infty,$$
the series converges, and we can define the function $u$ as follows:

$$u(t) = u_0 + \sum_{j=1}^{\infty} v_j(t) = \lim_{k \to \infty} u_k(t).$$
Now we proceed to prove point 2; to do this we will focus once again our attention on the domain $[0,T']$ with $T' \leq T$. We notice that by the definition of $u$ we have:

$$u(t) - u_k(t) = \sum_{j=k+1}^{\infty} v_j(t);$$
using (26) we have the following estimate for the error,

$$\sup_{[0,T']} |u - u_k| \leq M \sum_{j=k}^{\infty} \frac{(LT')^j}{j!};$$
we notice that this tail tends to zero, so the convergence $u_k \to u$ is uniform on $[0,T']$, and therefore we can easily prove the continuity of $u$. In particular we can fix $\varepsilon > 0$ and find a $k$ such that $\sup_{[0,T']} |u - u_k| \leq \varepsilon$; now we fix $t, t' \in [0,T']$ and notice that,

$$|u(t) - u(t')| \leq |u(t) - u_k(t)| + |u_k(t) - u_k(t')| + |u_k(t') - u(t')| \leq 2\varepsilon + |u_k(t) - u_k(t')|;$$
since $u_k$ is continuous (because $f$ is continuous by hypothesis), also $u$ is continuous.
We are left to check point 3, i.e. verify (6); once again we focus our attention on the domain $[0,T']$, where $T'$ can be fixed freely. We define the difference between $u$ and the right-hand side of the Volterra equation:

$$d(t) = u(t) - y_0 - \int_0^t f(s, u(s))\,ds;$$
since we already noticed that $u_k \to u$ uniformly on $[0,T']$, passing to the limit in the recursive definition of $u_{k+1}$ it is clear that $d(t) = 0$ for all $t \in [0,T']$.
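The construction used in the proof can actually be carried out numerically. The following Python sketch (the function names and the discretization are my own) computes the Picard iterates $u_{k+1}(t) = y_0 + \int_0^t f(s, u_k(s))\,ds$ on a uniform grid; for $f(t, y) = y$ and $y_0 = 1$ the iterates converge to $e^t$:

```python
import math

def picard(f, y0, T, iterations, n=1000):
    """Compute Picard iterates u_{k+1}(t) = y0 + integral_0^t f(s, u_k(s)) ds
    on a uniform grid of n+1 points, using the trapezoidal rule."""
    h = T / n
    ts = [i * h for i in range(n + 1)]
    u = [y0] * (n + 1)                  # u_0 is the constant function y0
    for _ in range(iterations):
        vals = [f(t, ui) for t, ui in zip(ts, u)]
        new, acc = [y0], 0.0
        for i in range(1, n + 1):       # cumulative trapezoidal integral
            acc += 0.5 * h * (vals[i - 1] + vals[i])
            new.append(y0 + acc)
        u = new
    return ts, u

# For f(t, y) = y and y0 = 1 the iterates are the Taylor partial sums of e^t.
ts, u = picard(lambda t, y: y, 1.0, 1.0, iterations=20)
print(u[-1], math.e)  # u(1) is close to e
```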
Definition – Separable ODE We say that an ordinary differential equation is separable if, when it is expressed in normal form, it is written as:

$$y'(t) = g(t)\,h(y(t)), \tag{36}$$
where $h$ and $g$ are two continuous functions defined on the domain of the ODE; this hypothesis grants us the existence of antiderivatives for both $h$ and $g$. More details can be found in G. Gilardi, Analisi Matematica di Base.
Solving a separable ODE is, in theory, an easy matter; in fact from (36) we obtain,

$$\frac{y'(t)}{h(y(t))} = g(t);$$
integrating both sides and simplifying (with the change of variable $y = y(t)$ on the left) we obtain:

$$\int \frac{dy}{h(y)} = \int g(t)\,dt; \tag{38}$$
now if we carry out the integration in (38) we obtain a family of implicit solutions of the form:

$$H(y) = G(t) + C,$$
where $H$ and $G$ are respectively the antiderivatives of $1/h$ and $g$.
In case the separable ODE is part of an initial value problem, the constant $C$ can be found thanks to the condition of the initial value problem.
Since this is the first time I speak about implicit solutions, I’d like to show an example of the separation of variables technique, which I borrowed from the book Advanced Engineering Mathematics by D. G. Zill.
Example Let us consider the following initial value problem,
we can solve this by separation of variables; in fact, using (38) we obtain:
This tells us that the implicit solutions to the ODE are circles centered at the origin. As mentioned before, the second condition of the IVP is used to compute the value of the constant $C$,
So the solution of the IVP is a circle centered at the origin, whose radius is determined by the initial condition.
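For a quick numerical check: an ODE whose implicit solutions are circles centered at the origin is $y' = -x/y$ (I use it here for illustration; the details may differ from the equation above). Separation of variables gives $y\,dy = -x\,dx$, i.e. $x^2 + y^2 = C$, so the quantity $x^2 + y^2$ should stay constant along any numerical trajectory:

```python
def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = -x/y separates as y dy = -x dx, giving x^2 + y^2 = C.
f = lambda x, y: -x / y
x, y = 0.0, 5.0                 # start on the circle of radius 5
h, C = 0.001, x * x + y * y
for _ in range(3000):           # integrate from x = 0 to x = 3
    y = rk4_step(f, x, y, h)
    x += h
print(x * x + y * y)            # stays ~ 25 along the trajectory
```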
Remark What happens if $h$ takes the value $0$ at some point $\bar{y}$? Well, the integrals in equation (38) would lose meaning, because the integrand $1/h$ is not defined at the point $\bar{y}$. Now we notice that if $h(\bar{y}) = 0$ then the constant function $y(t) = \bar{y}$ satisfies $y' = 0 = g(t)\,h(\bar{y})$, and therefore it is a solution of the ODE, even though it may not appear in the family of implicit solutions found above.
Example – Logistic Growth Model Another nice example of the technique of separation of variables is the logistic growth model. This ODE models the growth of a population (such as a bacterial population on a Petri dish) under the following assumptions:
- If the population is small, the growth rate is linear with respect to the population size. In particular, we assume the coefficient of linearity is $k$.
- If the population is large enough, the growth rate becomes negative; in particular, this occurs when the population size is greater than a threshold $M$.
We usually assume $M$ is equal to $1$; in this way $y$ represents the portion of the ideal maximum population. Under these assumptions we get the following ODE,

$$y'(t) = k\,y(t)\left(1 - y(t)\right);$$
we split the two sides and divide by $y(1-y)$ to obtain the equation,

$$\frac{dy}{y(1-y)} = k\,dt;$$
we observe that $\frac{1}{y(1-y)} = \frac{1}{y} + \frac{1}{1-y}$ and integrate both sides:

$$\ln|y| - \ln|1-y| = kt + C.$$
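Carrying the integration through and solving for $y$ yields the explicit solution $y(t) = \frac{y_0\,e^{kt}}{1 - y_0 + y_0\,e^{kt}}$ (a standard computation; the constants in the sketch below are arbitrary choices of mine). As a sanity check, this Python sketch compares the closed form with a Runge–Kutta integration of the logistic ODE and shows the population saturating at $M = 1$:

```python
import math

def logistic_exact(y0, k, t):
    """Closed-form solution of y' = k*y*(1 - y), y(0) = y0 (with M = 1)."""
    e = math.exp(k * t)
    return y0 * e / (1.0 - y0 + y0 * e)

def rk4(f, y0, T, n):
    """Classical Runge-Kutta integration of y' = f(t, y) on [0, T]."""
    t, y, h = 0.0, y0, T / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

k, y0, T = 1.5, 0.05, 10.0
print(logistic_exact(y0, k, T))                        # close to 1: saturation
print(rk4(lambda t, y: k * y * (1 - y), y0, T, 2000))  # matches the exact value
```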
I’d like now to introduce a very nice tool for solving first-order ODEs; this method is named the integrating factor method. Let us consider a first-order linear ODE written in normal form,

$$y'(t) + P(t)\,y(t) = g(t);$$
now we introduce the integrating factor associated to the ODE, defined as follows:

$$\mu(t) = e^{\int P(t)\,dt};$$
we multiply both sides by $\mu(t)$, recognize the left-hand side as $\left(\mu(t)\,y(t)\right)'$ and then integrate,

$$\mu(t)\,y(t) = \int \mu(t)\,g(t)\,dt + C;$$
this leads to the solution, which is given by:

$$y(t) = \frac{1}{\mu(t)}\left(\int \mu(t)\,g(t)\,dt + C\right). \tag{50}$$
We can also notice that differentiating (50) and using the product rule, i.e. $\left(\mu y\right)' = \mu' y + \mu y'$, together with $\mu'(t) = P(t)\,\mu(t)$, we obtain the following equation:

$$\mu(t)\,y'(t) + P(t)\,\mu(t)\,y(t) = \mu(t)\,g(t);$$
if we divide by $\mu(t)$ we obtain the original ordinary differential equation,

$$y'(t) + P(t)\,y(t) = g(t).$$
Example We now consider the following ordinary differential equation,
to solve this ODE we use the integrating factor method; we begin by computing the integrating factor:
now we multiply the ODE by the integrating factor to obtain an easy-to-solve ODE,
now we integrate both sides to find an explicit expression for $y$:
The above example is borrowed from the book Advanced Engineering Mathematics by D. G. Zill.
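The integrating factor recipe translates directly into code. Below is a Python sketch of the formula $y(t) = \mu(t)^{-1}\left(y_0 + \int_0^t \mu(s)\,g(s)\,ds\right)$, with both integrals computed numerically, applied to a hypothetical linear ODE $y' + 2y = t$, $y(0) = 0$ of my own choosing (not the book's example), whose exact solution is $y(t) = t/2 - 1/4 + e^{-2t}/4$:

```python
import math

def solve_linear(P, g, y0, t, n=20000):
    """Integrating-factor solution of y' + P(t) y = g(t), y(0) = y0:
       y(t) = (y0 + integral_0^t mu(s) g(s) ds) / mu(t),
       with mu(t) = exp(integral_0^t P(s) ds).
       Both integrals use the trapezoidal rule on n subintervals."""
    h = t / n
    iP, iMg = 0.0, 0.0                      # running integrals
    prev_P = P(0.0)
    prev_Mg = math.exp(0.0) * g(0.0)        # mu(0) * g(0)
    for i in range(1, n + 1):
        s = i * h
        iP += 0.5 * h * (prev_P + P(s))     # integral of P up to s
        mu = math.exp(iP)
        cur_Mg = mu * g(s)
        iMg += 0.5 * h * (prev_Mg + cur_Mg) # integral of mu*g up to s
        prev_P, prev_Mg = P(s), cur_Mg
    return (y0 + iMg) / math.exp(iP)

# Hypothetical example: y' + 2y = t, y(0) = 0.
t = 1.0
exact = t / 2 - 0.25 + 0.25 * math.exp(-2 * t)
print(solve_linear(lambda s: 2.0, lambda s: s, 0.0, t), exact)  # they agree
```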
In this section I’d like to introduce a particular class of first-order differential equations: exact ODEs.
Definition – Exact ODE Given two functions $M(x,y)$ and $N(x,y)$, the differential equation represented by the differential form,

$$M(x,y)\,dx + N(x,y)\,dy = 0,$$
is said to be an exact ODE if and only if there exists a function $F(x,y)$ such that:

$$\frac{\partial F}{\partial x} = M, \qquad \frac{\partial F}{\partial y} = N.$$
Remark The differential form above is associated with the ordinary differential equation $M(x,y) + N(x,y)\,y' = 0$, which can be expressed in normal form as:

$$y' = -\frac{M(x,y)}{N(x,y)}.$$
There is an easy criterion to check whether an ODE is exact, based on Schwarz’s theorem, which states that $\frac{\partial^2 F}{\partial x\,\partial y} = \frac{\partial^2 F}{\partial y\,\partial x}$ for a $C^2$ function $F$. Therefore, given an exact ODE, the following identity must be verified:

$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.$$
Exact ODEs can be solved easily thanks to the following observations. Since $\frac{\partial F}{\partial y} = N$, integrating with respect to $y$ we get $F(x,y) = \int N(x,y)\,dy + h(x)$; imposing $\frac{\partial F}{\partial x} = M$ we obtain an expression for $h'(x)$:
now we are left with integrating $h'(x)$ to find an implicit solution $F(x,y) = C$ of the ordinary differential equation.
Example In this example I’d like to solve the following ODE,
so we multiply by $dx$ and then split to obtain a differential form,
now we recognize that this is an exact differential equation,
This tells us that there exists a function $F$ such that $\frac{\partial F}{\partial x} = M$ and $\frac{\partial F}{\partial y} = N$, and therefore,
now we notice that, imposing the second of the two conditions on $F$, we can determine the remaining function of integration, and therefore,
This tells us that the solutions of the ODE are given by the implicit equations $F(x,y) = C$ for any $C \in \mathbb{R}$,
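To close, here is a small Python check of the exactness criterion and of the implicit solution, on an assumed example $2xy\,dx + (x^2 - 1)\,dy = 0$ chosen by me for illustration (it need not coincide with the ODE above). Here $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} = 2x$, and the potential $F(x,y) = x^2 y - y$ must stay constant along solutions:

```python
def M(x, y): return 2 * x * y
def N(x, y): return x * x - 1

# Exactness check: dM/dy should equal dN/dx (here both are 2x).
eps = 1e-6
x0, y0 = 2.0, 3.0
dM_dy = (M(x0, y0 + eps) - M(x0, y0 - eps)) / (2 * eps)
dN_dx = (N(x0 + eps, y0) - N(x0 - eps, y0)) / (2 * eps)
print(dM_dy, dN_dx)   # both ~ 4.0 at (2, 3), so the form is exact

# The potential F(x, y) = x^2 y - y must stay constant along solutions
# of y' = -M/N.  Verify with a short Runge-Kutta integration.
def F(x, y): return x * x * y - y
f = lambda x, y: -M(x, y) / N(x, y)
x, y, h = x0, y0, 0.001
for _ in range(1000):  # integrate from x = 2 to x = 3
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
print(F(x, y))  # ~ F(2, 3) = 9 along the whole trajectory
```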
In the next post I’d like to introduce some topics related to the qualitative study of dynamical systems, such as sources, sinks and the phase line. Further, I’d like to introduce notions related to the asymptotic behaviour of a dynamical system, such as the idea of stable solutions. Last, by studying some modifications of the logistic model equation, I’d like to introduce the concepts of bifurcations and the Poincaré map.