Applied Mathematics – Introduction

I’m following a very interesting course at KAUST titled Applied Mathematics. In this course some beautiful ideas are introduced, such as ODEs, variation of parameters, Green’s functions, separation of variables for ODEs, PDEs, eigenvalue problems, etc. I had previously seen some of these concepts in courses I attended at the University of Pavia, in particular Analysis 2, Analysis 3 and Equations of Mathematical Physics, held respectively by Professor Gilardi, Professor Schimperna and Professor Toscani. In each of these courses ideas related to the ones presented in the class I’m now following were introduced, so I thought it would be a nice idea to collect some of the ideas presented in all those courses as a series of posts. In particular, I hope this might be a nice way to review the classwork material and to get a broader perspective on the subject.

Ordinary Differential Equation

The first concept I’d like to introduce is that of an ordinary differential equation (ODE). In particular, given a function f:(a,b)\times \mathbb{R}^{n+1} \to \mathbb{R}, we search for a function y:(a,b)\to\mathbb{R} such that the following equation is verified:

(1)   \begin{equation*} f(t,y,y^{(1)},\dots,y^{(n)}) = 0  \end{equation*}

where y^{(k)}:(a,b)\to\mathbb{R} is the k-th derivative of the function y(t). We say that the ODE is expressed in general form if it is written as in (1), while we say that the ODE is expressed in normal form if it is written as:

(2)   \begin{equation*} y^{(n)} = f(t,y,\dots, y^{(n-1)}) \end{equation*}

Remark Let me make some remarks. First of all, the functions are usually defined on (a,b) instead of [a,b], since derivatives (in particular when we work with multivariable functions) are naturally defined on an open set. But notice that for the derivative at x_0 to exist we only need the difference quotient \frac{f(x_0+h)-f(x_0)}{h} to have a limit as h\to 0, i.e. we need 0 to be an accumulation point of the set of admissible increments \{h : x_0+h \in [a,b]\}. This is the case even when x_0 is equal to a or b (the limit is then one-sided). Therefore we can actually work with functions defined on [a,b] rather than (a,b).

Remark My second remark is that since we perform n differentiations on the function y, we need y\in C^n\big( (a,b),\mathbb{R} \big).

The general idea behind an ODE is to determine the function y given how it varies with respect to its independent variable, here called t.

We will begin our discussion by studying a very simple ODE, i.e.

(3)   \begin{equation*} y'(t) = ay(t)  \end{equation*}

this is classified as a first-order autonomous linear differential equation. We say that the equation is of first order since in (3) we only have the first derivative of y. Furthermore, we say that (3) is autonomous because there is explicit dependence only on y(t) and its derivative, not on t itself. Last but not least, we say that an ODE such as (1) is linear if there exist functions a_k:(a,b)\to \mathbb{R} and b:(a,b)\to \mathbb{R} such that:

    \[f(t,y,y^{(1)},\dots,y^{(n)}) = b(t) + \sum_{k=0}^n a_k(t)y^{(k)}(t)\]

We solve (3) beginning with an ansatz, i.e. y(t) = e^{at} and we check that y(t) verifies (3):

    \[y'(t) = \frac{d}{dt}\bigg( e^{at} \bigg) = ae^{at} = ay(t)\]

What happens if we try another ansatz, e.g. y(t) = 2e^{at}?

    \[y'(t) =\frac{d}{dt}\bigg( 2e^{at} \bigg) = 2ae^{at} = ay(t)\]

In general we find that for any C\in\mathbb{R} the function y(t)=Ce^{at} is a solution of (3). This is the concept of general solution: “a generic ODE such as (1) has infinitely many solutions, obtained by varying n constants, where n is the order of the ODE.” We are interested in solving (1) for a unique solution; in particular we will focus our attention on first-order ODEs. If we go back to the general solution of (3) and fix the value of y at t=0, then we can find a value for C, i.e.

    \[y(t) = y(0)e^{at}\]
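As a quick sanity check, the computation above can be reproduced symbolically; here’s a minimal Python sketch using the sympy library (the variable names are of course just illustrative):

    import sympy as sp

    t, a, C = sp.symbols('t a C')
    y = sp.Function('y')

    # the ODE y'(t) = a*y(t)
    ode = sp.Eq(y(t).diff(t), a * y(t))

    # dsolve returns the general solution y(t) = C1*exp(a*t)
    print(sp.dsolve(ode, y(t)))

    # verify directly that C*exp(a*t) satisfies the equation
    candidate = C * sp.exp(a * t)
    assert sp.simplify(candidate.diff(t) - a * candidate) == 0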

Definition – Initial Value Problem We will call the pair made of a first-order ODE and the requirement that the solution takes a certain value at a given t_0 an initial value problem (IVP). For a first-order ODE in normal form the initial value problem takes the shape:

(4)   \begin{equation*}\begin{cases} y'(t) = f(t,y(t)) \\ y(0) = C \end{cases}\end{equation*}

This definition of the IVP makes sense since y(t) is defined also at a and b, in virtue of one of the previous remarks.

Example – Growth and Decay An initial value problem like the one defined above models phenomena involving population growth or decay. In particular, let P(t) be the size of a bacterial population on a Petri dish at time t and P_0 the initial population size; we will work under the assumption that the bacterial population has a constant per-capita rate of change k. Then the IVP becomes,

(5)   \begin{equation*}\begin{cases}\frac{dP}{dt} = kP(t) \\ P(0)=P_0\end{cases}\end{equation*}

in particular if k>0 then the IVP models the growth of the bacterial population while if k<0 the IVP models the decay of the bacterial population.
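To visualize the two regimes, a minimal Python sketch along these lines should do (numpy and matplotlib assumed available; P_0 and the values of k are arbitrary illustrative choices):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 5, 200)
    P0 = 100.0  # initial population size, arbitrary

    # exact solution of the IVP: P(t) = P0 * exp(k*t)
    for k in (0.5, -0.5):
        plt.plot(t, P0 * np.exp(k * t), label=f'k = {k}')

    plt.xlabel('t')
    plt.ylabel('P(t)')
    plt.legend()
    plt.show()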


A Bit Of Theory

I’d like to introduce in this section a bit of theory regarding existence and uniqueness of a solution to an IVP. The usual approach would be to introduce a solution operator thanks to Volterra’s lemma and prove that this operator is a contraction; the contraction theorem then produces a “small” solution, and small solutions are glued into a maximal solution, which is extended to the whole interval. Now, this approach requires some knowledge of Banach spaces and operators between them. For this reason, I’d like to present here a more constructive proof of the existence and uniqueness result; this is the approach presented by Professor Gilardi in “Analisi matematica di base”. We begin by introducing Volterra’s lemma.

Lemma – Volterra Let T\in (0,+\infty] and f:[0,T)\times \mathbb{R}\to\mathbb{R} be a continuous function. Then a function y:[0,T)\to\mathbb{R} is a C^1 solution of (4) if and only if it is continuous and satisfies the integral equation:

(6)   \begin{equation*} y(t) = C + \int_{0}^{t} f(\xi,y(\xi)) d\xi. \end{equation*}

Proof Let y be a C^1 solution to (4), then thanks to the fundamental theorem of calculus:

(7)   \begin{equation*} y(t) - y(0) = \int_{0}^{t} y'(\xi) d\xi = \int_{0}^{t} f(\xi,y(\xi)) d\xi \end{equation*}

and therefore:

(8)   \begin{equation*} y(t) = C + \int_{0}^{t} f(\xi,y(\xi)) d\xi. \end{equation*}

Conversely, let y be continuous and defined as in (6); then by the theorem on the derivative of the integral function we have:

(9)   \begin{equation*} y'(t) = \frac{d}{dt} \int_{0}^t f(\xi,y(\xi)) d\xi = f(t,y(t))\end{equation*}

this means that, since f is continuous, y is C^1. Furthermore, if we take t equal to 0 in (6) we obtain y(0)=C.

Another lemma that we will use, to prove the uniqueness of the solution to the IVP, is Gronwall’s lemma.

Lemma – Gronwall Given a continuous function \omega: [0,T) \to \mathbb{R}, if there exist L,M \geq 0 such that

(10)   \begin{equation*} \omega(t) \leq M + L \int^{t}_{0} \omega(\xi) d\xi  \end{equation*}

then \omega(t) \leq Me^{Lt} for all t\in [0,T).

Proof If L=0 there is nothing to prove, so we only consider the case L>0. We observe that from (10) we obtain:

(11)   \begin{equation*} \frac{d}{dt}\bigg( e^{-Lt}\int_0^t \omega(\xi)d\xi\bigg) = -Le^{-Lt}\int_0^t \omega(\xi)d\xi + e^{-Lt}\omega(t) \end{equation*}

(12)   \begin{equation*} = e^{-Lt} \bigg( \omega(t) - L\int^t_{0} \omega(\xi)d\xi \bigg) \leq Me^{-Lt} \end{equation*}

Therefore integrating we get the following inequality,

(13)   \begin{equation*} e^{-Lt} \int_0^t \omega(\xi) d\xi \leq \int_0^t Me^{-L\xi} d\xi = \frac{M}{L} - \frac{M}{L}e^{-Lt} \end{equation*}

we multiply by Le^{Lt} and add M to obtain

(14)   \begin{equation*} M+L\int^t_0 \omega(\xi) d\xi \leq  Me^{Lt}\end{equation*}

this combined with (10) yields \omega(t) \leq Me^{Lt} for all t\in [0,T).

We are now ready to prove the theorem of existence and uniqueness of a “big” solution.

Theorem – Existence and Uniqueness of “Big” Solution to the IVP

Let T\in (0,+\infty] and f:[0,T)\times \mathbb{R}\to \mathbb{R} a continuous function such that there exists L>0 verifying,

(15)   \begin{equation*} |f(t,y)-f(t,z)| \leq L|y-z| \qquad \forall t \in [0,T) \text{ and } \forall y,z \in \mathbb{R} \end{equation*}

then for all u_0\in \mathbb{R} there exists one and only one C^1 function u:[0,T)\to \mathbb{R} that verifies the following IVP:

(16)   \begin{equation*}\begin{cases}u'(t)=f(t,u(t))\qquad \forall t \in [0,T) \\ u(0)=u_0\end{cases}\end{equation*}

Proof Thanks to Volterra’s Lemma we just need to prove that there exists a unique solution to the integral equation (6). We begin by proving uniqueness; in particular we assume u_1, u_2:[0,T)\to\mathbb{R} are both solutions of the integral equation (6). We subtract u_2 from u_1 and use the integral equation,

(17)   \begin{equation*}|u_1(t)-u_2(t)|=\Bigg| \int_0^t f(\xi,u_1(\xi)) d\xi - \int_0^t f(\xi,u_2(\xi))\;d\xi\;\Bigg|  \end{equation*}

(18)   \begin{equation*}\Bigg| \int_0^t f(\xi,u_1(\xi)) d\xi - \int_0^t f(\xi,u_2(\xi)) \; d\xi\;\Bigg| \leq \; \int_0^t |f(\xi,u_1(\xi))-f(\xi,u_2(\xi))|\; d\xi \end{equation*}

using the Lipschitz continuity in the second variable we get that,

(19)   \begin{equation*}|u_1(t)-u_2(t)|\leq L\int^t_0 |u_1(\xi)-u_2(\xi)|\; d\xi \end{equation*}

now we observe that the function defined by \delta = u_1-u_2 verifies the inequality |\delta(t)|\leq L\int_0^t |\delta(\xi)|\;d\xi, and therefore, using Gronwall’s lemma with M=0, we have \delta \equiv 0.

Now we prove the existence of a solution to the integral equation; to do this we build a sequence of functions \{u_n\}_{n\in \mathbb{N}} as follows,

(20)   \begin{equation*} u_{n+1}(t) = u_0 + \int_0^t f(\xi,u_n(\xi))\;d\xi \qquad \forall t \in [0,T)\end{equation*}

where u_0(t) = u_0 for all t\in [0,T) (with a slight abuse of notation we denote by u_0 both the initial datum and the first element of the sequence). Since u_0 is well defined, the integral in the above expression is well defined, and this implies that u_1 is well defined; therefore the integral defining u_2 is well defined as well, and so on. Now we prove the following points:

  1. For all t \in [0,T) the sequence \{u_n(t)\}_{n\in \mathbb{N}} is convergent; in particular we define u(t) = \underset{n\to \infty}{\lim}\, u_n(t).
  2. The function u:[0,T) \to \mathbb{R} defined above is continuous.
  3. Last, the function u:[0,T) \to \mathbb{R} defined above is a solution of (6).

To prove point 1, we observe that we can write u_n(t):[0,T)\to \mathbb{R} as follows:

(21)   \begin{equation*} u_n(t) = u_0 + \sum_{k=0}^{n-1} \omega_k(t) \qquad \forall t \in [0,T) \end{equation*}

where \omega_k(t)= u_{k+1}(t)-u_{k}(t). This means that we can limit ourselves to proving the convergence of the series \sum_{k=0}^{\infty} \omega_k(t). We use the Lipschitz continuity in the second variable and the definition of \omega_{k+1}(t) to show that,

(22)   \begin{equation*}|\omega_{k+1}(t)| = |u_{k+2}(t)-u_{k+1}(t)|=\Bigg| \int_0^t f(\xi,u_{k+1}(\xi))-f(\xi,u_{k}(\xi))d\xi \;\Bigg| \end{equation*}

(23)   \begin{equation*}\Bigg| \int_0^t f(\xi,u_{k+1}(\xi))-f(\xi,u_{k}(\xi))d\xi \;\Bigg| \leq  \int_0^t \Big| f(\xi,u_{k+1}(\xi))-f(\xi,u_{k}(\xi))\Big|\;d\xi\end{equation*}

(24)   \begin{equation*}\int_0^t \Big| f(\xi,u_{k+1}(\xi))-f(\xi,u_{k}(\xi))\Big|\;d\xi\leq L\int_0^t |u_{k+1}(\xi)-u_{k}(\xi)|\;d\xi = L\int_0^t |\omega_{k}(\xi)|\;d\xi\end{equation*}

(25)   \begin{equation*} |\omega_{k+1}(t)| \leq L\int_0^t |\omega_{k}(\xi)|\;d\xi \end{equation*}

We fix a\in(0,T) and restrict the proof to t\in [0,a], which is not restrictive since a can be chosen freely. Now \omega_0(t) is a C^0 function on a compact set, therefore |\omega_0(t)|\leq M for all t\in [0,a] for some M; iterating (25) n times we obtain:

(26)   \begin{equation*} |\omega_n(t)| \leq M\frac{(Lt)^n}{n!}\leq M\frac{(La)^n}{n!} \qquad \forall t \in [0,a]  \end{equation*}

We observe that, thanks to the criterion of absolute convergence, if \sum_{k=0}^{\infty} |\omega_k(t)| converges then also \sum_{k=0}^{\infty} \omega_k(t) converges, and since:

(27)   \begin{equation*}\sum_{k=0}^{\infty} |\omega_k(t)|\leq \sum_{k=0}^{\infty} M\frac{(La)^k}{k!}= Me^{La}\end{equation*}

the series converges, and we can define the function u:[0,T)\to\mathbb{R} as follows:

(28)   \begin{equation*} u(t) = \underset{n\to\infty}{\lim}\, u_n(t) = u_0 + \sum_{k=0}^{\infty} \omega_k(t) \end{equation*}

Now we proceed to prove point 2; to do this we once again focus our attention on the domain [0,a] with a\in (0,T). We notice that by the definition of u:[0,a]\to \mathbb{R} we have:

(29)   \begin{equation*} u(t)-u_n(t) = \sum_{k=n}^{\infty} \omega_k(t) \end{equation*}

using (26) we have the following estimate for the error,

(30)   \begin{equation*}|u(t)-u_n(t)|\leq e_n = M\sum_{k=n}^\infty \frac{(La)^k}{k!} \end{equation*}

we notice that \{e_n\} tends to zero, and therefore we can easily prove the continuity of u:[0,a]\to\mathbb{R}. In particular we fix \varepsilon > 0 and find an n^* such that |e_{n^*}|\leq \varepsilon; now we fix t,s \in [0,a] and notice that,

(31)   \begin{equation*} |u(t) - u(s) | \leq |u(t)-u_{n^*}(t)| + |u_{n^*}(t) - u_{n^*}(s)| + |u_{n^*}(s) - u(s)| \end{equation*}

(32)   \begin{equation*}|u(t) - u(s) | \leq 2|e_{n^*}| + |u_{n^*}(t) - u_{n^*}(s)| \leq 2\varepsilon + |u_{n^*}(t) - u_{n^*}(s)| \end{equation*}

since u_{n^*}:[0,a]\to\mathbb{R} is continuous, the last term can be made smaller than \varepsilon by taking s close enough to t, and therefore u:[0,a]\to\mathbb{R} is continuous as well.

We are left to check point 3, i.e. that u:[0,T)\to\mathbb{R} verifies (6); once again we focus our attention on the domain [0,a], where a can be fixed freely. We define d(t) as the difference between u(t) and the right-hand side of the Volterra equation:

(33)   \begin{equation*}|d(t)| = \Bigg| u(t) - u_0 -\int_0^t f(\xi,u(\xi)) d\xi \; \Bigg| \end{equation*}

(34)   \begin{equation*} \leq |u(t)-u_{n+1}(t)| + \Bigg|\; u_{n+1}(t) - u_0 -\int_0^t f(\xi,u_n(\xi)) d\xi\;\Bigg| + L\int_0^t |u_n(\xi)-u(\xi)|\; d\xi \end{equation*}

(35)   \begin{equation*} |d(t)| \leq |u(t)-u_{n+1}(t)|+ L\int_0^t |u_n(\xi)-u(\xi)|\; d\xi \leq |e_{n+1}|+La|e_{n}| \end{equation*}

where the middle term in (34) vanishes by the very definition (20) of u_{n+1}. Since we have already noticed that |e_n| \to 0 as n\to \infty, it is clear that |d(t)|=0 for all t\in [0,a].
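The construction in (20) is not only a proof device: iterating the integral operator numerically is the classical Picard iteration. Below is a minimal Python sketch (pure numpy, with a cumulative trapezoidal rule for the integral) on the model problem u' = u, u(0) = 1, whose iterates are the Taylor partial sums of e^t; all names are mine:

    import numpy as np

    def picard(f, u0, T, n_iter=10, n_grid=200):
        """Iterate u_{n+1}(t) = u0 + int_0^t f(s, u_n(s)) ds on [0, T]."""
        t = np.linspace(0.0, T, n_grid)
        u = np.full_like(t, u0)  # u_0(t) = u0, the constant first iterate
        for _ in range(n_iter):
            g = f(t, u)
            # cumulative trapezoidal rule: integral of g from 0 to t
            integral = np.concatenate(
                ([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))
            u = u0 + integral
        return t, u

    # model problem u' = u, u(0) = 1: the iterates converge to e^t
    t, u = picard(lambda t, u: u, u0=1.0, T=1.0)
    print(np.max(np.abs(u - np.exp(t))))  # small after a few iterations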

Separable ODE

Definition – Separable ODE We say that an ordinary differential equation is separable if, when expressed in normal form, it can be written as:

(36)   \begin{equation*}y'(t) = g(t)h(y) \end{equation*}

where h(y) and g(t) are two C^0 functions defined on the domain of the ODE; this hypothesis grants us the existence of antiderivatives for both \frac{1}{h(y)} (where h(y)\neq 0) and g(t). More details can be found in G. Gilardi, Analisi Matematica di Base.

Solving a separable ODE is, in theory, an easy matter; in fact from (36) we obtain,

(37)   \begin{equation*} \frac{1}{h(y)} \frac{dy}{dt} = g(t) \end{equation*}

integrating both sides with respect to t and simplifying \frac{dy}{dt}dt into dy we obtain:

(38)   \begin{equation*}\int \frac{1}{h(y)}dy = \int g(t) dt \end{equation*}

now if we carry out the integration in (38) we obtain a family of implicit solutions of the form:

(39)   \begin{equation*}H(y) = G(t) + C,\end{equation*}

where H(y) and G(t) are respectively the antiderivatives of \frac{1}{h(y)} and g(t).

In case the separable ODE is part of an initial value problem, the constant C can be found thanks to the initial condition.
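The recipe (37)-(39) can be mirrored symbolically; here’s a minimal sympy sketch, where the function separable_implicit_solution and the sample right-hand side are my own illustrative choices:

    import sympy as sp

    t, y, C = sp.symbols('t y C')

    def separable_implicit_solution(g, h):
        """Return the implicit solution H(y) = G(t) + C of y' = g(t)*h(y)."""
        H = sp.integrate(1 / h, y)  # antiderivative of 1/h(y)
        G = sp.integrate(g, t)      # antiderivative of g(t)
        return sp.Eq(H, G + C)

    # example: y' = t*y gives log(y) = t**2/2 + C
    print(separable_implicit_solution(g=t, h=y))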

Since this is the first time I speak about implicit solutions, I’d like to show an example of the separation of variables technique, which I borrowed from the book Advanced Engineering Mathematics by D. G. Zill.

Example Let us consider the following initial value problem,

(40)   \begin{equation*} \begin{cases}y'(x) = - \frac{x}{y} \\ y(4)=3 \end{cases}\end{equation*}

we can solve this by separation of variables; in fact, using (38) we obtain:

(41)   \begin{gather*} \int y dy = \int -x dx \\ \frac{1}{2} y^2 = - \frac{1}{2} x^2 + C \\ \frac{1}{2} \Big(x^2+y^2\Big)=C \\ x^2+y^2 = 2C \end{gather*}

This tells us that the implicit solutions of the ODE are circles centered at the origin with radius \sqrt{2C}. As mentioned before, the initial condition of the IVP is used to compute the value of the constant C,

(42)   \begin{equation*} (4)^2+(3)^2 = 2C \Leftrightarrow 2C=25 \end{equation*}

So the solution of the IVP is a circle centered at the origin and of radius 5.
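One can sanity-check this implicit solution by integrating the IVP numerically and monitoring x^2 + y^2 (scipy assumed available; the integration is stopped before y reaches 0, where the right-hand side blows up):

    import numpy as np
    from scipy.integrate import solve_ivp

    # y' = -x/y with y(4) = 3; integrate forward and check x^2 + y^2 = 25
    sol = solve_ivp(lambda x, y: -x / y, (4.0, 4.9), [3.0],
                    dense_output=True, rtol=1e-10, atol=1e-12)

    xs = np.linspace(4.0, 4.9, 50)
    radius_sq = xs**2 + sol.sol(xs)[0]**2
    print(np.max(np.abs(radius_sq - 25.0)))  # ~0 up to solver tolerance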


Remark What happens if h(y) takes the value 0 at the point y=r? Well, the integrals in equation (38) would lose meaning, because the integrand \frac{1}{h(y)} is not defined at the point y=r. However, we notice that if y\equiv r then y'(x)\equiv 0 = g(x)h(r), and therefore the constant function y\equiv r is itself a solution of the ODE, even though it may be lost by the separation procedure.

Example – Logistic Growth Model Another nice example of the technique of separation of variables is the logistic growth model. This ODE models the growth of a population (such as a bacterial population on a Petri dish) under the following assumptions:

  1. If the population is small, the growth rate is linear with respect to the population size. In particular, we assume the coefficient of linearity is a.
  2. If the population is large enough, the growth rate becomes negative; in particular, this occurs when the population size is greater than N.

We usually assume N is equal to 1; in this way x(t) represents the fraction of the ideal maximum population. Under these assumptions we get the following ODE,

(43)   \begin{equation*} x'(t) =\frac{dx}{dt}= ax(1-x) \end{equation*}

we split \frac{dx}{dt} on the two sides and divide by x(1-x) to obtain the equation,

(44)   \begin{equation*} \frac{1}{x(1-x)} dx = a dt \end{equation*}

we observe that \frac{1}{x(1-x)} = \frac{1}{x} + \frac{1}{1-x} and integrate both sides:

(45)   \begin{gather*} \int \frac{1}{x} dx + \int \frac{1}{1-x} dx = \int a dt \\ \ln|x| - \ln|1-x| = at + c \\ \ln \Big| \frac{x}{1-x}\Big| = at + c \\ \frac{x}{1-x} = Ce^{at} \Leftrightarrow (1+Ce^{at})x = Ce^{at} \\ x(t) = \frac{Ce^{at}}{1+Ce^{at}} \end{gather*}

where C=e^c is a constant determined by the initial condition; for x(0)=\frac{1}{2} we get C=1 and x(t) = \frac{e^{at}}{1+e^{at}}.
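As a check, the closed-form curve can be compared with a direct numerical integration (scipy assumed available; a = 1 and x(0) = 1/2, i.e. C = 1, are illustrative choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    a = 1.0
    t = np.linspace(0.0, 8.0, 100)

    # numerical solution of x' = a*x*(1 - x) with x(0) = 1/2
    sol = solve_ivp(lambda t, x: a * x * (1 - x), (0.0, 8.0), [0.5],
                    t_eval=t, rtol=1e-10, atol=1e-12)

    # closed-form solution with C = 1
    exact = np.exp(a * t) / (1 + np.exp(a * t))
    print(np.max(np.abs(sol.y[0] - exact)))  # agrees up to solver tolerance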


Integrating Factor

I’d like now to introduce a very nice tool to solve first-order linear ODEs; this method is called the integrating factor method. Let us consider a first-order linear ODE written in normal form,

(46)   \begin{equation*} y'(x) = \frac{dy}{dx} = -P(x)y + f(x) \end{equation*}

(47)   \begin{equation*} \frac{dy}{dx} + P(x)y = f(x) \end{equation*}

now we introduce the integrating factor associated to the ODE, defined as follows:

(48)   \begin{equation*} I(x) = \exp\bigg(\int P(x) dx\bigg) = e^{\int P(x) dx} \end{equation*}

we multiply both sides of (47) by I(x); since I'(x)=P(x)I(x), the left-hand side becomes an exact derivative, which we can integrate:

(49)   \begin{equation*} I(x)\frac{dy}{dx} + I(x)P(x)y = \frac{d}{dx}\Big[I(x)y\Big] = I(x)f(x) \end{equation*}

(50)   \begin{equation*} e^{\int P(x) dx}y+C=\int e^{\int P(x) dx}f(x) dx \end{equation*}

this leads to the solution, which is given by:

(51)   \begin{equation*} y=Ce^{-\int P(x) dx}+e^{-\int P(x) dx}\int e^{\int P(x) dx}f(x) dx \end{equation*}

We can also notice that, differentiating (50) and using the product rule, i.e. \frac{d[fg]}{dx} = f'(x)g(x)+f(x)g'(x), we obtain the following equation:

(52)   \begin{equation*} \frac{d}{dx}\Big[ e^{\int P(x)dx}y \Big] = e^{\int P(x) dx}f(x) \end{equation*}

if we expand the left-hand side and divide by I(x)=e^{\int P(x)dx} we obtain the original ordinary differential equation,

(53)   \begin{equation*} e^{\int P(x)dx} \frac{dy}{dx} + e^{\int P(x)dx} P(x)y = e^{\int P(x)dx}f(x) \end{equation*}

(54)   \begin{equation*} \frac{dy}{dx} + P(x)y = f(x) .\end{equation*}

Example We now consider the following ordinary differential equation,

(55)   \begin{equation*} \frac{dy}{dx} + y =x \end{equation*}

to solve this ODE we use the integrating factor method, we begin by computing the integrating factor:

(56)   \begin{equation*} I(x) = \exp\Bigg(\int 1 dx \Bigg) = e^x \end{equation*}

now we multiply the ODE by the integrating factor to obtain an easy to solve ODE,

(57)   \begin{equation*}e^xy'(x) + e^xy(x) = xe^x\end{equation*}

(58)   \begin{equation*}\frac{d}{dx}\Big[e^xy(x)\Big] = xe^x\end{equation*}

now we integrate both sides, using integration by parts \int fg' = fg - \int f'g, to find an explicit expression for y(x):

(59)   \begin{equation*}e^xy(x)= \int xe^x dx = xe^x - \int e^x dx = xe^x - e^x + C \qquad (f=x,\; g=e^x)\end{equation*}

(60)   \begin{equation*} y(x) = x-1+Ce^{-x}\end{equation*}


The above example is borrowed from the book Advanced Engineering Mathematics by D. G. Zill.
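sympy’s dsolve reproduces the result, which makes for a quick independent check of (60):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # dy/dx + y = x
    sol = sp.dsolve(sp.Eq(y(x).diff(x) + y(x), x), y(x))
    print(sol)  # y(x) = x - 1 + C1*exp(-x), matching (60)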

Exact ODE

In this section I’d like to introduce a particular class of first-order differential equations: exact ODEs.

Definition – Exact ODE Given two functions M,N :\Omega \to \mathbb{R}, with \Omega\subseteq\mathbb{R}^2, the differential equation represented by the differential form,

(61)   \begin{equation*}M(x,y)dx+N(x,y)dy = 0\end{equation*}

is said to be an exact ODE if and only if there exists a function f:\Omega\to\mathbb{R} such that:

(62)   \begin{equation*} M(x,y) = \partial_x f(x,y) \qquad \qquad N(x,y)=\partial_y f(x,y) \end{equation*}

Remark The differential form M(x,y)dx+N(x,y)dy = 0 is associated with the ordinary differential equation N(x,y) \frac{dy}{dx} = -M(x,y), which can be expressed in normal form as:

(63)   \begin{equation*} y'(x) = -\frac{M(x,y)}{N(x,y)} \end{equation*}

There is an easy criterion to check whether an ODE is exact, based on Schwarz’s theorem, which states that for a C^2 function \partial_{xy}f (x,y)=\partial_{yx}f(x,y). Therefore, given an exact ODE, the following identity must be verified:

(64)   \begin{equation*}\partial_y M(x,y) = \partial_x N(x,y) \end{equation*}

Exact ODEs can be solved easily thanks to the following observations. Since \partial_y f(x,y) = N(x,y), integrating with respect to y determines f up to an additive function h(x); the condition \partial_x f(x,y) = M(x,y) then determines h'(x):

(65)   \begin{equation*}f(x,y)-h(x) = \int \partial_y f(x,y) dy =\int N(x,y) dy \end{equation*}

(66)   \begin{equation*}M(x,y)=\partial_x f(x,y)=h'(x)+ \partial_x \int N(x,y) dy \end{equation*}

now we are left with integrating h'(x) to find an implicit solution f(x,y)=C of the ordinary differential equation.
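Both the exactness test (64) and the reconstruction of f can be automated; here’s a minimal sympy sketch (the function solve_exact is my own name):

    import sympy as sp

    x, y = sp.symbols('x y')

    def solve_exact(M, N):
        """If M dx + N dy is exact, return f with f_x = M and f_y = N,
        so that f(x, y) = C is an implicit solution."""
        if sp.simplify(sp.diff(M, y) - sp.diff(N, x)) != 0:
            raise ValueError("the differential form is not exact")
        f = sp.integrate(N, y)                    # f up to a function h(x)
        h_prime = sp.simplify(M - sp.diff(f, x))  # what is left of f_x
        return f + sp.integrate(h_prime, x)

    # a simple exact form: (2*x*y) dx + (x**2) dy = 0 has f = x**2 * y
    print(solve_exact(2*x*y, x**2))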

Example In this example I’d like to solve the following ODE,

(67)   \begin{equation*}\frac{dy}{dx}=\frac{xy^2-\cos(x)\sin(x)}{y(1-x^2)}\end{equation*}

so we multiply by y(1-x^2) and then split \frac{dy}{dx} to obtain a differential form,

(68)   \begin{equation*}\Big(xy^2-\cos(x)\sin(x)\Big)dx = y(1-x^2) dy\end{equation*}

(69)   \begin{equation*}\Big(xy^2-\cos(x)\sin(x)\Big)dx - y(1-x^2) dy=0\end{equation*}

now we recognize that this is an exact differential equation,

(70)   \begin{equation*}\partial_y \Big(xy^2-\cos(x)\sin(x)\Big) = 2xy \qquad \partial_x \Big(-y+x^2y\Big)=2xy\end{equation*}

This tells us that there exists a function f:\Omega \to \mathbb{R} such that \partial_y f(x,y) = y(x^2-1), and therefore,

(71)   \begin{equation*}f(x,y) = \int \partial_y f(x,y) dy = \int (x^2y-y) dy =\frac{y^2}{2}(x^2-1)+h(x)=\frac{x^2y^2}{2}-\frac{y^2}{2}+h(x)\end{equation*}

now we notice that \partial_x f(x,y) = xy^2-\cos(x)\sin(x), and therefore,

(72)   \begin{equation*} xy^2+h'(x) = xy^2-\cos(x)\sin(x)\end{equation*}

(73)   \begin{equation*} h'(x) = -\cos(x)\sin(x) \end{equation*}

(74)   \begin{equation*} h(x) = \int \cos(x)\Big(-\sin(x)\Big) dx = \frac{1}{2}\cos^2(x)\end{equation*}

This tells us that the solutions of the ODE are given, for any C\in\mathbb{R}, by the implicit equation,

(75)   \begin{equation*}\frac{y^2}{2}(x^2-1)+\frac{1}{2}\cos^2(x)=C\end{equation*}
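A quick symbolic verification that (75) indeed defines solutions of (67): differentiate the implicit relation F(x, y(x)) = C and solve for y' (sympy assumed available):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # left-hand side of the implicit solution (75)
    F = y(x)**2 / 2 * (x**2 - 1) + sp.cos(x)**2 / 2

    # differentiating F(x, y(x)) = C and solving for y' must give back (67)
    yprime = sp.solve(sp.Eq(F.diff(x), 0), y(x).diff(x))[0]
    rhs = (x * y(x)**2 - sp.cos(x) * sp.sin(x)) / (y(x) * (1 - x**2))
    print(sp.simplify(yprime - rhs))  # 0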


Conclusion

In the next post, I’d like to introduce some topics related to the qualitative study of dynamical systems, such as sources, sinks and the phase line. Further, I’d like to introduce notions related to the asymptotic behaviour of a dynamical system, such as the idea of stable solutions. Last, by studying some modifications of the logistic model equation, I’d like to introduce the concepts of bifurcations and Poincaré maps.
