Local Existence and Uniqueness of IVP

October 9, 2017

In this note, we prove the local existence and uniqueness of solutions to the initial value problem (IVP) for ODEs

    \[ \begin{cases} \frac{du}{dt}\left(t\right)=F\left(u\left(t\right)\right) & \text{for all }t\in I\\ u\left(t_{0}\right)=u_{0}. \end{cases} \]

where F satisfies a Lipschitz condition.

1. Function spaces

To discuss our main topic, we need to extend some concepts that we have already studied. In linear algebra, we studied vector spaces, which are sets satisfying a list of axioms; the `vectors' from high school are one example. One can also check that the space of real-valued continuous functions on \left[a,b\right] is a vector space under the usual addition and scalar multiplication. We denote this space by C\left[a,b\right].

In the Euclidean space \mathbb{R}^{n}, we can measure the distance between two vectors \boldsymbol{v}=\left(v_{1},\dots,v_{n}\right) and \boldsymbol{w}=\left(w_{1},\dots,w_{n}\right) by

    \[ \sqrt{\sum_{i=1}^{n}\left(v_{i}-w_{i}\right)^{2}}. \]

Likewise, since C\left[a,b\right] is a vector space, one would like to measure the distance between two vectors, in this case two functions in C\left[a,b\right]. There are some issues to consider. First, how can we define a distance on C\left[a,b\right]? Second, is that distance useful?

There are many ways to define a distance on C\left[a,b\right]. Here we pick a particular one. Consider C\left[a,b\right] with \Norm{\cdot}_{\infty}, where

    \[ \Norm f_{\infty}=\sup_{x\in\left[a,b\right]}\left|f\left(x\right)\right|. \]

We check that \Norm{\cdot}_{\infty} induces a distance on C\left[a,b\right]. Define d:C\left[a,b\right]\times C\left[a,b\right]\rightarrow\mathbb{R} by d\left(f,g\right)=\Norm{f-g}_{\infty}. For d to be a distance, it should satisfy the basic properties that the Euclidean distance enjoys:

  • d is nonnegative. Indeed, d\left(f,g\right)=\Norm{f-g}_{\infty}=\sup_{x\in\left[a,b\right]}\left|f\left(x\right)-g\left(x\right)\right|\ge0.
  • d\left(f,g\right)=0 if and only if f=g. Indeed, \sup_{x\in\left[a,b\right]}\left|f\left(x\right)-g\left(x\right)\right|=0 implies \left|f\left(y\right)-g\left(y\right)\right|\le\Norm{f-g}_{\infty}=0 for all y\in\left[a,b\right]. So f=g. The converse is obvious.
  • d\left(f,g\right)=d\left(g,f\right). It is also easy since \left|f\left(x\right)-g\left(x\right)\right|=\left|g\left(x\right)-f\left(x\right)\right|.
  • d\left(f,g\right)\le d\left(f,h\right)+d\left(h,g\right) for all f,g,h. Indeed, this follows from the triangle inequality in \mathbb{R}: for every x\in\left[a,b\right],

        \[ \left|f\left(x\right)-g\left(x\right)\right|\le\left|f\left(x\right)-h\left(x\right)\right|+\left|h\left(x\right)-g\left(x\right)\right|\le d\left(f,h\right)+d\left(h,g\right). \]

    Taking the supremum over x\in\left[a,b\right] gives

        \[ d\left(f,g\right)\le d\left(f,h\right)+d\left(h,g\right). \]

This shows d is a distance. So \Norm{\cdot}_{\infty} induces a distance d.
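To make this distance concrete, here is a small numerical sketch in Python. The helper name `sup_distance` and the grid size are illustrative choices, not part of the text; sampling on a grid only approximates the supremum from below.

```python
import math

def sup_distance(f, g, a, b, n=10_000):
    """Approximate d(f, g) = sup over [a, b] of |f(x) - g(x)|.

    Numerical sketch only: the supremum is evaluated on a uniform grid,
    so the result is a lower bound that improves as n grows.
    """
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return max(abs(f(x) - g(x)) for x in xs)

# On [0, pi], |sin x - cos x| = sqrt(2)|sin(x - pi/4)| peaks at x = 3pi/4,
# so d(sin, cos) = sqrt(2) on C[0, pi].
dist = sup_distance(math.sin, math.cos, 0.0, math.pi)
```

The same helper lets one spot-check the four distance properties above on particular functions, e.g. `sup_distance(f, f, a, b)` is always 0.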

To start doing analysis, we need to ensure that C\left[a,b\right] with this distance behaves like \mathbb{R}^{n}. One of the important properties of \mathbb{R}^{n} is completeness. Here completeness means Cauchy completeness, i.e., for any Cauchy sequence \left\{ \boldsymbol{a}_{k}\right\} in \mathbb{R}^{n}, there exists \boldsymbol{a}\in\mathbb{R}^{n} such that \boldsymbol{a}_{k}\rightarrow\boldsymbol{a}. We want the same kind of completeness for C\left[a,b\right] with d.

Now recall the definition of uniform convergence. Let \left\{ f_{n}\right\} be a sequence of real-valued functions on a set E and f a real-valued function on E. We say that \left\{ f_{n}\right\} converges uniformly to f on E if for every \varepsilon>0, there exists an N=N\left(\varepsilon\right)\in\mathbb{N} such that

    \[ \left|f_{n}\left(x\right)-f\left(x\right)\right|<\varepsilon\quad\text{for all }x\in E\,\text{ and all }n\ge N. \]

The function f is called the uniform limit of the sequence \left\{ f_{n}\right\}. Note that

    \[ \lim_{n\rightarrow\infty}d\left(f_{n},f\right)=\lim_{n\rightarrow\infty}\sup_{x\in\left[a,b\right]}\left|f_{n}\left(x\right)-f\left(x\right)\right|. \]

So convergence in C\left[a,b\right] with respect to d is exactly uniform convergence. Now by Theorem 5.46 (Cauchy criterion for uniform convergence) in the textbook, any Cauchy sequence \left\{ f_{n}\right\} in C\left[a,b\right] converges uniformly on \left[a,b\right]. Since \left\{ f_{n}\right\} is a sequence of continuous functions on \left[a,b\right], its uniform limit f is also continuous, so f\in C\left[a,b\right]. This shows C\left[a,b\right] with d is complete.
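As a numerical illustration of why d detects uniform convergence (the helper and grid resolution below are my own choices): f_{n}\left(x\right)=x/n converges uniformly to 0 on \left[0,1\right], while f_{n}\left(x\right)=x^{n} converges only pointwise there, and the sup-distance sees the difference.

```python
def sup_norm(f, a, b, n=2_000):
    """Grid approximation (a lower bound) of sup over [a, b] of |f(x)|."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return max(abs(f(x)) for x in xs)

def pointwise_limit(x):
    # Pointwise limit of x^n on [0, 1]: 0 for x < 1, and 1 at x = 1.
    return 1.0 if x == 1.0 else 0.0

# f_n(x) = x/n converges uniformly to 0: d(f_n, 0) = 1/n -> 0.
uniform = [sup_norm(lambda x, n=n: x / n, 0.0, 1.0) for n in (1, 10, 100)]

# f_n(x) = x^n converges pointwise but not uniformly: d(f_n, limit)
# stays close to 1 for every n, so {f_n} does not converge in C[0, 1].
not_uniform = [sup_norm(lambda x, n=n: x ** n - pointwise_limit(x), 0.0, 1.0)
               for n in (1, 10, 100)]
```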

Complete spaces have various properties we can discuss in analysis. In the next section, we study one of the important properties related to completeness.

2. Contraction Mapping Principle

Recall the Problem 1.14 in our textbook.

A sequence \left\{ a_{n}\right\} of real numbers is said to be \emph{contractive }if there exists a constant 0<\theta<1 such that

    \[ \left|a_{n+1}-a_{n}\right|\le\theta\left|a_{n}-a_{n-1}\right|\quad\text{for all }n\ge2. \]

Prove that \left|a_{n+1}-a_{n}\right|\le\theta^{n-1}\left|a_{2}-a_{1}\right| for all n\ge1. Also prove that \left\{ a_{n}\right\} is convergent. Finally, prove that if a=\lim_{n\rightarrow\infty}a_{n}, then

    \[ \left|a_{n}-a\right|\le\frac{\theta^{n-1}}{1-\theta}\left|a_{2}-a_{1}\right|\quad\text{for all }n\ge1. \]
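A quick numerical check of this error bound, using the illustrative sequence a_{n+1}=a_{n}/2+1 (so \theta=1/2, with limit 2); the particular sequence and constants are assumptions chosen for the sketch, not part of the problem.

```python
# A concrete contractive sequence: a_{n+1} = a_n/2 + 1 with theta = 1/2,
# whose limit is the fixed point a = 2 of x -> x/2 + 1.
theta = 0.5
a = [5.0]                           # a_1
for _ in range(30):
    a.append(a[-1] / 2 + 1)         # a_{n+1} = a_n/2 + 1

limit = 2.0
c = abs(a[1] - a[0]) / (1 - theta)  # |a_2 - a_1| / (1 - theta)

# Error bound from the problem: |a_n - a| <= theta^(n-1)/(1-theta)|a_2 - a_1|.
# Here a[i] is a_{i+1}, so the bound for a[i] is theta^i * c.
bounds_hold = all(abs(a[i] - limit) <= theta ** i * c + 1e-12
                  for i in range(len(a)))
```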

In the proof of this problem, we used the Cauchy criterion to ensure that \left\{ a_{n}\right\} converges. In the previous section, we observed that C\left[a,b\right] with d is complete, so one expects a similar fact to hold there. Indeed, it does. As an application, we study the convergence of recursively defined sequences, which is related to fixed points.

Theorem (Contraction mapping principle). Let \Phi:C\left[a,b\right]\rightarrow C\left[a,b\right] be a contraction, i.e., suppose there exists 0\le\theta<1 such that, for all f,g\in C\left[a,b\right],

    \[ d\left(\Phi\left(f\right),\Phi\left(g\right)\right)\le\theta d\left(f,g\right). \]

Then there exists a unique function f\in C\left[a,b\right] such that \Phi\left(f\right)=f.

Proof. Uniqueness is easy. If \Phi\left(f\right)=f and \Phi\left(g\right)=g, then

    \[ d\left(f,g\right)=d\left(\Phi\left(f\right),\Phi\left(g\right)\right)\le\theta d\left(f,g\right) \]

which can only happen when d\left(f,g\right)=0. So f=g.

Let f_{0}\in C\left[a,b\right]. Define

    \[ f_{n+1}=\Phi\left(f_{n}\right)\quad\left(n=0,1,2,\dots\right). \]

Then d\left(\Phi\left(f_{n}\right),\Phi\left(f_{n-1}\right)\right)\le\theta d\left(f_{n},f_{n-1}\right). Iterate this so that

    \[ d\left(f_{n+1},f_{n}\right)=d\left(\Phi\left(f_{n}\right),\Phi\left(f_{n-1}\right)\right)\le\cdots\le\theta^{n}d\left(f_{1},f_{0}\right). \]

So if n<m, then by the triangle inequality and the geometric series,

    \[ d\left(f_{m},f_{n}\right)\le\sum_{k=n}^{m-1}d\left(f_{k+1},f_{k}\right)\le\sum_{k=n}^{m-1}\theta^{k}d\left(f_{1},f_{0}\right)\le\frac{\theta^{n}}{1-\theta}d\left(f_{1},f_{0}\right). \]

For \varepsilon>0, choose N so that \frac{\theta^{N}}{1-\theta}d\left(f_{1},f_{0}\right)<\varepsilon. So n,m>N implies d\left(f_{m},f_{n}\right)<\varepsilon. Hence \left\{ f_{n}\right\} is a Cauchy sequence in C\left[a,b\right]. So by the completeness of C\left[a,b\right], there exists f\in C\left[a,b\right] such that

    \[ \lim_{n\rightarrow\infty}\Norm{f_{n}-f}_{\infty}=0. \]

Now from

    \[ d\left(\Phi\left(f_{n}\right),\Phi\left(f\right)\right)\le\theta d\left(f_{n},f\right), \]

we conclude that

    \[ \Phi\left(f\right)=\lim_{n\rightarrow\infty}\Phi\left(f_{n}\right)=\lim_{n\rightarrow\infty}f_{n+1}=f \]

where both limits are uniform limits. This completes the proof.
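The proof is constructive, which suggests an algorithm: start anywhere and iterate \Phi. Here is a sketch on \mathbb{R} rather than C\left[a,b\right] (legitimate, since the principle holds in any complete metric space); the function name `fixed_point` and the choice of \cos as the contraction are my own illustrative assumptions.

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=10_000):
    """Successive approximation x_{n+1} = phi(x_n), mirroring the
    constructive proof of the contraction mapping principle on R."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos maps [0, 1] into itself and |cos'| = |sin| <= sin(1) < 1 there, so
# cos is a contraction on [0, 1]; the iteration finds its unique fixed point.
root = fixed_point(math.cos, 1.0)
```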

Remark. The above theorem holds for any complete metric space, which will be studied in Topology I.

3. Existence and uniqueness of ODE

Now we give one application of the contraction mapping principle. We consider the following initial value problem for a first-order ODE: find u\in C^{1}\left(I\right) such that

(1)   \begin{equation*} \begin{cases} \frac{du}{dt}\left(t\right)=F\left(u\left(t\right)\right) & \text{for all }t\in I\\ u\left(t_{0}\right)=u_{0} \end{cases} \end{equation*}

where I\subset\mathbb{R} is an interval containing t_{0} and F is a Lipschitz function. Observe that u\in C^{1}\left(I\right) is a solution of (1) if and only if u\in C\left(I\right) satisfies

(2)   \begin{equation*} u\left(t\right)=u_{0}+\int_{t_{0}}^{t}F\left(u\left(s\right)\right)ds\quad\text{for all }t\in I. \end{equation*}

This follows from the fundamental theorem of calculus.

Theorem. Suppose that F:\mathbb{R}\rightarrow\mathbb{R} is a Lipschitz function. Then the equation (1) has a unique C^{1}-solution u on I_{\delta}=\left[t_{0}-\delta,t_{0}+\delta\right] for some \delta>0.

Recall that F is Lipschitz with constant K means that there exists a constant K>0 such that

    \[ \left|F\left(x\right)-F\left(y\right)\right|\le K\left|x-y\right| \]

for all x,y\in\mathbb{R}.

Proof. It suffices to prove that there exists u\in C\left(I_{\delta}\right) satisfying (2). Note that, as we saw before, C\left(I_{\delta}\right) is complete with respect to \Norm{\cdot}_{\infty}.

Define \Phi_{u_{0}}:C\left(I_{\delta}\right)\rightarrow C\left(I_{\delta}\right) by

    \[ \Phi_{u_{0}}\left(u\right)\left(t\right)=u_{0}+\int_{t_{0}}^{t}F\left(u\left(s\right)\right)ds. \]

If we show \Phi_{u_{0}} has a fixed point u, then this u satisfies (2) and hence is a solution of (1).

Observe that

    \begin{align*} d\left(\Phi_{u_{0}}\left(u\right),\Phi_{u_{0}}\left(v\right)\right) & =\sup_{t\in I_{\delta}}\left|\Phi_{u_{0}}\left(u\right)\left(t\right)-\Phi_{u_{0}}\left(v\right)\left(t\right)\right|\\ & =\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left[F\left(u\left(s\right)\right)-F\left(v\left(s\right)\right)\right]ds\right|\\ & \le\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left|F\left(u\left(s\right)\right)-F\left(v\left(s\right)\right)\right|ds\right|\\ & \le K\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left|u\left(s\right)-v\left(s\right)\right|ds\right|\\ & \le K\sup_{t\in I_{\delta}}\left|t-t_{0}\right|d\left(u,v\right)\\ & \le K\delta d\left(u,v\right). \end{align*}

Now choose \delta>0 so that K\delta<1. Hence by the contraction mapping principle, there exists a unique fixed point u. This completes the proof.
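The fixed-point iteration in this proof is the classical Picard iteration, and it can be carried out numerically. Below is a discretized sketch for u'=u, u\left(0\right)=1 on \left[-1/2,1/2\right]; the grid size, trapezoid rule, and iteration count are all illustrative assumptions, not part of the proof, and the grid representation only mirrors the contraction argument approximately.

```python
import math

def picard(F, t0, u0, delta, n_grid=400, n_iter=30):
    """Picard iteration u_{k+1}(t) = u0 + int_{t0}^{t} F(u_k(s)) ds
    on I_delta = [t0 - delta, t0 + delta].

    Iterates are stored as values on a uniform grid (n_grid even) and
    the integral is approximated by the trapezoid rule.
    """
    h = 2 * delta / n_grid
    ts = [t0 - delta + i * h for i in range(n_grid + 1)]
    i0 = n_grid // 2                      # grid index of t0
    u = [u0] * (n_grid + 1)               # u_0: the constant function u0
    for _ in range(n_iter):
        f = [F(x) for x in u]
        new = [0.0] * (n_grid + 1)
        new[i0] = u0
        for i in range(i0, n_grid):       # integrate rightward from t0
            new[i + 1] = new[i] + h * (f[i] + f[i + 1]) / 2
        for i in range(i0, 0, -1):        # integrate leftward from t0
            new[i - 1] = new[i] - h * (f[i] + f[i - 1]) / 2
        u = new
    return ts, u

# u' = u, u(0) = 1 on [-1/2, 1/2]: the iterates converge to exp(t).
ts, u = picard(lambda x: x, 0.0, 1.0, 0.5)
err = max(abs(ui - math.exp(t)) for t, ui in zip(ts, u))
```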

Remark. If we tried to apply the contraction mapping principle on C^{1}\left(I_{\delta}\right) directly, we would run into trouble: C^{1}\left(I_{\delta}\right) is not complete under the distance d\left(f,g\right)=\Norm{f-g}_{\infty}. For example, f_{n}\left(x\right)=\left|x\right|^{1+\frac{1}{n}} converges uniformly to f\left(x\right)=\left|x\right| on \left[-1,1\right], and each f_{n}\in C^{1}\left(\left[-1,1\right]\right), but f is not differentiable at 0.
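A numerical check of this example (the helper name and grid resolution below are arbitrary choices): the sup-distance from f_{n} to \left|x\right| shrinks with n, even though the limit leaves C^{1}.

```python
# f_n(x) = |x|^(1 + 1/n) is C^1 on [-1, 1], and its sup-distance to
# f(x) = |x| shrinks as n grows, although f has no derivative at 0.
def d_to_abs(n, grid=4_000):
    xs = [-1.0 + 2.0 * i / grid for i in range(grid + 1)]
    return max(abs(abs(x) ** (1 + 1 / n) - abs(x)) for x in xs)

dists = [d_to_abs(n) for n in (1, 10, 100)]
```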

Remark. The kth order ODE given by

    \[ \frac{d^{k}u}{dt^{k}}\left(t\right)=F\left(u\left(t\right),\frac{du}{dt}\left(t\right),\dots,\frac{d^{k-1}u}{dt^{k-1}}\left(t\right)\right) \]

can be reduced to a first-order ODE. Define \tilde{u}:I\rightarrow\mathbb{R}^{k} by

    \[ \tilde{u}\left(t\right)=\left(u\left(t\right),\frac{du}{dt}\left(t\right),\dots,\frac{d^{k-1}u}{dt^{k-1}}\left(t\right)\right). \]

Then \tilde{u} satisfies

    \[ \frac{d\tilde{u}}{dt}\left(t\right)=\tilde{F}\left(\tilde{u}\left(t\right)\right) \]

where \tilde{F}:\mathbb{R}^{k}\rightarrow\mathbb{R}^{k} is the function

    \[ \tilde{F}\left(u_{0},\dots,u_{k-1}\right)=\left(u_{1},\dots,u_{k-1},F\left(u_{0},\dots,u_{k-1}\right)\right). \]
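A sketch of this reduction in code, using the test equation u''=-u (so F\left(u_{0},u_{1}\right)=-u_{0}, with solution \cos t) and forward Euler as an illustrative integrator; the function names and the integrator are my own assumptions, not part of the remark.

```python
import math

def solve_reduced(F, init, t_end, n=20_000):
    """Integrate the reduced first-order system tilde_u' = tilde_F(tilde_u)
    with forward Euler, where
    tilde_F(u_0, ..., u_{k-1}) = (u_1, ..., u_{k-1}, F(u_0, ..., u_{k-1})).
    """
    def tilde_F(v):
        return v[1:] + [F(*v)]

    h = t_end / n
    v = list(init)                      # tilde_u(0)
    for _ in range(n):
        dv = tilde_F(v)
        v = [vi + h * dvi for vi, dvi in zip(v, dv)]
    return v                            # tilde_u(t_end)

# u'' = -u with u(0) = 1, u'(0) = 0 has solution u(t) = cos(t),
# so tilde_u(1) should approximate (cos 1, -sin 1).
v = solve_reduced(lambda u0, u1: -u0, [1.0, 0.0], 1.0)
```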
