
Dual norm in R^n

Using the separating hyperplane theorem, we prove that the dual norm of the dual norm is the original norm.

1. Extension Theorem

Let \Norm{\cdot} be a norm on \mathbb{R}^{n}. Its dual norm \Norm{\cdot}_{*} is defined by

    \[ \Norm y_{*}=\max\left\{ x^{T}y:x\in\mathbb{R}^{n},\Norm x\le1\right\} . \]
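As a standard worked example (not needed for the argument below), take the \ell_{1} norm \Norm x_{1}=\sum_{i}\left|x_{i}\right|. Its dual norm is the \ell_{\infty} norm:

    \[ \Norm y_{*}=\max\left\{ x^{T}y:\Norm x_{1}\le1\right\} =\max_{1\le i\le n}\left|y_{i}\right|=\Norm y_{\infty}, \]

since x^{T}y\le\sum_{i}\left|x_{i}\right|\left|y_{i}\right|\le\Norm x_{1}\max_{i}\left|y_{i}\right|, with equality when x=\pm e_{i_{0}} for a coordinate i_{0} maximizing \left|y_{i_{0}}\right|.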

For any x\in\mathbb{R}^{n}, we show

    \[ \Norm x=\max\left\{ x^{T}y:y\in\mathbb{R}^{n},\Norm y_{*}\le1\right\} . \]

By definition, we have

    \[ x^{T}y\le\Norm x\Norm y_{*}. \]

So

    \[ \max\left\{ x^{T}y:y\in\mathbb{R}^{n},\Norm y_{*}\le1\right\} \le\Norm x. \]

It remains to prove the reverse inequality. We will show that there exists y\in\mathbb{R}^{n} such that \Norm y_{*}=1 and

(1)   \begin{equation*} x^{T}y=\Norm x. \end{equation*}

Theorem 1. Let A be a subspace of \mathbb{R}^{n} and let f:A\rightarrow\mathbb{R} be a linear functional on A satisfying

    \[ \left|f\left(x\right)\right|\le\Norm x\quad\text{for all }x\in A. \]

Then there exists a\in\mathbb{R}^{n} satisfying

    \[ f\left(x\right)=a^{T}x\quad\text{for all }x\in A \]

and

    \[ \left|a^{T}x\right|\le\Norm x\quad\text{for all }x\in\mathbb{R}^{n}. \]

One can prove this theorem directly; see [1] or [3]. However, we prove it using the following lemma:

Lemma 1. Let A be an affine set in \mathbb{R}^{n} and let C be a non-empty convex subset of \mathbb{R}^{n}, not intersecting A. Then there exists a hyperplane H in \mathbb{R}^{n} containing A and not intersecting C.

Proof. By the separating hyperplane theorem, there exist a\in\mathbb{R}^{n}\setminus\left\{ 0\right\} and b\in\mathbb{R} such that

    \[ a^{T}x\ge b\quad\text{for all }x\in A\quad\text{and}\quad a^{T}x\le b\quad\text{for all }x\in C. \]

Define

    \[ H=\left\{ z\in\mathbb{R}^{n}:a^{T}z=b\right\} . \]

Then it is a hyperplane in \mathbb{R}^{n}. There are two cases to consider. First, suppose A\cap H\neq\varnothing. Then there exists x_{0}\in A with a^{T}x_{0}=b. For x\in A, note that 2x_{0}-x\in A since A is an affine set. So

    \[ a^{T}\left(2x_{0}-x\right)\ge b. \]

On the other hand,

    \[ a^{T}\left(2x_{0}-x\right)=2b-a^{T}x\le b. \]

Thus,

    \[ a^{T}\left(2x_{0}-x\right)=b. \]

This implies

    \[ a^{T}x=b, \]

which proves A\subset H.

Now suppose A\cap H=\varnothing. After a translation, we may assume 0\in A, so that A is a subspace of \mathbb{R}^{n}. Since 0\in A, we have 0=a^{T}0\ge b, and 0\notin H gives b\neq0; hence b<0. We claim that a^{T}x=0 for all x\in A. If not, there exists x_{0}\in A with a^{T}x_{0}\neq0. If a^{T}x_{0}<0, then tx_{0}\in A for all t>0 and a^{T}\left(tx_{0}\right)=ta^{T}x_{0}\rightarrow-\infty as t\rightarrow\infty, so there exists t_{0}>0 with a^{T}\left(t_{0}x_{0}\right)<b, which contradicts a^{T}x\ge b for all x\in A. If a^{T}x_{0}>0, the same argument with t\rightarrow-\infty gives a contradiction. Hence a^{T}x=0 for all x\in A.

Since b<0, we have a^{T}x\le b<0 for all x\in C, while a^{T}x=0 for all x\in A. This shows that \left\{ x\in\mathbb{R}^{n}:a^{T}x=0\right\} is a hyperplane in \mathbb{R}^{n} containing A and not intersecting C.

Thus, both cases give the desired result. This completes the proof.


Using this lemma, we are ready to prove the theorem.

Proof of Theorem 1. Define

    \[ H=\left\{ x\in A:f\left(x\right)=1\right\} . \]

Then H is an affine subset of \mathbb{R}^{n}, possibly empty. If H=\varnothing, then f\left(x\right)\neq1 for all x\in A; since f is linear, f must be the zero map, and a=0 works, so there is nothing to prove. Hence we assume H\neq\varnothing and fix x_{0}\in A with f\left(x_{0}\right)=1.

Consider the open unit ball

    \[ C=\left\{ x\in\mathbb{R}^{n}:\Norm x<1\right\} . \]

Then C is convex and H\cap C=\varnothing: if x\in H\cap C, then 1=\left|f\left(x\right)\right|\le\Norm x<1, a contradiction. Hence by Lemma 1, there exists a hyperplane H_{1} containing H with H_{1}\cap C=\varnothing. So there exist a\in\mathbb{R}^{n}\setminus\left\{ 0\right\} and b\in\mathbb{R} such that

    \[ H_{1}=\left\{ x\in\mathbb{R}^{n}:a^{T}x=b\right\} \]

with a^{T}x\le b for all x\in C (after replacing \left(a,b\right) by \left(-a,-b\right) if necessary; this is possible since the convex set C does not meet H_{1}). For small t>0, ta\in C and a^{T}\left(ta\right)=ta^{T}a>0, so b>0. Replacing a by a/b, we may assume

    \[ H_{1}=\left\{ x\in\mathbb{R}^{n}:a^{T}x=1\right\} . \]

Then a^{T}x\le1 for all x\in C, and by continuity a^{T}x\le1 whenever \Norm x\le1.

For a nonzero vector x\in\mathbb{R}^{n}, \pm\frac{x}{\Norm x} lies in the closed unit ball, so

    \[ a^{T}\left(\pm\frac{x}{\Norm x}\right)\le1, \]

and this shows

    \[ \left|a^{T}x\right|\le\Norm x\quad\text{for all }x\in\mathbb{R}^{n}. \]

If x\in A with f\left(x\right)\neq0, then \frac{x}{f\left(x\right)}\in H\subset H_{1}, so

    \[ a^{T}\left(\frac{x}{f\left(x\right)}\right)=1 \]

and so

    \[ f\left(x\right)=a^{T}x. \]

If x\in A with f\left(x\right)=0, then

    \[ f\left(x+x_{0}\right)=1 \]

and so

    \[ a^{T}\left(x+x_{0}\right)=1. \]

Since a^{T}x_{0}=1 \left(\text{as }x_{0}\in H\subset H_{1}\right), we get a^{T}x=0=f\left(x\right). So

    \[ f(x) =a^T x\quad \text{for all } x\in A \]

and

    \[ |a^T x|\leq \Norm{x} \quad \text{for all } x \in \mathbb{R}^n. \]

This completes the proof of the theorem.


2. Proof of the duality

Now we are ready to prove the claim (1). If x=0, there is nothing to prove. If x\neq0, consider A=\mathbb{R}x. Define f:A\rightarrow\mathbb{R} by

(2)   \begin{equation*} f\left(tx\right)=t\Norm x^{2}. \end{equation*}

Clearly f is linear, f\left(x\right)=\Norm x^{2}, and

    \begin{align*} \Norm f & =\max_{\Norm z\le1}\left|f\left(z\right)\right|\\ & =\max_{\Norm{tx}\le1}\left|t\Norm x^{2}\right|=\Norm x. \end{align*}

Also,

    \[ \left|f\left(z\right)\right|\le\Norm f\Norm z \]

for all z\in A. Hence by Theorem 1 (applied with the norm \Norm f\Norm{\cdot}, which is a norm since \Norm f=\Norm x>0), there exists a\in\mathbb{R}^{n}\setminus\left\{ 0\right\} with

    \[ f\left(z\right)=a^{T}z\quad\text{for all }z\in A \]

with

    \[ \left|a^{T}z\right|\le\Norm f\Norm z\quad\text{for all }z\in\mathbb{R}^{n}. \]

Since

    \[ \Norm f=\max_{z\in A,\Norm z\le1}\left|a^{T}z\right|\le\max_{z\in\mathbb{R}^{n},\Norm z\le1}\left|a^{T}z\right|=\Norm a_{*}\le\Norm f, \]

we conclude that

    \[ \Norm a_{*}=\Norm f=\Norm x. \]

From (2), if we take \tilde{a}=\frac{a}{\Norm x}, then \Norm{\tilde{a}}_{*}=1 and \tilde{a}^{T}x=\Norm x, which proves the claim.

Therefore,

    \[ \Norm x=\max\left\{ x^{T}y:y\in\mathbb{R}^{n},\Norm y_{*}\le1\right\} . \]
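As a hedged numerical aside (the function names below are mine, not from the argument), one can test the duality for the \ell_{1} norm, whose dual is \ell_{\infty}: the maximum of x^{T}y over the cube \Norm y_{\infty}\le1 equals \Norm x_{1}.

```python
# Numerical check of norm duality for the l1 norm (whose dual is l-infinity):
# max of x^T y over ||y||_inf <= 1 equals ||x||_1, attained at y = sign(x).
import itertools

def l1(x):
    return sum(abs(t) for t in x)

def pairing_max(x, grid=21):
    # brute-force max of x^T y over the cube ||y||_inf <= 1; a grid containing
    # the vertices suffices, since a linear functional is maximized at a vertex
    pts = [i / (grid - 1) * 2 - 1 for i in range(grid)]
    return max(sum(xi * yi for xi, yi in zip(x, y))
               for y in itertools.product(pts, repeat=len(x)))

x = [3.0, -4.0, 1.0]
print(l1(x))           # 8.0
print(pairing_max(x))  # 8.0, attained at y = (1, -1, 1)
```

The grid includes the vertices \pm1 in every coordinate, so the brute-force maximum is exact here.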

3. Some remarks

The motivation of the proof is the philosophy that the Hahn-Banach theorem and the separating hyperplane theorem are closely related: one can also derive the separating hyperplane theorem from an extension of Theorem 1, so the two statements are in fact equivalent.

The proof of Theorem 1 using Lemma 1 is given in [2]. We have modified the lemma, the theorem, and their proofs to suit our finite-dimensional setting.

References

  1. H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer, 2011.
  2. H. H. Schaefer and M. P. Wolff, Topological Vector Spaces, Springer, 1966.
  3. E. Stein and R. Shakarchi, Functional Analysis: Introduction to Further Topics in Analysis, Princeton University Press, 2011.

Local Existence and Uniqueness of IVP

In this note, we prove local existence and uniqueness for the initial value problem of the ODE

    \[ \begin{cases} \frac{du}{dt}\left(t\right)=F\left(u\left(t\right)\right) & \text{for all }t\in I\\ u\left(t_{0}\right)=u_{0}. \end{cases} \]

where F satisfies a Lipschitz condition.

1. Function spaces

To discuss our main topic, we need to extend some concepts that we have already studied. In linear algebra, we studied vector spaces, which satisfy the vector space axioms; the `vectors’ we met in high school are one example. One can also check that the space of real-valued continuous functions on \left[a,b\right] is a vector space under the usual addition and scalar multiplication. We denote this space by C\left[a,b\right].

In the Euclidean space \mathbb{R}^{n}, we can measure the distance between two vectors \boldsymbol{v}=\left(v_{1},\dots,v_{n}\right) and \boldsymbol{w}=\left(w_{1},\dots,w_{n}\right) by

    \[ \sqrt{\sum_{i=1}^{n}\left(v_{i}-w_{i}\right)^{2}}. \]

Likewise, as C\left[a,b\right] is a vector space, one tries to measure the distance between two vectors, which in this case means the distance between two functions in C\left[a,b\right]. There are some issues to consider. First, how can we define a distance on C\left[a,b\right]? Next, is the distance useful?

There are many ways to define a distance on C\left[a,b\right]. Here we pick some special distance. Consider C\left[a,b\right] with \Norm{\cdot}_{\infty}, where

    \[ \Norm f_{\infty}=\sup_{x\in\left[a,b\right]}\left|f\left(x\right)\right|. \]

We check that \Norm{\cdot}_{\infty} induces a distance on C\left[a,b\right]. Define d:C\left[a,b\right]\times C\left[a,b\right]\rightarrow\mathbb{R} by d\left(f,g\right)=\Norm{f-g}_{\infty}. To say d is a distance, it should satisfy the basic properties that the Euclidean distance has.

  • d is nonnegative. Indeed, d\left(f,g\right)=\Norm{f-g}_{\infty}=\sup_{x\in\left[a,b\right]}\left|f\left(x\right)-g\left(x\right)\right|\ge0.
  • d\left(f,g\right)=0 if and only if f=g. Indeed, if \sup_{x\in\left[a,b\right]}\left|f\left(x\right)-g\left(x\right)\right|=0, then \left|f\left(y\right)-g\left(y\right)\right|\le\Norm{f-g}_{\infty}=0 for all y\in\left[a,b\right]. So f=g. The converse is obvious.
  • d\left(f,g\right)=d\left(g,f\right). It is also easy since \left|f\left(x\right)-g\left(x\right)\right|=\left|g\left(x\right)-f\left(x\right)\right|.
  • d\left(f,g\right)\le d\left(f,h\right)+d\left(h,g\right) for all f,g,h. Indeed, this follows from the triangle inequality for real numbers:

        \[ \left|f\left(x\right)-g\left(x\right)\right|\le\left|f\left(x\right)-h\left(x\right)\right|+\left|h\left(x\right)-g\left(x\right)\right|\le d\left(f,h\right)+d\left(h,g\right). \]

    Taking the supremum over x\in\left[a,b\right],

        \[ d\left(f,g\right)\le d\left(f,h\right)+d\left(h,g\right). \]

This shows d is a distance. So \Norm{\cdot}_{\infty} induces a distance d.

To start a mathematical analysis, we need to ensure that C\left[a,b\right] with this kind of distance behaves like \mathbb{R}^{n}. One of the important properties of \mathbb{R}^{n} is completeness. Here completeness means Cauchy completeness, i.e., for any Cauchy sequence \left\{ \boldsymbol{a}_{k}\right\} in \mathbb{R}^{n}, there exists \boldsymbol{a}\in\mathbb{R}^{n} such that \boldsymbol{a}_{k}\rightarrow\boldsymbol{a}. We want to obtain the same kind of completeness for C\left[a,b\right] with d.

Now recall the definition of uniform convergence. Let \left\{ f_{n}\right\} be a sequence of real-valued functions on a set E and f a real-valued function on E. We say that \left\{ f_{n}\right\} converges uniformly to f on E if for every \varepsilon>0, there exists an N=N\left(\varepsilon\right)\in\mathbb{N} such that

    \[ \left|f_{n}\left(x\right)-f\left(x\right)\right|<\varepsilon\quad\text{for all }x\in E\,\text{ and all }n\ge N. \]

The function f is called the uniform limit of the sequence \left\{ f_{n}\right\}. Note that

    \[ \lim_{n\rightarrow\infty}d\left(f_{n},f\right)=\lim_{n\rightarrow\infty}\sup_{x\in\left[a,b\right]}\left|f_{n}\left(x\right)-f\left(x\right)\right|. \]

So convergence in C\left[a,b\right] is uniform convergence. Now by Theorem 5.46 (Cauchy criterion for uniform convergence) in the textbook, any Cauchy sequence \left\{ f_{n}\right\} in C\left[a,b\right] converges uniformly on \left[a,b\right]. Since each f_{n} is continuous on \left[a,b\right], the uniform limit f is also continuous, i.e., f\in C\left[a,b\right]. This shows C\left[a,b\right] with d is complete.
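As a hedged numerical aside (not from the textbook argument), uniformity matters: f_{n}\left(x\right)=x^{n} on \left[0,1\right] converges at every point, but its pointwise limit is discontinuous and the sup-distance does not tend to 0, so \left\{ f_{n}\right\} does not converge in \left(C\left[0,1\right],d\right).

```python
# Pointwise vs. uniform convergence: f_n(x) = x^n on [0, 1] converges at every
# point, but the sup-distance d(f_n, f) to its pointwise limit f (approximated
# here by a max over a fine grid) stays near 1 instead of tending to 0.
def d(f, g, a=0.0, b=1.0, n_pts=10001):
    xs = [a + (b - a) * i / (n_pts - 1) for i in range(n_pts)]
    return max(abs(f(x) - g(x)) for x in xs)

f_limit = lambda x: 1.0 if x == 1.0 else 0.0   # discontinuous pointwise limit

dists = [d(lambda x, n=n: x ** n, f_limit) for n in (5, 50, 500)]
print(all(dist > 0.9 for dist in dists))  # True: no uniform convergence
```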

Complete spaces have various properties we can exploit in analysis. In the next section, we study one important consequence of completeness.

2. Contraction Mapping Principle

Recall the Problem 1.14 in our textbook.

A sequence \left\{ a_{n}\right\} of real numbers is said to be \emph{contractive} if there exists a constant 0<\theta<1 such that

    \[ \left|a_{n+1}-a_{n}\right|\le\theta\left|a_{n}-a_{n-1}\right|\quad\text{for all }n\ge2. \]

Prove that \left|a_{n+1}-a_{n}\right|\le\theta^{n-1}\left|a_{2}-a_{1}\right| for all n\ge1. Also prove that \left\{ a_{n}\right\} is convergent. Finally, prove that if a=\lim_{n\rightarrow\infty}a_{n}, then

    \[ \left|a_{n}-a\right|\le\frac{\theta^{n-1}}{1-\theta}\left|a_{2}-a_{1}\right|\quad\text{for all }n\ge1. \]

In the proof of this problem, we used the Cauchy criterion to ensure that \left\{ a_{n}\right\} converges. In the previous section, we observed that C\left[a,b\right] with d is complete, so one expects a similar fact to hold there. Indeed, it does. As an application, we study the convergence of recursively defined sequences, which is closely related to fixed points.
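As a hedged numerical illustration of the problem above (the map and constants are my choice), a_{n+1}=\cos a_{n} is contractive on \left[\cos1,1\right] with \theta=\sin1<1 by the mean value theorem, and the a priori error bound can be checked along the iteration:

```python
# A contractive real sequence: a_{n+1} = cos(a_n) with a_1 = 1. On [cos 1, 1]
# we have |d/dx cos x| = |sin x| <= sin(1) < 1, so the sequence is contractive
# with theta = sin(1) and converges to the unique fixed point of cos.
import math

theta = math.sin(1.0)              # contraction constant on [cos 1, 1]
a = [1.0]                          # list index i holds a_{i+1}
for _ in range(60):
    a.append(math.cos(a[-1]))

limit = a[-1]                      # a_61 is already accurate to ~1e-10
n = 20
# a priori bound |a_n - a| <= theta^(n-1) / (1 - theta) * |a_2 - a_1|
bound = theta ** (n - 1) / (1 - theta) * abs(a[1] - a[0])
print(abs(a[n - 1] - limit) <= bound)  # True
```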

Theorem (Contraction mapping principle). Let \Phi:C\left[a,b\right]\rightarrow C\left[a,b\right] be a contraction, i.e., suppose there exists 0\le\theta<1 such that for all f,g\in C\left[a,b\right],

    \[ d\left(\Phi\left(f\right),\Phi\left(g\right)\right)\le\theta d\left(f,g\right). \]

Then there exists a unique function f\in C\left[a,b\right] such that \Phi\left(f\right)=f.

Proof. Uniqueness is easy. If \Phi\left(f\right)=f and \Phi\left(g\right)=g, then

    \[ d\left(f,g\right)=d\left(\Phi\left(f\right),\Phi\left(g\right)\right)\le\theta d\left(f,g\right) \]

which can only happen when d\left(f,g\right)=0. So f=g.

Let f_{0}\in C\left[a,b\right]. Define

    \[ f_{n+1}=\Phi\left(f_{n}\right)\quad\left(n=0,1,2,\dots\right). \]

Then d\left(\Phi\left(f_{n}\right),\Phi\left(f_{n-1}\right)\right)\le\theta d\left(f_{n},f_{n-1}\right) for all n\ge1. Iterating, we obtain

    \[ d\left(f_{n+1},f_{n}\right)=d\left(\Phi\left(f_{n}\right),\Phi\left(f_{n-1}\right)\right)\le\cdots\le\theta^{n}d\left(f_{1},f_{0}\right). \]

So if n<m, then by the triangle inequality and the geometric series,

    \[ d\left(f_{m},f_{n}\right)\le\sum_{k=n}^{m-1}d\left(f_{k+1},f_{k}\right)\le\sum_{k=n}^{m-1}\theta^{k}d\left(f_{1},f_{0}\right)\le\frac{\theta^{n}}{1-\theta}d\left(f_{1},f_{0}\right). \]

For \varepsilon>0, choose N so that \frac{\theta^{N}}{1-\theta}d\left(f_{1},f_{0}\right)<\varepsilon. So n,m>N implies d\left(f_{m},f_{n}\right)<\varepsilon. Hence \left\{ f_{n}\right\} is a Cauchy sequence in C\left[a,b\right]. So by the completeness of C\left[a,b\right], there exists f\in C\left[a,b\right] such that

    \[ \lim_{n\rightarrow\infty}\Norm{f_{n}-f}_{\infty}=0. \]

Now from

    \[ d\left(\Phi\left(f_{n}\right),\Phi\left(f\right)\right)\le\theta d\left(f_{n},f\right), \]

we conclude that

    \[ \Phi\left(f\right)=\lim_{n\rightarrow\infty}\Phi\left(f_{n}\right)=\lim_{n\rightarrow\infty}f_{n+1}=f \]

where the limits are taken in d, i.e., uniformly. This completes the proof.

Remark. The above theorem holds for any complete metric space, which will be studied in Topology I.

3. Existence and uniqueness of ODE

Now we give one application of the contraction mapping principle. We consider the following initial value problem for a first-order ODE: find u\in C^{1}\left(I\right) such that

(1)   \begin{equation*} \begin{cases} \frac{du}{dt}\left(t\right)=F\left(u\left(t\right)\right) & \text{for all }t\in I\\ u\left(t_{0}\right)=u_{0} \end{cases} \end{equation*}

where I\subset\mathbb{R} is an interval containing t_{0} and F is a Lipschitz function. Observe that u\in C\left(I\right) is a solution of (1) if and only if

(2)   \begin{equation*} u\left(t\right)=u_{0}+\int_{t_{0}}^{t}F\left(u\left(s\right)\right)ds\quad\text{for all }t\in I. \end{equation*}

This follows from the fundamental theorem of calculus.

Theorem. Suppose that F:\mathbb{R}\rightarrow\mathbb{R} is a Lipschitz function. Then equation (1) has a unique C^{1}-solution u on I_{\delta}=\left[t_{0}-\delta,t_{0}+\delta\right] for some \delta>0.

Let F be a Lipschitz function with constant K, i.e., there exists a constant K>0 such that

    \[ \left|F\left(x\right)-F\left(y\right)\right|\le K\left|x-y\right| \]

for all x,y\in\mathbb{R}.

Proof. It suffices to prove that there exists u\in C\left(I_{\delta}\right) satisfying (2). Note that, as we saw before, C\left(I_{\delta}\right) is complete with respect to \Norm{\cdot}_{\infty}.

Define \Phi_{u_{0}}:C\left(I_{\delta}\right)\rightarrow C\left(I_{\delta}\right) by

    \[ \Phi_{u_{0}}\left(u\right)\left(t\right)=u_{0}+\int_{t_{0}}^{t}F\left(u\left(s\right)\right)ds. \]

If we show that \Phi_{u_{0}} has a fixed point u, then this u satisfies (2) and hence solves (1).

Observe that

    \begin{align*} d\left(\Phi_{u_{0}}\left(u\right),\Phi_{u_{0}}\left(v\right)\right) & =\sup_{t\in I_{\delta}}\left|\Phi_{u_{0}}\left(u\right)\left(t\right)-\Phi_{u_{0}}\left(v\right)\left(t\right)\right|\\ & =\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left(F\left(u\left(s\right)\right)-F\left(v\left(s\right)\right)\right)ds\right|\\ & \le\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left|F\left(u\left(s\right)\right)-F\left(v\left(s\right)\right)\right|ds\right|\\ & \le K\sup_{t\in I_{\delta}}\left|\int_{t_{0}}^{t}\left|u\left(s\right)-v\left(s\right)\right|ds\right|\\ & \le K\sup_{t\in I_{\delta}}\left|t-t_{0}\right|d\left(u,v\right)\\ & \le K\delta d\left(u,v\right). \end{align*}

Now choose \delta>0 so that K\delta<1. Hence by the contraction mapping principle, there exists a unique fixed point u. This completes the proof.
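As a hedged numerical sketch (the discretization is mine, not part of the proof), the fixed-point iteration u_{n+1}=\Phi_{u_{0}}\left(u_{n}\right) can be carried out on a grid for u'=u, u\left(0\right)=1 on \left[0,\frac{1}{2}\right], where K\delta=\frac{1}{2}<1 and the exact solution is e^{t}:

```python
# Picard iteration u_{n+1}(t) = u0 + integral from t0 to t of F(u_n(s)) ds for
# u' = u, u(0) = 1 on [0, 1/2]. The integral is approximated by the trapezoidal
# rule on a grid; the iterates converge because K * delta = 1/2 < 1.
import math

def picard(F, u0, t0, delta, n_iter=30, n_pts=201):
    ts = [t0 + delta * i / (n_pts - 1) for i in range(n_pts)]
    h = ts[1] - ts[0]
    u = [u0] * n_pts                   # start the iteration at the constant u0
    for _ in range(n_iter):
        fu = [F(v) for v in u]
        new, integral = [u0], 0.0
        for i in range(1, n_pts):
            integral += 0.5 * (fu[i - 1] + fu[i]) * h   # trapezoidal rule
            new.append(u0 + integral)
        u = new
    return ts, u

ts, u = picard(F=lambda v: v, u0=1.0, t0=0.0, delta=0.5)
err = max(abs(ui - math.exp(t)) for t, ui in zip(ts, u))
print(err < 1e-4)  # True: only the small trapezoidal quadrature error remains
```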

Remark. If we try to use the contraction mapping principle on C^{1}\left(I_{\delta}\right), we run into trouble since C^{1}\left(I_{\delta}\right) is not complete under the distance d\left(f,g\right)=\Norm{f-g}_{\infty}. For example, f_{n}\left(x\right)=\left|x\right|^{1+\frac{1}{n}} converges uniformly to f\left(x\right)=\left|x\right| on \left[-1,1\right], and f_{n}\in C^{1}\left(\left[-1,1\right]\right), but f is not differentiable at 0.

Remark. The kth order ODE given by

    \[ \frac{d^{k}u}{dt^{k}}\left(t\right)=F\left(u\left(t\right),\frac{du}{dt}\left(t\right),\dots,\frac{d^{k-1}u}{dt^{k-1}}\left(t\right)\right) \]

can be reduced to the first order ODE. Define \tilde{u}:I\rightarrow\mathbb{R}^{k} by

    \[ \tilde{u}\left(t\right)=\left(u\left(t\right),\frac{du}{dt}\left(t\right),\dots,\frac{d^{k-1}u}{dt^{k-1}}\left(t\right)\right). \]

Then

    \[ \frac{d\tilde{u}}{dt}\left(t\right)=\tilde{F}\left(\tilde{u}\left(t\right)\right) \]

where \tilde{F}:\mathbb{R}^{k}\rightarrow\mathbb{R}^{k} is the function

    \[ \tilde{F}\left(u_{0},\dots,u_{k-1}\right)=\left(u_{1},\dots,u_{k-1},F\left(u_{0},\dots,u_{k-1}\right)\right). \]
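As a hedged sketch of this reduction for k=2 (the example and step size are my choice): for u''=-u, i.e. F\left(u_{0},u_{1}\right)=-u_{0}, the system is \tilde{F}\left(u_{0},u_{1}\right)=\left(u_{1},-u_{0}\right), and a forward Euler march on the system approximates \left(u,u'\right)=\left(\cos t,-\sin t\right) for u\left(0\right)=1, u'\left(0\right)=0:

```python
# Order reduction for u'' = -u: tilde{u} = (u, u') satisfies the first-order
# system tilde{u}' = tilde{F}(tilde{u}) with tilde{F}(u0, u1) = (u1, -u0).
# Forward Euler on the system approximates (cos t, -sin t).
import math

def F_tilde(v):
    u0, u1 = v
    return (u1, -u0)                   # (u', u'') = (u_1, F(u_0, u_1))

def euler(v, t_end, n):
    h = t_end / n
    for _ in range(n):
        dv = F_tilde(v)
        v = (v[0] + h * dv[0], v[1] + h * dv[1])
    return v

u, du = euler((1.0, 0.0), 1.0, 20000)
print(abs(u - math.cos(1.0)) < 1e-3)  # True for this step size
```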

Space-filling curve

Theorem. There exists a continuous curve in \mathbb{R}^{2} that passes through every point of the unit square \left[0,1\right]\times\left[0,1\right].

Proof. Define \phi:\left[0,2\right]\rightarrow\mathbb{R} by

    \[ \phi\left(t\right)=\begin{cases} 0 & \text{if }0\le t\le\frac{1}{3},\quad\text{or if }\frac{5}{3}\le t\le2,\\ 3t-1 & \text{if }\frac{1}{3}\le t\le\frac{2}{3}\\ 1 & \text{if }\frac{2}{3}\le t\le\frac{4}{3}\\ -3t+5 & \text{if }\frac{4}{3}\le t\le\frac{5}{3}. \end{cases} \]

Extend \phi to all of \mathbb{R} by making \phi periodic with period 2.

Define

    \[ f_{1}\left(t\right)=\sum_{n=1}^{\infty}\frac{\phi\left(3^{2n-2}t\right)}{2^{n}},\quad f_{2}\left(t\right)=\sum_{n=1}^{\infty}\frac{\phi\left(3^{2n-1}t\right)}{2^{n}}. \]

By the Weierstrass M-test, both series converge uniformly on \mathbb{R}; hence f_{1} and f_{2} are continuous on \mathbb{R}. Now define f=\left(f_{1},f_{2}\right) and let \Gamma denote the image of the unit interval \left[0,1\right] under f. We show \Gamma=\left[0,1\right]\times\left[0,1\right].

Observe 0\le f_{1}\left(t\right)\le1 and 0\le f_{2}\left(t\right)\le1. Hence \Gamma\subset\left[0,1\right]\times\left[0,1\right]. Let \left(a,b\right)\in\left[0,1\right]\times\left[0,1\right]. Write

    \[ a=\sum_{n=1}^{\infty}\frac{a_{n}}{2^{n}},\quad b=\sum_{n=1}^{\infty}\frac{b_{n}}{2^{n}} \]

with a_{n},b_{n}\in\left\{ 0,1\right\}. Now let

    \[ c=2\sum_{n=1}^{\infty}\frac{c_{n}}{3^{n}},\quad\text{where }c_{2n-1}=a_{n},\quad c_{2n}=b_{n}. \]

Then 0\le c\le1 since 2\sum_{n=1}^{\infty}\frac{1}{3^{n}}=1. We claim \phi\left(3^{k}c\right)=c_{k+1} for each k=0,1,2,\dots. If we can show this, then \phi\left(3^{2n-2}c\right)=c_{2n-1}=a_{n} and \phi\left(3^{2n-1}c\right)=c_{2n}=b_{n}, and this gives f_{1}\left(c\right)=a and f_{2}\left(c\right)=b.

Now write

    \begin{align*} 3^{k}c & =2\sum_{n=1}^{k}c_{n}3^{k-n}+2\sum_{n=k+1}^{\infty}\frac{c_{n}}{3^{n-k}}\\ & =\text{even integer}+d_{k}, \end{align*}

where

    \[ d_{k}=2\sum_{n=1}^{\infty}\frac{c_{n+k}}{3^{n}}. \]

Since \phi has period 2, we have

    \[ \phi\left(3^{k}c\right)=\phi\left(d_{k}\right). \]

If c_{k+1}=0, then 0\le d_{k}\le2\sum_{n=2}^{\infty}3^{-n}=\frac{1}{3} and hence \phi\left(d_{k}\right)=0. So \phi\left(3^{k}c\right)=c_{k+1} in this case. If c_{k+1}=1, then \frac{2}{3}\le d_{k}\le1 and hence \phi\left(d_{k}\right)=1. Therefore, \phi\left(3^{k}c\right)=c_{k+1}. This proves f_{1}\left(c\right)=a, f_{2}\left(c\right)=b. So \Gamma=\left[0,1\right]\times\left[0,1\right].
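As a hedged numerical sketch of the construction (the truncation level N and the test point are my choice), one can evaluate \phi and truncated sums of f_{1},f_{2} in exact rational arithmetic, avoiding the precision loss of 3^{k}t in floating point, and check that the point c built from \left(a,b\right)=\left(\frac{1}{2},\frac{1}{4}\right) maps back to \left(a,b\right):

```python
# The tent function phi (period 2) and truncations of the series f1, f2 from
# the proof, in exact rational arithmetic via fractions.Fraction.
from fractions import Fraction

def phi(t):
    t = t % 2                      # phi has period 2
    if t <= Fraction(1, 3) or t >= Fraction(5, 3):
        return Fraction(0)
    if t <= Fraction(2, 3):
        return 3 * t - 1
    if t <= Fraction(4, 3):
        return Fraction(1)
    return -3 * t + 5

def f1(t, N=20):
    return sum(phi(3 ** (2 * n - 2) * t) / 2 ** n for n in range(1, N + 1))

def f2(t, N=20):
    return sum(phi(3 ** (2 * n - 1) * t) / 2 ** n for n in range(1, N + 1))

# target (a, b) = (1/2, 1/4): binary digits a_1 = 1 and b_2 = 1 (rest zero),
# interleaved into base-3 digits c_1 = 1, c_2 = 0, c_3 = 0, c_4 = 1, rest 0
c = 2 * (Fraction(1, 3) + Fraction(1, 3 ** 4))
print(f1(c), f2(c))  # 1/2 1/4
```

Because \left(a,b\right) has finitely many binary digits here, the truncated sums are exact, so the check succeeds with equality rather than approximately.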

Meaning of this theorem. Observe that f:\mathbb{R}\rightarrow\left[0,1\right]\times\left[0,1\right] is continuous, \mathbb{R} has dimension 1, and \left[0,1\right]\times\left[0,1\right] has dimension 2. So the theorem shows that a continuous map need not preserve dimension. Moreover, the curve we constructed is nowhere differentiable; this was proved by Alsina [1].

Such curves may seem like mere curiosities, but they have quite a lot of applications, for instance in probability theory and topology. They even appear in industry, e.g., in the geometric indexing behind Google Maps [3].

References

  1. J. Alsina, The Peano curve of Schoenberg is nowhere differentiable, Journal of Approximation Theory, Vol. 33 (1), 28–42.
  2. T. Apostol, Mathematical Analysis.
  3. C. S. Perone, Google’s S2, geometry on the sphere, cells and Hilbert curve.