MATH 131BH: Analysis (Honors)
Spring 2025, I. Kim
We follow UCLA’s 131BH curriculum; the syllabus is linked here. Please let me know at kennyguo@ucla.edu if you spot any errors. Thanks!
Homework Solutions: (in progress)
- Homework 1 (compactness)
- Homework 2 (continuity, connectedness)
- Homework 3 (differentiability)
- Homework 4 (integrability)
- Homework 5 (integrability)
- Midterm :)
- Homework 6 (sequences of functions, uniform convergence)
- Homework 7 (Arzela-Ascoli, Stone-Weierstrass)
- Homework 8 (multivariable differentiability)
- Show that in the metric space \((l^2, d_2)\), the set \(E:=\{x=(x_n) : \sum_{n\geq1} n |x_n|^2 \leq 1\}\) is compact.
Proof.
Let \((x_k)_{k \geq 1} \subseteq E\). For all \(k \geq 1\) we have \(\sum_{n\geq 1} n|x_k(n)|^2 \leq 1\); in particular, for all \(n \geq 1\), \(|x_k(n)| \leq 1/\sqrt{n}\).
So, for fixed \(n \geq 1\), the sequence \((x_k(n))_{k \geq 1}\) is bounded, and by Bolzano-Weierstrass, it has a convergent subsequence. Doing this for each \(n\) in turn and applying the diagonal argument, we construct a subsequence, call it \((x_{k_j})_{j \geq 1}\), such that for all \(n \geq 1\), \(\lim_{j \rightarrow \infty} x_{k_j}(n)\) exists; call it \(x_\infty(n)\), and let \(x_\infty = (x_\infty(n))_{n\geq 1}\). It remains to show a) \(x_\infty \in E\) and b) \(x_{k_j} \rightarrow_{j \rightarrow \infty} x_\infty\).
Note that: \[ \sum_{n\geq 1} n |x_\infty(n)|^2 = \sum_{n\geq 1} n \lim_{j\rightarrow \infty} |x_{k_j}(n)|^2 = \lim_{N\rightarrow \infty} \sum_{n=1}^N n \lim_{j\rightarrow \infty} |x_{k_j}(n)|^2 = \lim_{N\rightarrow \infty} \lim_{j\rightarrow \infty} \underbrace{\sum_{n=1}^N n |x_{k_j}(n)|^2}_{\leq 1 \text{ since } x_{k_j} \in E \text{ for all } j} \leq 1 \] and thus, \(x_\infty \in E\).
Let \(\varepsilon > 0\). For any \(N \geq 1\), note we have the inequality: \[ \sum_{n=N}^\infty |x_{k_j}(n) - x_\infty(n)|^2 \leq 2 \sum_{n=N}^\infty |x_{k_j}(n)|^2 + 2 \sum_{n=N}^\infty |x_{\infty}(n)|^2 \] Furthermore, for all \(x \in E\), we have: \[ N \sum_{n=N}^\infty |x(n)|^2 \leq \sum_{n=N}^\infty n|x(n)|^2 \leq 1 \implies \sum_{n=N}^\infty |x(n)|^2 \leq 1/N \] and so for all \(N\geq 1\), \(\sum_{n=N}^\infty |x_{k_j}(n) - x_\infty(n)|^2 \leq 2(2/N) = 4/N\).
With this, fix an \(N_\varepsilon\) s.t. \(4/N_\varepsilon < \varepsilon^2/2\), so that \(\sum_{n=N_\varepsilon}^\infty |x_{k_j}(n) - x_\infty(n)|^2 \leq \varepsilon^2/2\).
Now, since for \(n = 1, \ldots, N_\varepsilon - 1\) we have \(x_{k_j}(n) \rightarrow_{j\rightarrow \infty} x_\infty(n)\), we can choose a \(j_\varepsilon\) such that for all \(j \geq j_\varepsilon\), \(\sum_{n=1}^{N_\varepsilon - 1} |x_{k_j}(n) - x_\infty(n)|^2 < \varepsilon^2/2\). Thus, for \(j \geq j_\varepsilon\), we see \[ \sum_{n=1}^\infty |x_{k_j}(n) - x_\infty(n)|^2 = \sum_{n=1}^{N_\varepsilon -1} |x_{k_j}(n) - x_\infty(n)|^2 + \sum_{n=N_\varepsilon}^\infty |x_{k_j}(n) - x_\infty(n)|^2 < \varepsilon^2/2 + \varepsilon^2/2 = \varepsilon^2 \] and so \(d_2(x_{k_j}, x_\infty) < \varepsilon\). Thus, \(x_{k_j} \rightarrow_{j \rightarrow \infty} x_\infty\), so \(E\) is sequentially compact, and thus, compact. \(\square\)
Remark. This is a proof that doesn’t rely explicitly on Heine-Borel, as we didn’t have this theorem at our disposal.
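The uniform tail estimate \(\sum_{n\geq N} |x(n)|^2 \leq 1/N\) for \(x \in E\) is the heart of the proof, and it is easy to check numerically. Below is a quick sanity check (an illustration, not part of the proof) using the sample point \(x_n = c\, n^{-3/2}\), with \(c\) my choice of normalizing constant so that \(\sum_n n|x_n|^2 \leq 1\):

```python
import math

# Numerical illustration of the key tail bound: for any x in E,
# i.e. sum_n n*|x_n|^2 <= 1, we get sum_{n>=N} |x_n|^2 <= 1/N.
M = 10**5
c = math.sqrt(6) / math.pi                 # since sum 1/n^2 = pi^2/6
x = [c / n**1.5 for n in range(1, M + 1)]  # a sample point of E

# membership in E: sum_n n * x_n^2 = c^2 * sum_n 1/n^2 <= 1
assert sum(n * x[n - 1]**2 for n in range(1, M + 1)) <= 1
# the uniform tail bound used in part b) of the proof:
for N in (10, 100, 1000):
    assert sum(v**2 for v in x[N - 1:]) <= 1 / N
```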
- Show that if \((K, d)\) is sequentially compact, then \((K,d)\) has a countable base.
- Using (a), show that every open cover of \(K\) has a countable subcover.
Proof.
- One can show \(K\) is totally bounded, that is, for all \(r>0\), there exist finitely many balls of radius \(r\) covering \(K\), i.e., there exist \(n \in \mathbb{N}\) and \(x_1, \ldots, x_n \in K\) such that \(K \subseteq \bigcup_{i=1}^n B_r(x_i)\). To construct our countable base, for each \(k \in \mathbb{N}\), fix a finite cover of \(K\) by balls \(\{B_{1/k}(x_i)\}_{i=1}^{n_k}\) (the centers depend on \(k\)), and we claim \(\bigcup_{k=1}^\infty \{B_{1/k}(x_i)\}_{i=1}^{n_k}\) is a countable base for \(K\) (a countable union of finite sets is countable).
Let \(x_0 \in K\) and \(x_0 \in G\) open. Thus, there exists some \(N \in \mathbb{N}\) such that \(B_{1/N}(x_0) \subseteq G\). By total boundedness, we chose centers \(x_1, \ldots, x_{n_{2N}}\) such that \(x_0 \in \bigcup_{i=1}^{n_{2N}} B_{1/(2N)}(x_i)\). If \(x_0 \in \{x_1, \ldots, x_{n_{2N}}\}\), we are done, since \(B_{1/(2N)}(x_0)\) is a set in our base and \(B_{1/(2N)}(x_0) \subseteq B_{1/N}(x_0) \subseteq G\). Otherwise, there is an \(x_i\) such that \(x_0 \in B_{1/(2N)}(x_i)\). Let \(y \in B_{1/(2N)}(x_i)\). By the triangle inequality, we have: \[ d(y, x_0) \leq d(y, x_i) + d(x_i, x_0) < 1/(2N) + 1/(2N) = 1/N \] and so \(y \in B_{1/N}(x_0)\). Hence \(x_0 \in B_{1/(2N)}(x_i) \subseteq B_{1/N}(x_0) \subseteq G\), so our set \(\bigcup_{k=1}^\infty \{B_{1/k}(x_i)\}_{i=1}^{n_k}\) is in fact a (countable) base.
- Let \(\{G_\alpha\}\) be an open cover for \(K\). Let \(x \in K\), so \(x \in G_\alpha\) open for some \(\alpha\). Using the countable base of balls in part (a), we have that there exists some \(n_x \in \mathbb{N}, x_i\) such that \(x \in B_{1/n_x}(x_i) \subseteq G_\alpha\). Since there are only countably many balls in this base, we can select at most countably many \(G_\alpha\) from \(\{G_\alpha\}\), so that for all \(x \in K\), \(x\) is still contained in one of them, and so we extract our desired countable subcover. \(\square\)
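The containment chain \(x_0 \in B_{1/(2N)}(x_i) \subseteq B_{1/N}(x_0) \subseteq G\) from part (a) can be checked concretely. Below is a small sketch for \(K = [0,1]\) with the \(1/k\)-nets given by equally spaced centers (my choice of net, purely illustrative):

```python
# A finite 1/k-net for K = [0,1]: balls of radius 1/k around the centers
# j/k cover [0,1], since every point is within 1/(2k) of some center.
def net(k):
    return [j / k for j in range(k + 1)]

# Replicate the argument from the proof: given x0 and N, some ball of
# radius 1/(2N) from the net contains x0 and sits inside B_{1/N}(x0).
x0, N = 0.3141, 50
centers = net(2 * N)                       # centers of the 1/(2N)-balls
xi = min(centers, key=lambda c: abs(c - x0))
assert abs(x0 - xi) < 1 / (2 * N)          # x0 lies in B_{1/(2N)}(xi)
# triangle inequality: every y in B_{1/(2N)}(xi) is within 1/N of x0
assert abs(x0 - xi) + 1 / (2 * N) <= 1 / N + 1e-12
```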
- Show that \(K\) is sequentially compact if and only if every infinite subset of \(K\) has a limit/accumulation point in \(K\).
Proof.
\((\implies)\) Assume \(K\) is sequentially compact, and let \(A \subseteq K\) be infinite (if \(K\) has no infinite subset, the claim is vacuously true). We can extract a sequence \(\{a_n\}_{n\geq 1} \subseteq A\) of pairwise distinct terms. By sequential compactness, there exists a convergent subsequence, call it \(\{a_{k_n}\}_{n\geq 1} \rightarrow_{n\rightarrow \infty} a \in K\). Finally, note that \(a \in A'\). Indeed, let \(r > 0\); by the limit definition, there is an \(n_r \in \mathbb{N}\) such that for all \(n \geq n_r\), \(d(a_{k_n}, a) < r\). Since the terms are distinct, at most one of them equals \(a\), so for some \(n \geq n_r\) we have \(a_{k_n} \in B_r(a) \cap A \backslash \{a\}\); in particular this set is nonempty, and \(a\) is an accumulation point of \(A\) in \(K\).
\((\impliedby)\) Assume for all infinite \(A \subseteq K\), \(A' \cap K \neq \emptyset\). Let \(\{a_n\}_{n\geq 1} \subseteq K\). If \(\{a_n\}_{n\geq 1}\) infinitely repeats a term, then this gives a (constant) convergent subsequence. Thus, assume it does not infinitely repeat any term, so the set \(A := \{a_n : n \geq 1\}\) is infinite. By assumption, there exists an \(a \in A' \cap K\).
From this, we construct our convergent subsequence. Note first that every ball around \(a\) contains infinitely many points of \(A\): if \(B_r(a) \cap A \backslash \{a\}\) were finite and nonempty, shrinking \(r\) below the minimum distance to \(a\) would make it empty, contradicting \(a \in A'\). So \(B_1(a) \cap A \backslash \{a\} \neq \emptyset\); extract \(a_{k_1}\). Then for \(n \geq 2\), extract \(a_{k_n}\) with \(k_n > k_{n-1}\) from \(B_r(a) \cap A \backslash \{a\}\), where \(r = \min\{1/n, d(a, a_{k_{n-1}})\}\) (possible since this ball contains infinitely many terms of the sequence). It follows that \(\{a_{k_n}\}_{n\geq 1}\) is a subsequence converging to \(a\), and thus, \(K\) is sequentially compact. \(\square\)
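The extraction step above can be simulated. The sketch below uses \(a_n = (-1)^n(1 + 1/n)\) (my example), which does not converge but has \(a = 1\) as an accumulation point of its range; picking later and later terms from shrinking balls around \(a\), in a halving-radius variant of the proof's choice of \(r\), produces a convergent subsequence:

```python
# a_n = (-1)^n * (1 + 1/n): divergent, but a = 1 is an accumulation
# point of its range; extract a subsequence converging to a.
a_seq = [(-1)**n * (1 + 1 / n) for n in range(1, 10001)]
a = 1.0

sub, last_idx, r = [], -1, 1.0
while True:
    # find a strictly later term inside B_r(a)
    idx = next((i for i in range(last_idx + 1, len(a_seq))
                if abs(a_seq[i] - a) < r), None)
    if idx is None:          # no later term this close (finite sample)
        break
    sub.append(a_seq[idx])
    last_idx = idx
    r = min(r / 2, abs(a_seq[idx] - a))   # shrink the radius each step

assert len(sub) >= 5
assert abs(sub[-1] - a) < 1e-3            # the subsequence approaches a
```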
- Let \(K, E \subseteq X\), where \(K\) is compact and \(E\) is closed in \((X,d)\).
- If \(K \cap E = \emptyset\), then show that there is a constant \(c > 0\) such that \[ d(x, E) := \inf\{d(x,y) : y \in E\} \geq c \text{ for all } x \in K.\]
- Is (a) true if \(K\) is only closed? Give a counterexample.
Proof.
- Suppose for contradiction that for all \(c > 0\), there exists \(x \in K\) such that \(d(x,E) = \inf\{d(x,y) : y \in E\} < c\). For each \(n \geq 1\), take \(c = 1/n\): there is an \(x_n \in K\) such that \(d(x_n, E) < 1/n\). In particular, \(1/n\) is not a lower bound of \(\{d(x_n, y) : y \in E\}\), so there exists a \(y_n \in E\) such that \(d(x_n, y_n) < 1/n\). With this, we have constructed arbitrarily close sequences \(\{x_n\}_{n\geq 1} \subseteq K\) and \(\{y_n\}_{n\geq 1} \subseteq E\).
Since \(K\) is sequentially compact, there exists a convergent subsequence \(\{x_{k_n}\}_{n\geq 1} \rightarrow x_0 \in K\). Since \(d(x_{k_n}, y_{k_n}) < 1/k_n \rightarrow 0\), the triangle inequality gives \(\{y_{k_n}\}_{n\geq 1} \rightarrow x_0\) as well, so \(x_0 \in \bar{E}\). Since \(E\) is closed, \(x_0 \in E\). This contradicts \(K \cap E = \emptyset\). \(\square\)
- No; compactness is necessary. Our proof in (a) relies on extracting a convergent subsequence from the arbitrarily close sequences in \(K\) and \(E\), and without compactness such a subsequence is not guaranteed (the sequences may, say, escape to infinity).
Take \((X,d) = (\mathbb{R}, |\cdot|)\) and consider \(K = \{n+1/n : n\geq 2\}\) and \(E = \{n : n \geq 2\}\). These sets are indeed closed, but neither is compact (consider the open cover of \(\varepsilon = 1/2\) balls around each point, which cannot be reduced to a finite subcover). Finally, note \(K \cap E = \emptyset\).
Consider \(\{x_n\}_{n \geq 2} \subseteq K\), \(x_n = n + 1/n\), and \(\{y_n\}_{n \geq 2} \subseteq E\), \(y_n = n\) (note these are arbitrarily close, with no convergent subsequence). Thus, for all \(c > 0\), there exists an \(n\) such that \(d(x_n, E) \leq |x_n - y_n| = 1/n < c\), which gives the desired negation.
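The counterexample is simple enough to verify numerically; the gaps \(|x_n - y_n| = 1/n\) are positive (the sets are disjoint) but decrease to \(0\), so no uniform \(c > 0\) exists:

```python
# The counterexample: x_n = n + 1/n in K and y_n = n in E are
# arbitrarily close, so no uniform gap c > 0 can separate K from E.
gaps = [abs((n + 1 / n) - n) for n in range(2, 10001)]

assert all(g > 0 for g in gaps)            # K and E are disjoint
assert min(gaps) < 1e-3                    # gaps get arbitrarily small
assert gaps == sorted(gaps, reverse=True)  # the gap 1/n decreases to 0
```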
- Let \(A\) be a subset of a complete metric space. Assume that for all \(\varepsilon > 0\), there exists a compact subset \(A_\varepsilon\) so that for any \(x \in A\), \(d(x, A_\varepsilon) < \varepsilon\). Show that \(\bar{A}\) is compact.
Proof.
We show sequential compactness. Let \((x_n) \subseteq \bar{A}\).
We first extract a sequence from \(A\) that is arbitrarily close to \((x_n)\), so that any convergent subsequence of one forces the corresponding subsequence of the other to the same limit. For all \(n \geq 1\), \(x_n \in \bar{A}\), so there exists a sequence in \(A\) converging to \(x_n\); in particular, there is an \(a_n \in A\) such that \(d(x_n, a_n) < 1/n\). This gives \((a_n) \subseteq A\), and it suffices to find a convergent subsequence of \((a_n)\).
Consider \(A_1\): for all \(a \in A\), \(d(a, A_1) < 1\). Thus, for all \(n \geq 1\), \(\inf\{d(a_n, y) : y \in A_1\} < 1\), so \(1\) is not a lower bound, and there exists \(y \in A_1\) such that \(d(a_n, y) < 1\). Consider the open cover of \(A_1\) given by \(\{B_1(y)\}_{y \in A_1}\); by compactness of \(A_1\), it admits a finite subcover, say \(\{B_1(y_i)\}_{i=1}^m\). Doubling the radius, consider \(\{B_{2}(y_i)\}_{i=1}^m\). We claim that for all \(n \geq 1\), \(a_n \in \bigcup_{i=1}^m B_{2}(y_i)\). This follows from the triangle inequality: there is a \(y \in A_1\) with \(d(a_n, y) < 1\), and this \(y\) lies in some \(B_1(y_i)\), so \(a_n \in B_2(y_i)\).
In particular, since there are finitely many \(2\)-radius balls, by the pigeonhole principle there exists a subsequence \((a_{k_n})\) contained in a single \(B_2(y_i)\). By the triangle inequality, we have for all \(m, n \geq 1\), \(d(a_{k_m}, a_{k_n}) < 4\).
We continue this process iteratively for all \(N \geq 1\), considering \(A_{1/N}\), extracting a subsequence from the previous subsequence, and by the diagonal argument, we find a subsequence, rename it \((a_{k_n})\), such that for all \(m, n \geq N\), \(d(a_{k_m}, a_{k_n}) < 4/N\). In particular, this sequence is Cauchy, so by completeness, it converges. Since the sequence is also in \(A\), we get its limit is in \(\bar{A}\). Coming back to our original sequence, we can take \((x_{k_n})\), which must converge to the same limit, and thus, we get that \(\bar{A}\) is (sequentially) compact. \(\square\)
Remark. One can again use Heine-Borel. Completeness is given, and total-boundedness can be shown through a \(\varepsilon/3\) type argument.
- (Walter Rudin, pg. 44, #13) Construct a compact set \(A\) of \(\mathbb{R}\) such that \(A'\) is countable.
Proof.
One such set is \(A = \{0\} \cup \{1/n + 1/k : n\geq 1, k \geq n\}\). One can find the confirmation of this construction here.
- (Walter Rudin, pg. 44, #16) Consider the metric space \((\mathbb{Q}, |\cdot|)\). Let \(E = \{x \in \mathbb{Q} : 2 < x^2 < 3 \}\). Show \(E\) is closed and bounded, but not compact. Determine if \(E\) is open in \(\mathbb{Q}\).
Proof.
See here.
- (Walter Rudin, pg. 100, #18) Every rational \(x\) can be written in the form \(x = m/n\), where \(n>0\) and \(m,n\) are relatively prime. When \(x=0\), we take \(n=1\). Consider the function \(f\) defined on \(\mathbb{R}\) by: \[ f(x) = \begin{cases} 0 & \text{if } x \text{ irrational} \\ 1/n & \text{if } x = m/n \end{cases} \] Prove that \(f\) is continuous at every irrational point, and that \(f\) has a simple discontinuity at every rational point.
Proof.
- Suppose \(f: E \rightarrow Y\) is uniformly continuous, where \(E \subseteq \mathbb{R}^k\) and \(Y\) is a metric space.
- If \(E\) is bounded in \(\mathbb{R}^k\), prove that \(f(E)\) is bounded in \(Y\).
- Is the statement true if \(\mathbb{R}^k\) is replaced by an arbitrary metric space \((X,d)\)?
Proof.
First observe that \(E\) bounded \(\implies\) \(\bar{E}\) is closed and bounded in \(\mathbb{R}^k\), so \(\bar{E}\) is compact. Thus \(\bar{E}\) is totally bounded, and hence \(E\) itself is totally bounded (in \(\mathbb{R}^k\)) as well. Now fix \(\varepsilon = 1\). By uniform continuity of \(f\), we get a \(\delta > 0\) such that for all \(p,q \in E\), if \(d_E(p,q) < \delta\), then \(d_Y(f(p), f(q)) < 1\) (1). By total boundedness, we can cover \(E\) with \(\{B_\delta(x_i)\}_{i=1}^n\), where \(n \in \mathbb{N}\), \(x_i \in E\). We claim \(\{B_1(f(x_i))\}_{i=1}^n\) is a finite cover of \(f(E)\), which shows it is bounded. Indeed, let \(y \in f(E)\), so \(f^{-1}(y) \subseteq E\) is nonempty. Any \(x \in f^{-1}(y)\) lies in some \(B_\delta(x_i)\), so \(d_E(x, x_i) < \delta\). By (1), we get \(d_Y(f(x), f(x_i)) < 1\), and since \(f(x) = y\), we have \(y \in B_1(f(x_i)) \subseteq \bigcup_{i=1}^n B_1(f(x_i))\). \(\square\)
- No, and we exploit the fact that boundedness need not imply total boundedness in metric spaces other than \(\mathbb{R}^k\). Consider \(f:(\mathbb{R}, d_0) \rightarrow (\mathbb{R}, |\cdot|)\), where \(d_0\) is the discrete metric, and \(f(x) = x\). Set \(E = \mathbb{R}\). We have that \(E\) is bounded in \((\mathbb{R}, d_0)\) (take \(M = 1\)) and \(f\) is uniformly continuous (indeed, for any \(\varepsilon\), one can simply choose \(\delta = 1/2\), which forces \(p = q\)). But clearly, \(f(E) = \mathbb{R}\) is not bounded in \((\mathbb{R}, |\cdot|)\).
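The discrete-metric counterexample can be made concrete; the sketch below checks that \(\delta = 1/2\) forces \(p = q\) (so any \(\varepsilon\) works), while the \(d_0\)-diameter of \(E = \mathbb{R}\) stays \(\leq 1\) even though the image is unbounded in the usual metric:

```python
# The discrete metric: points are either identical or at distance 1.
def d0(p, q):
    return 0 if p == q else 1

f = lambda x: x       # the identity map from (R, d0) to (R, |.|)
delta = 0.5

pairs = [(0.0, 0.0), (1.0, 2.0), (-5.0, 3.7), (10.0**6, 10.0**6)]
for p, q in pairs:
    if d0(p, q) < delta:                 # only possible when p == q ...
        assert abs(f(p) - f(q)) == 0     # ... so any epsilon > 0 works
# E = R is d0-bounded (diameter <= 1), yet f(E) = R is |.|-unbounded:
assert max(d0(p, q) for p, q in pairs) <= 1
```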
- Show that if \(E\) is open and connected in \(\mathbb{R}^k\), then \(E\) is pathwise connected in \(\mathbb{R}^k\).
Proof.
Let \(x \in E\). Define \(A = \{a \in E : \text{there exists a path from $x$ to $a$}\}\). We show \(A\) is “clopen” (closed and open) and nonempty, and by connectedness, \(A = E\), showing path connectedness.
(Nonempty) \(A\) is nonempty as \(x \in A\).
(Open) Let \(a \in A\), so there is a path from \(x\) to \(a\). Since \(E\) open, there is also an \(r > 0\) such that \(B_r(a) \subseteq E\) (we show it also \(\subseteq A\)). Let \(z \in B_r(a)\). We can define a path from \(a\) to \(z\) by considering \(g:[0,1] \rightarrow E\) given by \(g(t) = \vec{a} + t(\vec{z} - \vec{a})\) (i.e. the linear interpolation, which is continuous). Then, stitching this together with our path from \(x\) to \(a\) gives us a path from \(x\) to \(z\), and thus, \(z \in A\). So \(B_r(a) \subseteq A\), and \(A\) is open.
(Closed) Consider the complement \(A^C = B = \{b \in E : \text{there does not exist a path from $x$ to $b$}\}\). Let \(b \in B\); since \(B \subseteq E\) and \(E\) is open, there is an \(r > 0\) such that \(B_r(b) \subseteq E\). Let \(z \in B_r(b)\). By the same linear-interpolation argument as before, there is a path from \(b\) to \(z\); but if there were to exist a path from \(x\) to \(z\), then we would also have a path from \(x\) to \(b\), contradicting \(b \in B\). Thus, \(z\) is also in \(B\), and so \(B = A^C\) is open, so \(A\) is also closed.
We conclude \(A = E\), and \(E\) is path-connected. \(\square\)
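The "stitching" used in the open step can be written down explicitly. If \(\gamma:[0,1] \rightarrow E\) is a path from \(x\) to \(a\), the concatenated path from \(x\) to \(z\) is, for instance:

```latex
% Explicit concatenation of a path gamma (from x to a) with the segment
% from a to z; both formulas agree (both equal a) at t = 1/2, so h is
% continuous by the pasting lemma.
h(t) =
\begin{cases}
  \gamma(2t), & 0 \le t \le \tfrac{1}{2},\\[2pt]
  \vec{a} + (2t-1)\,(\vec{z} - \vec{a}), & \tfrac{1}{2} \le t \le 1,
\end{cases}
\qquad h(0) = x, \quad h(1) = z.
```

Both branches are continuous and agree at \(t = 1/2\), which is exactly why the stitched map is itself a path.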
- Let \(f: [a,b] \rightarrow \mathbb{R}\) be continuous. Suppose that \(f\) has a local maximum at \(x_1\) and \(x_2\), with \(x_1 < x_2\). Show that there must be a third point between \(x_1\) and \(x_2\) where \(f\) has a local minimum.
Proof.
First restrict \(f\) to the compact interval \([x_1, x_2]\). By the extreme value theorem, \(f\) attains its minimum at some \(x_3 \in [x_1, x_2]\), i.e., \(f(x_3) = \min\{ f(x) : x \in [x_1, x_2]\}\). It remains to rule out the endpoints. Since \(x_1\) is a local max, there exists an \(\varepsilon >0\) such that if \(y \in [a,b]\) and \(| y - x_1| < \varepsilon\), then \(f(y) \leq f(x_1)\). Shrink \(\varepsilon\) if necessary so that \(x_1+\varepsilon < x_2\). We have two cases:
Case 1: for all such \(y > x_1\), \(f(y) = f(x_1)\) (i.e. \(f\) is constant on this small interval). Then \(f\) is constant on \([x_1, x_1 + \varepsilon)\), and namely, take \(x^\star = x_1 + \varepsilon/2 (< x_2)\), and thus, for all \(y\) such that \(|y - x^\star| < \varepsilon/2\), \(f(y) \geq f(x^\star)\), and thus, it is a local min in \((x_1, x_2)\).
Case 2: there exists \(y > x_1\) such that \(f(y) < f(x_1)\). Namely, \(x_1\) cannot be a min on \([x_1, x_3]\), so \(x_1 \neq x_3\).
By a symmetric argument at \(x_2\) (with a constant stretch to its left in case 1), if case 1 holds at either endpoint we are done; otherwise case 2 holds at both, so \(x_3\) equals neither \(x_1\) nor \(x_2\), and \(x_3 \in (x_1, x_2)\). Taking \(\varepsilon = \min\{|x_1 - x_3|, |x_2 - x_3|\}\) then confirms \(x_3\) is a local min. \(\square\)
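A concrete instance of the claim (my example, not from the text): \(f(x) = \sin(3\pi x)\) on \([0,1]\) has local maxima at \(x_1 = 1/6\) and \(x_2 = 5/6\), and a grid search confirms the minimum over \([x_1, x_2]\) is attained strictly inside, at \(x_3 = 1/2\):

```python
import math

# f(x) = sin(3*pi*x): local maxima at x1 = 1/6 and x2 = 5/6 (f = 1),
# and the minimum over [x1, x2] is attained at an interior point.
f = lambda x: math.sin(3 * math.pi * x)
x1, x2 = 1 / 6, 5 / 6

grid = [x1 + (x2 - x1) * i / 10000 for i in range(10001)]
x3 = min(grid, key=f)            # discrete minimizer over [x1, x2]

assert x1 < x3 < x2              # the minimizer is an interior point
assert abs(x3 - 0.5) < 1e-3      # here it is x3 = 1/2, where f = -1
```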
- Let \(f : (0,1) \rightarrow \mathbb{R}\) be differentiable and let \(c \in (0,1)\). Suppose that \(\lim_{x \rightarrow c} f'(x)\) exists and is finite. Show that this limit must be equal to \(f'(c)\).
Proof.
Suppose for contradiction that \(\lim_{x \rightarrow c} f'(x) = L \neq f'(c)\). Assume first that \(L < f'(c)\) (a symmetric argument handles \(L > f'(c)\)). Select \(\varepsilon = \frac{f'(c) - L}{2} > 0\). By the limit definition, there exists a \(\delta >0\) such that for all \(x \in (0,1)\), if \(0 < |x-c| < \delta\), then \(|f'(x) - L| < \frac{f'(c) - L}{2} \implies f'(x) < \frac{f'(c) + L}{2}\). Let \(x_0 < c\) with \(|x_0 - c| < \delta\). Thus, for all \(x \in [x_0, c)\), \(f'(x) < \frac{f'(c) + L}{2} < f'(c)\) (1). Now pick \(y \in (\frac{f'(c) + L}{2}, f'(c))\); in particular, \(y \in (f'(x_0), f'(c))\). Since derivatives have the intermediate value property (Darboux's theorem), there exists an \(x \in (x_0, c)\) such that \(f'(x) = y\), but this contradicts (1). \(\square\)
- Show that the function \[ f(x) = \sum_{n=0}^\infty \frac{1}{2^n} \sin (3^nx)\] is a) continuous in \(\mathbb{R}\), and b) not differentiable anywhere in \(\mathbb{R}\).
Proof.
Let \(c \in \mathbb{R}\). Let \((x_k) \subseteq \mathbb{R}\) such that \(x_k \rightarrow c\). Define the partial sums \(f_N(x) = \sum_{n=0}^N \frac{1}{2^n} \sin (3^nx)\). Since \(\sin\) is continuous, and linear combinations of continuous functions are continuous, we have \(f_N\) continuous for all \(N\). (Remark: If one has the tools of functional analysis, one can prove \(f_N\) is Cauchy in \((C(\mathbb{R}), d_\infty)\), and thus uniformly converges to \(f\), so \(f\) is continuous.)
For now, let \(\varepsilon > 0\). First note: \[ |f(x) - f_N(x)| = \Big|\sum_{n=N+1}^\infty \frac{1}{2^n} \underbrace{\sin (3^nx)}_{|\cdot| \leq 1}\Big| \leq \sum_{n=N+1}^\infty \frac{1}{2^n} = \frac{1}{2^N} \] uniformly in \(x\), so we may fix an \(N \geq 1\) such that \(\sum_{n=N+1}^\infty \frac{1}{2^n} < \varepsilon/3\). Also, since \(f_N\) is continuous, we have \(f_N(x_k) \rightarrow_{k \rightarrow \infty} f_N(c)\), so there exists some \(k_\varepsilon \in \mathbb{N}\) such that for all \(k \geq k_\varepsilon\), \(|f_N(x_k) - f_N(c)| < \varepsilon/3\). By the triangle inequality, we see for all \(k \geq k_\varepsilon\), \[ |f(x_k) - f(c)| \leq \underbrace{|f(x_k) - f_N(x_k)|}_{\text{tail bound}} + \underbrace{|f_N(x_k) - f_N(c)|}_{\text{continuity}} + \underbrace{|f_N(c) - f(c)|}_{\text{tail bound}} < \varepsilon \] and so \(f(x_k) \rightarrow_{k \rightarrow \infty} f(c)\), showing continuity of \(f\) at any \(c \in \mathbb{R}\). \(\square\)
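The uniform tail bound \(|f(x) - f_N(x)| \leq 2^{-N}\) is easy to check numerically, comparing a deep partial sum against \(f_N\) at a spread of points:

```python
import math

# Uniform tail bound for the partial sums of f(x) = sum sin(3^n x)/2^n:
# |f(x) - f_N(x)| <= sum_{n > N} 2^{-n} = 2^{-N}, independent of x.
def f_N(x, N):
    return sum(math.sin(3**n * x) / 2**n for n in range(N + 1))

xs = [k / 7 for k in range(-20, 21)]   # sample points across the line
N, M = 10, 40                          # f_M stands in for the full sum
for x in xs:
    assert abs(f_N(x, M) - f_N(x, N)) <= 2**(-N)
```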
- This one makes me sad. I’ll see if I have the motivation to write up a proper solution to this one sometime. \(\square\)
- Suppose \(f: [0,1] \rightarrow \mathbb{R}\) is continuous with \(f(0) = f(1) = 0\), and \(f\) is differentiable on \((0,1)\). Show that for any \(\lambda > 0\), there is an \(x \in (0,1)\) such that \(f'(x) = \lambda f(x)\).
Proof.
Let \(\lambda > 0\). We define \(g(t) = e^{-\lambda t}f(t)\), and so we also have \(g(0) = g(1) = 0\). By Rolle's theorem (the Mean Value Theorem with equal endpoints), there is an \(x \in (0,1)\) such that \(g'(x) = \frac{g(1) - g(0)}{1-0} = 0\). By the product rule, we thus have: \[g'(x) = e^{-\lambda x}f'(x) - \lambda e^{-\lambda x}f(x) = \underbrace{e^{-\lambda x}}_{>0}(f'(x) - \lambda f(x)) = 0\] \[ \implies f'(x) - \lambda f(x) = 0\] and so \(f'(x) = \lambda f(x)\). \(\square\)
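A concrete run of the argument, with \(f(t) = \sin(\pi t)\) and \(\lambda = 2\) as my choice of data: \(g(t) = e^{-\lambda t} f(t)\) vanishes at both endpoints, so \(g'\) must vanish somewhere inside, and bisection on \(g'\) locates an \(x\) with \(f'(x) = \lambda f(x)\):

```python
import math

# f(t) = sin(pi t) satisfies f(0) = f(1) = 0; the theorem guarantees
# some x in (0,1) with f'(x) = lam * f(x). Find it as a root of g'.
lam = 2.0
f  = lambda t: math.sin(math.pi * t)
fp = lambda t: math.pi * math.cos(math.pi * t)
gp = lambda t: math.exp(-lam * t) * (fp(t) - lam * f(t))  # g'(t)

a, b = 0.01, 0.99
assert gp(a) * gp(b) < 0       # sign change, so g' has a root between
for _ in range(60):            # plain bisection
    m = (a + b) / 2
    if gp(a) * gp(m) <= 0:
        b = m
    else:
        a = m
x = (a + b) / 2
assert 0 < x < 1
assert abs(fp(x) - lam * f(x)) < 1e-9   # indeed f'(x) = lam * f(x)
```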
- (“Reverse MVT”) Suppose \(f: [0,1] \rightarrow \mathbb{R}\) is continuous, and \(f\) is differentiable on \((0,1)\).
- Show that for any \(0 < c < 1\) such that \(f'(c)\) is not a max or min of \(f'\) in \((0,1)\), there are \(x_1, x_2 \in (0,1)\) such that \[ f'(c) = \frac{f(x_2) - f(x_1)}{x_2 - x_1}.\]
- Is (a) false if \(f'(c)\) is the maximum of \(f'\) in \((0,1)\)? Give an example.
Proof.
Let \(c \in (0,1)\) be such that \(f'(c)\) is not a max or min of \(f'\) in \((0,1)\). We define \(g(x) := f(x) - f'(c)\cdot x\), and thus, it suffices to find \(x_1 \neq x_2 \in (0,1)\) such that \(g(x_1) = g(x_2)\). Suppose for contradiction that for all \(x_1 \neq x_2\), we have \(g(x_1) \neq g(x_2)\), i.e., \(g\) is injective. Note also that \(g\) is continuous, as \(f\) is continuous. An injective continuous function on an interval is strictly monotone, so one of the following holds for all \(x \in (0,1)\): \(g'(x) \geq 0\), or \(g'(x) \leq 0\). Furthermore, because \(c\) was chosen so that \(f'(c)\) is not an extremum of \(f'\), there exist \(a,b \in (0,1)\) such that \(f'(a) < f'(c) < f'(b)\). But then note: \[ g'(a) = f'(a) - f'(c) < 0 \text{ and } g'(b) = f'(b) - f'(c) > 0 \] which contradicts monotonicity. \(\square\)
- Yes, and the proof in (a) breaks down when we try to find \(a,b\) as before. For an example, consider \(f(x) = -(x-0.5)^3\), \(f:[0,1] \rightarrow \mathbb{R}\). We have \(f\) continuous and differentiable (\(f' = -3(x-0.5)^2\)). But then consider \(x = 0.5\), which indeed, maximizes \(f'\) on \((0,1)\). However, we are unable to find \(x_1 \neq x_2\) such that \(f'(0.5) = 0 = \frac{f(x_2) - f(x_1)}{x_2 - x_1}\), or equivalently, \(f(x_1) = f(x_2)\), due to the fact that \(f\) is injective.
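The counterexample checks out numerically: every secant slope of \(f(x) = -(x-0.5)^3\) is strictly negative (since \(f\) is strictly decreasing, hence injective), so no chord can realize the slope \(f'(0.5) = 0\):

```python
# f(x) = -(x - 0.5)^3: f'(0.5) = 0 is the maximum of f' on (0,1),
# yet every secant slope is strictly negative, so no chord has slope 0.
f = lambda x: -(x - 0.5)**3

xs = [i / 200 for i in range(1, 200)]      # sample points in (0,1)
slopes = [(f(b) - f(a)) / (b - a)
          for i, a in enumerate(xs) for b in xs[i + 1:]]

assert all(s < 0 for s in slopes)   # f strictly decreasing: injective
```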
- Let \(F: \mathbb{R} \rightarrow \mathbb{R}\) be Lipschitz with constant \(L > 0\). Let \(f, g : [0, \infty) \rightarrow \mathbb{R}\) be continuous on \([0, \infty)\) and differentiable in \((0, \infty)\), and satisfy: \[f'(t) = F(f(t)), \quad g'(t) \leq F(g(t)) \quad \text{ for } t \in (0, \infty).\] Show that if \(g(0) \leq f(0)\), then \(g(t) \leq f(t)\) for all \(t >0\).
Proof.
Define \(h(t) := g(t) - f(t)\). Suppose for contradiction there is a \(t_1 \in (0, \infty)\) such that \(g(t_1) > f(t_1)\), so \(h(t_1)> 0\); note also that \(g(0) \leq f(0)\) gives \(h(0) \leq 0\). Since \(h\) is continuous, by IVT there is a \(t \in [0, t_1)\) such that \(h(t) = 0\); let \(t_0\) be the largest such value (it exists, since the zero set of \(h\) in \([0, t_1]\) is closed and bounded). In particular, we have for all \(t \in (t_0, t_1]\), \(h(t) > 0\) (1). By the Lipschitz condition, we have for all \(t \in (t_0, t_1)\), \[ h'(t) = g'(t) - f'(t) \leq F(g(t)) - F(f(t)) \leq L\,|g(t) - f(t)| = L\cdot h(t)\] using \(h(t) \geq 0\) on this interval. Thus we “solve” the differential inequality: \[ h'(t) - L\cdot h(t) \leq 0\] \[\implies e^{-Lt}h'(t) - Le^{-Lt}h(t) \leq 0\] \[\implies (e^{-Lt}h(t))' \leq 0 \quad \forall t \in (t_0, t_1).\] Let \(H(t) = e^{-Lt}h(t)\), so \(H\) is non-increasing on \([t_0, t_1]\), and hence \(H(t) \leq H(t_0) = e^{-Lt_0}h(t_0) = 0\) for all \(t \in [t_0, t_1]\). Thus \(h(t) \leq 0\), i.e., \(f(t) \geq g(t)\), on \([t_0, t_1]\), which contradicts (1). \(\square\)
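A concrete instance of the comparison (my choice of data, purely illustrative): take \(F(x) = x\), which is \(1\)-Lipschitz. Then \(f' = F(f)\) with \(f(0) = 1\) gives \(f(t) = e^t\), while \(g(t) = e^{t/2}\) satisfies \(g' = g/2 \leq g = F(g)\) with \(g(0) = 1 \leq f(0)\), so the theorem forces \(g \leq f\) on \((0, \infty)\):

```python
import math

# F(x) = x is 1-Lipschitz; f(t) = e^t solves f' = F(f), and
# g(t) = e^{t/2} satisfies g'(t) = g(t)/2 <= g(t) = F(g(t)),
# with g(0) = f(0) = 1. The theorem predicts g(t) <= f(t) for t > 0.
f = lambda t: math.exp(t)
g = lambda t: math.exp(t / 2)

for k in range(1, 1001):
    t = k / 100                   # sample t in (0, 10]
    assert g(t) <= f(t)           # the predicted comparison holds
```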
- Let \(f, g\) be \(n\)-times differentiable in \((0,1)\), and suppose that for some \(c \in (0,1)\), we have \(f(c) = f'(c) = \ldots = f^{(n-1)}(c) = 0\) and \(g(c) = g'(c) = \ldots = g^{(n-1)}(c) = 0\), but that \(g^{(n)}(x)\) is never \(0\) in \((0,1)\).
- Show that \(g^{(k)}(x)\) is not zero for \(x\) sufficiently close to \(c\) for \(0 \leq k \leq n-1\).
- Show that \[ \lim_{x\rightarrow c} \frac{f(x)}{g(x)} = \frac{f^{(n)}(c)}{g^{(n)}(c)},\] if \(g(x) \neq 0\) for \(x \neq c\). Indicate where (a) is used.
Proof.
Last Updated: 5/24/2025, more updates coming, I promise :)