# Lectures on Differential Forms — MTH 429H

We now turn to the study of hypersurfaces in ${\mathbb{R}^{d}}$. A hypersurface is something like a curve or a surface in ${\mathbb{R}^{3}}$, or one of their higher-dimensional analogues. A differential form is something we can integrate over a hypersurface.

For example, a parameterized curve in ${\mathbb{R}^{d}}$ is a map ${\gamma:[0,1]\rightarrow\mathbb{R}^{d}}$; we will call this a ${C^{1}}$ curve if ${\gamma}$ is ${C^{1}}$. Given such a curve with ${\gamma\left([0,1]\right)\subset U}$ and a function ${\mathbf{F}:U\rightarrow\mathbb{R}^{d}}$, we can form the integral of “${\mathbf{F}(\mathbf{x})\cdot d\mathbf{x}}$ along ${\gamma}$:”

$\displaystyle \int_{\gamma}\mathbf{F}\left(\mathbf{x}\right)\cdot d\mathbf{x}:=\int_{0}^{1}\mathbf{F}\left(\gamma(t)\right)\cdot\gamma'(t)dt.$
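As a quick sanity check of this definition (a hypothetical example, not part of the development): for the field ${\mathbf{F}(x,y)=(-y,x)}$ integrated around the unit circle, the integrand ${\mathbf{F}\left(\gamma(t)\right)\cdot\gamma'(t)}$ is identically ${2\pi}$, and a midpoint Riemann sum reproduces this.

```python
import numpy as np

# Hypothetical example: F(x, y) = (-y, x) along gamma(t) = (cos 2*pi*t, sin 2*pi*t),
# t in [0, 1].  Here F(gamma(t)) . gamma'(t) = 2*pi for every t, so the integral is 2*pi.
n = 100_000
t = (np.arange(n) + 0.5) / n                     # midpoint rule on [0, 1]
x, y = np.cos(2*np.pi*t), np.sin(2*np.pi*t)      # gamma(t)
dx, dy = -2*np.pi*np.sin(2*np.pi*t), 2*np.pi*np.cos(2*np.pi*t)  # gamma'(t)
I = np.mean(-y*dx + x*dy)                        # int_0^1 F(gamma(t)) . gamma'(t) dt
```
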

We will call the expression

$\displaystyle \mathbf{F}\left(\mathbf{x}\right)\cdot d\mathbf{x}=\sum_{j=1}^{d}F_{j}(\mathbf{x})dx_{j}$

a differential one form. We will generalize this notion to make differential ${j}$-forms and integrate these over hypersurfaces of dimension ${j}$.

Hypersurfaces

Definition 1 A parameterized ${j}$-surface in ${\mathbb{R}^{d}}$ is a ${C^{1}}$ map ${\Phi:V\rightarrow\mathbb{R}^{d}}$ where ${V\subset\mathbb{R}^{j}}$ is open. If ${\Phi(V)\subset U}$ with ${U\subset\mathbb{R}^{d}}$ an open set, we say that ${\Phi}$ is a parameterized ${j}$-surface in ${U}$.

Remark 2 We will call ${\Phi}$ a ${j}$-surface for short. In principle we should define a ${j}$-surface as an equivalence class of parameterized ${j}$-surfaces, where ${\Phi_{1}\sim\Phi_{2}}$ if ${\Phi_{1}=\Phi_{2}\circ T}$ with ${T}$ a one-to-one ${C^{1}}$ map with ${\det T'(\mathbf{x})\neq0}$ at all points of its domain. Think of a curve, say ${\Phi_{1}(t)=(\cos t,\sin t)\in\mathbb{R}^{2}}$ where ${t\in[0,2\pi]}$, which traces out the unit circle in ${\mathbb{R}^{2}}$. As a parameterized curve this is distinct from ${\Phi_{2}(t)=\left(\cos2t,\sin2t\right)}$ defined on the domain ${[0,\pi]}$, but we see that the difference is simply one of reparameterizing the domain. We should keep this in mind, but for the sake of simplicity we will refer to parameterized ${j}$-surfaces simply as ${j}$-surfaces.
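The reparameterization invariance behind this remark can be sketched numerically. The 1-form ${-y\,dx+x\,dy}$ below is a hypothetical choice; both parameterizations of the circle give the same integral.

```python
import numpy as np

# Integrating the (hypothetical) 1-form -y dx + x dy along Phi_1(t) = (cos t, sin t)
# on [0, 2*pi] and along the reparameterized Phi_2(t) = (cos 2t, sin 2t) on [0, pi]
# gives the same answer: the two curves differ only by reparameterization.
def integrate(gamma, dgamma, a, b, n=100_000):
    t = a + (b - a) * (np.arange(n) + 0.5) / n   # midpoint rule on [a, b]
    x, y = gamma(t)
    dx, dy = dgamma(t)
    return (b - a) * np.mean(-y*dx + x*dy)

I1 = integrate(lambda t: (np.cos(t), np.sin(t)),
               lambda t: (-np.sin(t), np.cos(t)), 0, 2*np.pi)
I2 = integrate(lambda t: (np.cos(2*t), np.sin(2*t)),
               lambda t: (-2*np.sin(2*t), 2*np.cos(2*t)), 0, np.pi)
```
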

Differential forms

Definition 3 Let ${U\subset\mathbb{R}^{d}}$ be an open set and ${0\le j\le d}$ an integer. A differential ${j}$-form is a function ${\omega}$ from ${U}$ to the space ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ of alternating ${j}$-forms on ${\mathbb{R}^{d}}$. If ${\omega\in C^{\alpha}(U,\Lambda^{j}\left(\mathbb{R}^{d}\right))}$ with ${\alpha}$ a non-negative integer we say that ${\omega}$ is a ${C^{\alpha}}$ ${j}$-form or that ${\omega}$ is of class ${C^{\alpha}}$ (for ${\alpha=0}$ we also say that ${\omega}$ is a continuous ${j}$-form).

Remark 4 Since ${\Lambda^{0}\left(\mathbb{R}^{d}\right)=\mathbb{R}}$ by definition, a differential ${0}$-form on ${U}$ is just a real valued function on ${U}$.

We will develop some special notation for ${j}$-forms in a bit. For the moment we will denote the value of ${\omega}$ at a point ${\mathbf{x}\in U}$ by ${\omega\left(\mathbf{x}\right)}$. Note that for each ${\mathbf{x}}$, ${\omega\left(\mathbf{x}\right)}$ is an alternating ${j}$-form, which in particular is a function taking as arguments ${j}$ vectors in ${\mathbb{R}^{d}}$. We will denote the action of ${\omega(\mathbf{x})}$ on the vectors ${\mathbf{v}_{1},\ldots,\mathbf{v}_{j}}$ by

$\displaystyle \omega(\mathbf{x})\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right].$

Remark 5 ${\omega}$ is ${C^{\alpha}}$ if and only if for each ${\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\in\mathbb{R}^{d}}$ the map ${\mathbf{x}\mapsto\omega\left(\mathbf{x}\right)\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right]}$ is ${C^{\alpha}}$.

Lemma 6 Suppose ${\omega}$ is a ${j}$-form on ${U\subset\mathbb{R}^{d}}$ and ${\Phi:V\rightarrow\mathbb{R}^{d}}$ is a parameterized ${k}$-surface in ${U}$. Then

$\displaystyle \omega_{\Phi}\left(\mathbf{x}\right)\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right]:=\omega\left(\Phi\left(\mathbf{x}\right)\right)\left[\Phi'\left(\mathbf{x}\right)\mathbf{v}_{1},\ldots,\Phi'\left(\mathbf{x}\right)\mathbf{v}_{j}\right]$

is a ${j}$-form on ${V}$, which is continuous if ${\omega}$ is.

Proof: That ${\omega_{\Phi}}$ is a differential ${j}$-form follows easily from the fact that ${\omega}$ is. The continuity follows easily from the continuity of ${\Phi}$ and ${\Phi'}$.$\Box$

Definition 7 Let ${\omega\in C(U,\Lambda^{j}\left(\mathbb{R}^{d}\right))}$ be a continuous ${j}$-form, let ${\Phi:V\rightarrow\mathbb{R}^{d}}$ be a parameterized ${j}$-surface in ${U}$ and let ${\mathbf{s}_{1},\ldots,\mathbf{s}_{j}}$ be the standard basis of ${\mathbb{R}^{j}}$. Then the integral of ${\omega}$ over ${\Phi}$ is

$\displaystyle \int_{\Phi}\omega:=\int_{V}\omega_{\Phi}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x} \ \ \ \ \ (1)$

whenever the integral on the right hand side is defined.

Remark 8 1) For the most part, it will suffice to consider ${\int_{\Phi}\omega}$ only in case ${\omega_{\Phi}}$ has compact support in ${V}$. (Here the support of a ${j}$-form is the closure of the set on which it is not identically zero.) In that case the integral on the right hand side of (1) is well defined, since we may continuously extend ${\omega_{\Phi}}$ to all of ${\mathbb{R}^{j}}$ by taking it to be zero off of ${V}$ and set

$\displaystyle \int_{V}\omega_{\Phi}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x}=\int_{\mathbb{R}^{j}}\omega_{\Phi}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x}.$

2) Sometimes it is useful to consider a ${j}$-surface defined on a ${j}$-cell ${I}$ in ${\mathbb{R}^{j}}$. If ${\Phi}$ extends continuously to the boundary points of ${I}$ then the integral on the right hand side of (1) is well defined as an iterated integral.

3) For a ${k}$-form ${\omega}$ in ${\mathbb{R}^{k}}$ we will simply write

$\displaystyle \int_{I^{k}}\omega:=\int_{I^{k}}\omega\left(\mathbf{x}\right)\left[\mathbf{e}_{1},\ldots,\mathbf{e}_{k}\right]d\mathbf{x}$

with ${\mathbf{e}_{1},\ldots,\mathbf{e}_{k}}$ the standard basis and ${I^{k}}$ a ${k}$-cell. This amounts to identifying the ${k}$-cell with the ${k}$-surface given by the identity map.

Why is (1) a good definition? The answer comes from considering change of variables. Indeed, suppose ${\Phi_{1}}$ and ${\Phi_{2}}$ are parameterized ${j}$-surfaces in ${U}$ with ${\Phi_{1}=\Phi_{2}\circ T}$ where ${T}$ is a one-to-one ${C^{1}}$ map with ${\det T'\left(\mathbf{x}\right)\neq0}$ everywhere. Then

$\displaystyle \begin{array}{rcl} \omega_{\Phi_{1}}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right] & = & \omega\left(\Phi_{2}\left(T\left(\mathbf{x}\right)\right)\right)\left[\Phi_{2}'\left(T\left(\mathbf{x}\right)\right)T'\left(\mathbf{x}\right)\mathbf{s}_{1},\ldots,\Phi_{2}'\left(T\left(\mathbf{x}\right)\right)T'\left(\mathbf{x}\right)\mathbf{s}_{j}\right]\\ & = & \omega_{\Phi_{2}}\left(T\left(\mathbf{x}\right)\right)\left[T'\left(\mathbf{x}\right)\mathbf{s}_{1},\ldots,T'\left(\mathbf{x}\right)\mathbf{s}_{j}\right]\\ & = & \det T'\left(\mathbf{x}\right)\omega_{\Phi_{2}}\left(T\left(\mathbf{x}\right)\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]. \end{array}$

If the domain ${V_{1}}$ of ${\Phi_{1}}$ is connected then we either have ${\det T'\left(\mathbf{x}\right)>0}$ or ${\det T'\left(\mathbf{x}\right)<0}$ for each ${\mathbf{x}\in V_{1}}$. Thus, by the change of variables formula,

$\displaystyle \begin{array}{rcl} \int_{\Phi_{1}}\omega & = & \int_{V_{1}}\omega_{\Phi_{1}}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x}\\ & = & \int_{V_{1}}\det T'\left(\mathbf{x}\right)\omega_{\Phi_{2}}\left(T\left(\mathbf{x}\right)\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x}\\ & = & \pm\int_{V_{2}}\omega_{\Phi_{2}}\left(\mathbf{x}\right)\left[\mathbf{s}_{1},\ldots,\mathbf{s}_{j}\right]d\mathbf{x}=\pm\int_{\Phi_{2}}\omega, \end{array}$

with the ${\pm}$ sign corresponding to the sign of ${\det T'\left(\mathbf{x}\right)}$.

What does this ${\pm}$ sign mean? Remember that for the fundamental theorem of calculus in ${d=1}$ it was useful to define ${\int_{b}^{a}f(x)dx=-\int_{a}^{b}f(x)dx}$ for ${a<b}$, thus making the integral into an oriented integral that depends on the orientation of the interval. In the same way, a ${j}$-surface in ${\mathbb{R}^{d}}$ has two possible orientations and switching orientations results in a minus sign. You may have encountered this in vector calculus when integrating over ${2}$-surfaces in ${\mathbb{R}^{3}}$ — there are two choices of unit normal vector and the choice determines the sign of integrals like ${\int_{\Sigma}\mathbf{F}\left(\mathbf{x}\right)\cdot\hat{\mathbf{n}}\left(\mathbf{x}\right)d\sigma\left(\mathbf{x}\right)}$.
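The orientation sign can be sketched numerically with the same hypothetical 1-form ${-y\,dx+x\,dy}$ as before: reversing the parameterization of the circle (a reparameterization with ${\det T'=-1}$) flips the sign of the integral.

```python
import numpy as np

# Sketch of the orientation sign: integrating the (hypothetical) 1-form
# -y dx + x dy around the circle, then again with the reversed
# parameterization T(t) = 2*pi - t (det T' = -1), flips the sign.
n = 100_000
t = 2*np.pi * (np.arange(n) + 0.5) / n
# Forward: gamma(t) = (cos t, sin t), gamma'(t) = (-sin t, cos t).
I_fwd = 2*np.pi * np.mean(-np.sin(t)*(-np.sin(t)) + np.cos(t)*np.cos(t))
# Reversed: gamma(T(t)) = (cos s, sin s) with s = 2*pi - t; by the chain
# rule its derivative is (sin s, -cos s).
s = 2*np.pi - t
I_rev = 2*np.pi * np.mean(-np.sin(s)*np.sin(s) + np.cos(s)*(-np.cos(s)))
```
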

Wedge product and elementary forms

Definition 9 Given a differential ${j}$-form ${\omega}$ and a differential ${k}$-form ${\nu}$ we define a ${(j+k)}$-form ${\omega\wedge\nu}$ by

$\displaystyle \omega\wedge\nu\left(\mathbf{x}\right)\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j+k}\right]=\frac{1}{j!k!}\sum_{\sigma}\mathrm{sgn}\sigma\ \omega\left(\mathbf{x}\right)\left[\mathbf{v}_{\sigma(1)},\ldots,\mathbf{v}_{\sigma(j)}\right]\nu\left(\mathbf{x}\right)\left[\mathbf{v}_{\sigma(j+1)},\ldots,\mathbf{v}_{\sigma(j+k)}\right], \ \ \ \ \ (2)$

where the sum runs over all permutations of ${\left\{ 1,\ldots,j+k\right\} }$.
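Formula (2) can be prototyped directly for constant forms. This is only an illustrative sketch: representing an alternating ${j}$-form as a plain function of ${j}$ vectors, and all names below, are ad hoc choices.

```python
from itertools import permutations
from math import factorial

def sgn(p):
    # Sign of a permutation given as a tuple of 0-based indices,
    # computed from its cycle decomposition.
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j); j = p[j]; length += 1
        s *= (-1) ** (length - 1)
    return s

def wedge(om, j, nu, k):
    # Formula (2): (om ^ nu)[v_1, ..., v_{j+k}] as a sum over all
    # permutations of {1, ..., j+k}, divided by j! k!.
    def w(*v):
        total = 0
        for p in permutations(range(j + k)):
            total += sgn(p) * om(*(v[i] for i in p[:j])) * nu(*(v[i] for i in p[j:]))
        return total / (factorial(j) * factorial(k))
    return w

dx = lambda v: v[0]          # elementary 1-forms on R^2
dy = lambda v: v[1]
dxdy = wedge(dx, 1, dy, 1)   # dx ^ dy, a 2-form on R^2
```

On ${\mathbb{R}^{2}}$ this reproduces ${dx\wedge dy\left[\mathbf{v}_{1},\mathbf{v}_{2}\right]=v_{1,1}v_{2,2}-v_{1,2}v_{2,1}}$, the formula for elementary 2-forms appearing below.
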

Remark 10 Note that given a ${j}$-form ${\omega}$ and a ${j'}$-form ${\nu}$ on an open set ${U\subset\mathbb{R}^{d}}$, and a parameterized ${k}$-surface ${\Phi:V\rightarrow U}$, we have

$\displaystyle \left[\omega\wedge\nu\right]_{\Phi}=\omega_{\Phi}\wedge\nu_{\Phi}.$

Remark 11 The “wedge” product of a zero form ${f}$ and a ${j}$-form ${\omega}$ is the ${j}$-form

$\displaystyle f\wedge\omega(\mathbf{x})\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right]=f\left(\mathbf{x}\right)\omega\left(\mathbf{x}\right)\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right].$

We will typically denote this product as just ${f\omega}$.

Exercise 12 Let ${\omega}$ be a ${j}$-form, let ${\nu}$ be a ${k}$-form and let ${\alpha}$ be an ${l}$-form. Show that ${\omega\wedge\nu=(-1)^{kj}\nu\wedge\omega}$ and that ${\left(\omega\wedge\nu\right)\wedge\alpha=\omega\wedge\left(\nu\wedge\alpha\right).}$

Exercise 13 Let ${\omega_{1}}$ and ${\omega_{2}}$ be ${j}$-forms and let ${\nu}$ be a ${k}$-form. Show that ${\left(\omega_{1}+\omega_{2}\right)\wedge\nu=\omega_{1}\wedge\nu+\omega_{2}\wedge\nu.}$

To proceed it is useful to introduce a notation for forms. First we define the elementary ${1}$-forms ${dx_{1},\ldots,dx_{d}}$ on ${\mathbb{R}^{d}}$ by

$\displaystyle dx_{j}\left(\mathbf{v}\right)=v_{j},$

so ${dx_{j}\left(\mathbf{e}_{i}\right)=0}$ if ${i\neq j}$ and ${=1}$ if ${i=j}$. Then for each ${j}$ we define the elementary ${j}$-forms to be the following

$\displaystyle dx_{\alpha_{1}}\wedge dx_{\alpha_{2}}\wedge\cdots\wedge dx_{\alpha_{j}}$

where ${\alpha_{1},\alpha_{2},\cdots,\alpha_{j}\in\left\{ 1,\ldots,d\right\} }$ — this makes sense without parentheses by exercise 12. For instance

$\displaystyle dx_{\alpha}\wedge dx_{\beta}\left[\mathbf{v}_{1},\mathbf{v}_{2}\right]=v_{1,\alpha}v_{2,\beta}-v_{1,\beta}v_{2,\alpha}.$

Note that if ${\alpha_{i}=\alpha_{i'}}$ for some ${i\neq i'}$ then the resulting form is zero, and more generally that

$\displaystyle dx_{\alpha_{\sigma(1)}}\wedge\cdots\wedge dx_{\alpha_{\sigma(j)}}=\mathrm{sgn}\,\sigma\ dx_{\alpha_{1}}\wedge\cdots\wedge dx_{\alpha_{j}}$

for any permutation ${\sigma}$ of ${\left\{ 1,\ldots,j\right\} }$. Hence, up to sign, there are only ${{d \choose j}}$ elementary ${j}$-forms, given by

$\displaystyle dx_{\alpha_{1}}\wedge\cdots\wedge dx_{\alpha_{j}}$

where ${1\le\alpha_{1}<\alpha_{2}<\cdots<\alpha_{j}\le d}$.

Let us introduce a compact notation for these elementary ${j}$-forms. Given ${S\subset\left\{ 1,\ldots,d\right\} }$ we can write its elements in increasing order ${\alpha_{1}<\cdots<\alpha_{j}}$ where ${\#S=j}$. Let

$\displaystyle dx_{S}:=dx_{\alpha_{1}}\wedge\cdots\wedge dx_{\alpha_{j}}.$

Note that given ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$,

$\displaystyle dx_{S}\left(\mathbf{e}_{\alpha_{1}},\ldots,\mathbf{e}_{\alpha_{j}}\right)=\begin{cases} 0 & S\neq\left\{ \alpha_{1},\ldots,\alpha_{j}\right\} \\ \mathrm{sgn}\sigma & S=\left\{ \alpha_{1},\ldots,\alpha_{j}\right\} \end{cases}$

where ${\sigma}$ is the unique permutation of ${\left\{ 1,\ldots,j\right\} }$ that puts ${\alpha_{\sigma(1)},\ldots,\alpha_{\sigma(j)}}$ in increasing order.

Theorem 14 Let ${\omega}$ be a differential ${j}$-form on an open set ${U\subset\mathbb{R}^{d}}$. Then for each subset ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$ there is a function ${f_{S}}$ on ${U}$ such that

$\displaystyle \omega(\mathbf{x})=\sum_{S}f_{S}\left(\mathbf{x}\right)dx_{S}.$

Furthermore ${\omega}$ is ${C^{\alpha}}$ if and only if the functions ${f_{S}\in C^{\alpha}}$ for each ${S}$.

Proof: Given ${S}$ of size ${j}$ with ${S=\left\{ \alpha_{1}<\alpha_{2}<\cdots<\alpha_{j}\right\} }$ let

$\displaystyle f_{S}\left(\mathbf{x}\right)=\omega(\mathbf{x})\left[\mathbf{e}_{\alpha_{1}},\ldots,\mathbf{e}_{\alpha_{j}}\right].$

Clearly if ${\omega}$ is ${C^{\alpha}}$ then ${f_{S}}$ is also. Furthermore

$\displaystyle \begin{array}{rcl} \omega\left(\mathbf{x}\right)\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right] & = & \sum_{\beta_{1}\ldots,\beta_{j}=1}^{d}v_{1;\beta_{1}}\cdot\cdots\cdot v_{j;\beta_{j}}\omega(\mathbf{x})\left[\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right]. \end{array}$

Now for each ${\beta_{1},\ldots,\beta_{j}}$ with distinct entries we have

$\displaystyle \omega(\mathbf{x})\left[\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right]=f_{S}\left(\mathbf{x}\right)dx_{S}\left(\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right)$

where ${S=\left\{ \beta_{1},\ldots,\beta_{j}\right\} .}$ Note also that

$\displaystyle f_{S'}\left(\mathbf{x}\right)dx_{S'}\left(\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right)=0$

if ${S'\neq\left\{ \beta_{1},\ldots,\beta_{j}\right\} }$, and that every term with a repeated index among ${\beta_{1},\ldots,\beta_{j}}$ vanishes, since an alternating form is zero whenever an argument is repeated. Thus

$\displaystyle \begin{array}{rcl} \omega(\mathbf{x})\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right] & = & \sum_{\beta_{1},\ldots,\beta_{j}=1}^{d}\sum_{S}v_{1;\beta_{1}}\cdot\cdots\cdot v_{j;\beta_{j}}f_{S}(\mathbf{x})dx_{S}\left(\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right)\\ & = & \sum_{S}\sum_{\beta_{1},\ldots,\beta_{j}=1}^{d}v_{1;\beta_{1}}\cdot\cdots\cdot v_{j;\beta_{j}}f_{S}(\mathbf{x})dx_{S}\left(\mathbf{e}_{\beta_{1}},\ldots,\mathbf{e}_{\beta_{j}}\right)\\ & = & \sum_{S}f_{S}\left(\mathbf{x}\right)dx_{S}\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right) \end{array}$

as claimed.

Since ${dx_{S}\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right)}$ is constant as a function of ${\mathbf{x}}$, it is ${C^{\alpha}}$, so if ${f_{S}}$ is ${C^{\alpha}}$ for each ${S}$ then ${\omega}$ is too. $\Box$

Definition 15 Let ${\Phi:V\rightarrow\mathbb{R}^{d}}$ be a parameterized ${k}$-surface in ${\mathbb{R}^{d}}$. Let points in ${V}$ be denoted ${\mathbf{y}}$ and points in ${\Phi\left(V\right)}$ be denoted ${\mathbf{x}}$, so ${\mathbf{x}=\Phi\left(\mathbf{y}\right)}$. Given ${\alpha_{1},\ldots,\alpha_{j}\in\left\{ 1,\ldots,d\right\} }$ and ${\beta_{1},\ldots,\beta_{j}\in\left\{ 1,\ldots,k\right\} }$ define the Jacobian

$\displaystyle \frac{\partial(x_{\alpha_{1}},\ldots,x_{\alpha_{j}})}{\partial\left(y_{\beta_{1}},\ldots,y_{\beta_{j}}\right)}:=\det\left(\frac{\partial\Phi_{\alpha_{i}}\left(\mathbf{y}\right)}{\partial y_{\beta_{i'}}}\right)_{i,i'=1}^{j}.$

Given ${S=\left\{ \alpha_{1}<\ldots<\alpha_{j}\right\} \subset\left\{ 1,\ldots,d\right\} }$ and ${S'=\left\{ \beta_{1}<\ldots<\beta_{j}\right\} \subset\left\{ 1,\ldots,k\right\} }$ let ${\frac{\partial x_{S}}{\partial y_{S'}}}$ denote the corresponding Jacobian with the ${\alpha}$'s and ${\beta}$'s in increasing order.
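A minimal numerical sketch of these Jacobians, for the hypothetical 2-surface ${\Phi(y_{1},y_{2})=(y_{1},y_{2},y_{1}^{2}+y_{2}^{2})}$ in ${\mathbb{R}^{3}}$: each Jacobian is the determinant of a ${2\times2}$ minor of the ${3\times2}$ matrix ${\Phi'(\mathbf{y})}$.

```python
import numpy as np

# Phi(y1, y2) = (y1, y2, y1**2 + y2**2): a 2-surface in R^3 (hypothetical example).
# d(x_{a1}, x_{a2})/d(y1, y2) is the determinant of the 2x2 minor of Phi'(y)
# formed by rows a1, a2.
def jacobian_minor(Dphi, rows):
    # Dphi: the (d x k) derivative matrix; rows: 0-based indices (a1, ..., aj).
    return np.linalg.det(Dphi[list(rows), :])

y1, y2 = 0.5, -1.0
Dphi = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [2*y1, 2*y2]])          # rows: grad Phi_1, Phi_2, Phi_3
J12 = jacobian_minor(Dphi, (0, 1))       # d(x1, x2)/d(y1, y2) = 1
J13 = jacobian_minor(Dphi, (0, 2))       # d(x1, x3)/d(y1, y2) = 2*y2
```
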

Proposition 16 Let ${\Phi}$ be a parameterized ${k}$-surface in ${\mathbb{R}^{d}}$ and ${\omega=\sum_{S}f_{S}dx_{S}}$ a ${j}$-form defined on a neighborhood of ${\Phi\left(V\right)}$. Then

$\displaystyle \omega_{\Phi}\left(\mathbf{y}\right)=\sum_{S'}\left(\sum_{S}f_{S}\left(\Phi\left(\mathbf{y}\right)\right)\frac{\partial x_{S}}{\partial y_{S'}}\right)dy_{S'}.$

Proof: Let ${\mathbf{s}_{1},\ldots,\mathbf{s}_{k}}$ denote the standard basis of ${\mathbb{R}^{k}}$ and ${\mathbf{e}_{1},\ldots,\mathbf{e}_{d}}$ the standard basis of ${\mathbb{R}^{d}}$. Then

$\displaystyle \Phi'\left(\mathbf{y}\right)\mathbf{s}_{\beta}=\sum_{\alpha=1}^{d}\frac{\partial\Phi_{\alpha}\left(\mathbf{y}\right)}{\partial y_{\beta}}\mathbf{e}_{\alpha}.$

Thus given ${S'=\left\{ \beta_{1}<\ldots<\beta_{j}\right\} \subset\left\{ 1,\ldots,k\right\} }$,

$\displaystyle \begin{array}{rcl} \omega_{\Phi}\left(\mathbf{y}\right)\left[\mathbf{s}_{\beta_{1}},\ldots,\mathbf{s}_{\beta_{j}}\right] & = & \omega\left(\Phi\left(\mathbf{y}\right)\right)\left[\Phi'\left(\mathbf{y}\right)\mathbf{s}_{\beta_{1}},\ldots,\Phi'\left(\mathbf{y}\right)\mathbf{s}_{\beta_{j}}\right]\\ & = & \sum_{\alpha_{1},\ldots,\alpha_{j}=1}^{d}\frac{\partial\Phi_{\alpha_{1}}}{\partial y_{\beta_{1}}}\cdot\cdots\cdot\frac{\partial\Phi_{\alpha_{j}}}{\partial y_{\beta_{j}}}\omega\left(\Phi\left(\mathbf{y}\right)\right)\left[\mathbf{e}_{\alpha_{1}},\ldots,\mathbf{e}_{\alpha_{j}}\right]\\ & = & \sum_{S}f_{S}\left(\mathbf{x}\right)\sum_{\sigma}\mathrm{sgn}\,\sigma\frac{\partial\Phi_{\alpha_{\sigma(1)}}}{\partial y_{\beta_{1}}}\cdot\cdots\cdot\frac{\partial\Phi_{\alpha_{\sigma(j)}}}{\partial y_{\beta_{j}}}\\ & = & \sum_{S}f_{S}\left(\mathbf{x}\right)\frac{\partial x_{S}}{\partial y_{S'}}, \end{array}$

where in the second to last line ${S=\left\{ \alpha_{1},\ldots,\alpha_{j}\right\} .}$ Together with the previous theorem this proves the result. $\Box$

A particular case of this result is when ${k=j}$, so for ${\omega=\sum_{S}f_{S}dx_{S}}$ we get the explicit expression

$\displaystyle \int_{\Phi}\omega=\sum_{S}\int_{V}f_{S}\left(\Phi\left(\mathbf{y}\right)\right)\frac{\partial x_{S}}{\partial y_{\{1,\ldots,j\}}}d\mathbf{y}.$
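To illustrate this formula with a hypothetical example: take ${\omega=x_{3}\,dx_{1}\wedge dx_{2}}$ and the 2-surface ${\Phi(y_{1},y_{2})=(y_{1},y_{2},y_{1}^{2}+y_{2}^{2})}$ over the unit square, so that ${\partial x_{\{1,2\}}/\partial y_{\{1,2\}}=1}$ and the formula gives ${\int_{\Phi}\omega=\int_{0}^{1}\int_{0}^{1}(y_{1}^{2}+y_{2}^{2})\,dy_{1}dy_{2}=2/3}$.

```python
import numpy as np

# omega = x3 dx1 ^ dx2 pulled back to Phi(y1, y2) = (y1, y2, y1**2 + y2**2)
# over the unit square (hypothetical example).  Here S = {1, 2} is the only
# contributing subset, f_S(Phi(y)) = y1**2 + y2**2, and the Jacobian
# d(x1, x2)/d(y1, y2) is identically 1, so the integral is 2/3.
n = 1000
c = (np.arange(n) + 0.5) / n            # midpoint grid on [0, 1]
Y1, Y2 = np.meshgrid(c, c)
f = Y1**2 + Y2**2                       # f_S(Phi(y)) with S = {1, 2}
jac = np.ones_like(f)                   # d(x1, x2)/d(y1, y2) = 1
integral = np.mean(f * jac)             # midpoint rule over the unit square
```
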

Exterior derivative

Let ${U}$ be an open set in ${\mathbb{R}^{d}}$. If ${f\in C^{1}\left(U\right)}$ we define

$\displaystyle df:=\sum_{j=1}^{d}\frac{\partial f}{\partial x_{j}}dx_{j}.$

So ${df}$ is a continuous ${1}$-form on ${U}$. Similarly if ${\omega=\sum_{S}f_{S}dx_{S}}$ is a ${C^{1}}$ ${j}$-form we define

$\displaystyle d\omega:=\sum_{S}df_{S}\wedge dx_{S}=\sum_{S}\sum_{i}\frac{\partial f_{S}}{\partial x_{i}}dx_{i}\wedge dx_{S}.$

So ${d\omega}$ is a continuous ${(j+1)}$-form on ${U}$.

Example 17 Let ${\omega=\sum_{j}f_{j}dx_{j}}$ be a ${1}$-form. Then

$\displaystyle d\omega=\sum_{i=1}^{d}\sum_{j=1}^{d}\frac{\partial f_{j}}{\partial x_{i}}dx_{i}\wedge dx_{j}=\sum_{i<j}\left(\frac{\partial f_{j}}{\partial x_{i}}-\frac{\partial f_{i}}{\partial x_{j}}\right)dx_{i}\wedge dx_{j}.$

Example 18 Let ${\gamma:[0,1]\rightarrow U}$ be a ${1}$-surface in ${U}$, that is, a curve in ${U}$. Suppose ${f\in C^{1}(U)}$. Then

$\displaystyle \int_{\gamma}df=\int_{\gamma}\sum_{i=1}^{d}\frac{\partial f}{\partial x_{i}}dx_{i}=\int_{0}^{1}\sum_{i=1}^{d}\frac{\partial f}{\partial x_{i}}\left(\gamma(t)\right)\gamma'_{i}\left(t\right)dt=f\left(\gamma(1)\right)-f\left(\gamma(0)\right),$

where the last equality is the chain rule together with the fundamental theorem of calculus.
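This computation can be spot-checked numerically; the function ${f(x,y)=x^{2}y}$ and the curve ${\gamma(t)=(t,t^{2})}$ below are hypothetical choices.

```python
import numpy as np

# Check int_gamma df = f(gamma(1)) - f(gamma(0)) for the hypothetical
# f(x, y) = x**2 * y and gamma(t) = (t, t**2).
n = 200_000
t = (np.arange(n) + 0.5) / n            # midpoint rule on [0, 1]
x, y = t, t**2                          # gamma(t)
fx, fy = 2*x*y, x**2                    # partial derivatives of f
dx, dy = np.ones_like(t), 2*t           # gamma'(t)
lhs = np.mean(fx*dx + fy*dy)            # int_gamma df
rhs = 1.0**2 * 1.0 - 0.0                # f(1, 1) - f(0, 0)
```
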

Lemma 19 Let ${\omega}$ be a ${C^{2}}$ ${j}$-form on an open set ${U\subset\mathbb{R}^{d}}$. Then ${d\left(d\omega\right)=0}$.

Proof: Let ${\omega=\sum_{S}f_{S}dx_{S}}$. So

$\displaystyle \begin{array}{rcl} d\left(d\omega\right) & = & \sum_{S}\sum_{i}\sum_{j}\frac{\partial^{2}f_{S}}{\partial x_{i}\partial x_{j}}dx_{i}\wedge dx_{j}\wedge dx_{S}\\ & = & \sum_{S}\sum_{i<j}\left(\frac{\partial^{2}f_{S}}{\partial x_{i}\partial x_{j}}-\frac{\partial^{2}f_{S}}{\partial x_{j}\partial x_{i}}\right)dx_{i}\wedge dx_{j}\wedge dx_{S}=0, \end{array}$

since the terms with ${i=j}$ vanish, ${dx_{j}\wedge dx_{i}=-dx_{i}\wedge dx_{j}}$, and the mixed partial derivatives of the ${C^{2}}$ functions ${f_{S}}$ are equal.

$\Box$
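The cancellation of mixed partials in this proof can be verified symbolically in a sample case. A sketch with sympy, for a hypothetical ${C^{2}}$ ${1}$-form on ${\mathbb{R}^{3}}$ (in ${d=3}$, ${d\left(d\omega\right)}$ has a single coefficient, that of ${dx_{1}\wedge dx_{2}\wedge dx_{3}}$):

```python
import sympy as sp

# omega = f_1 dx1 + f_2 dx2 + f_3 dx3 with hypothetical smooth coefficients.
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = [x1**2 * x2, sp.sin(x1 * x3), x2 * sp.exp(x3)]
X = [x1, x2, x3]

# d omega = sum_{i<j} g_{ij} dx_i ^ dx_j with g_{ij} = df_j/dx_i - df_i/dx_j.
g = {(i, j): sp.diff(f[j], X[i]) - sp.diff(f[i], X[j])
     for i in range(3) for j in range(3) if i < j}

# Coefficient of dx1 ^ dx2 ^ dx3 in d(d omega); the signs come from
# reordering dx_k ^ dx_i ^ dx_j into dx1 ^ dx2 ^ dx3.
coeff = (sp.diff(g[(1, 2)], X[0]) - sp.diff(g[(0, 2)], X[1])
         + sp.diff(g[(0, 1)], X[2]))
```
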

Lemma 20 Let ${\omega}$ be a ${C^{1}}$ ${j}$-form and let ${\nu}$ be a ${C^{1}}$ ${k}$-form, both defined on an open set ${U\subset\mathbb{R}^{d}}$. Then

$\displaystyle d\left(\omega\wedge\nu\right)=\left(d\omega\right)\wedge\nu+\left(-1\right)^{j}\omega\wedge\left(d\nu\right).$

Exercise 21 Prove Lemma 20. (Hint: it is essentially the product rule.)

Note that the notation ${dx_{j}}$ for the elementary one forms is consistent with the exterior derivative: ${dx_{j}}$ is indeed the exterior derivative of the function ${\mathbf{x}\mapsto x_{j}}$.

Theorem 22 Let ${\omega}$ be a ${C^{1}}$ ${j}$-form on an open set ${U\subset\mathbb{R}^{d}}$ and let ${\Phi:V\rightarrow U}$ be a parameterized ${k}$-surface in ${U}$. Then

$\displaystyle \left(d\omega\right)_{\Phi}=d\left(\omega_{\Phi}\right).$

Proof: First consider a zero form, namely a function ${f\in C^{1}\left(U\right)}$. Then ${f_{\Phi}=f\circ\Phi}$ and we see that

$\displaystyle d\left(f_{\Phi}\right)\left(\mathbf{y}\right)\left[\mathbf{v}\right]=df\left(\Phi\left(\mathbf{y}\right)\right)\left[\Phi'\left(\mathbf{y}\right)\mathbf{v}\right]=\left(df\right)_{\Phi}\left(\mathbf{y}\right)\left[\mathbf{v}\right]$

by the chain rule. That is,

$\displaystyle d\left(f_{\Phi}\right)=\left(df\right)_{\Phi} \ \ \ \ \ (3)$

Specializing to a coordinate function ${x_{j}}$ we see that

$\displaystyle \left(dx_{j}\right)_{\Phi}=d\Phi_{j},$

where ${\Phi_{j}}$ denotes the ${j}$-th coordinate function of ${\Phi}$. Since ${\left[\omega\wedge\nu\right]_{\Phi}=\omega_{\Phi}\wedge\nu_{\Phi}}$ (see remark 10) we see that

$\displaystyle \left(dx_{\alpha_{1}}\wedge\cdots\wedge dx_{\alpha_{j}}\right)_{\Phi}=d\Phi_{\alpha_{1}}\wedge\cdots\wedge d\Phi_{\alpha_{j}}$

for any elementary ${j}$-form ${dx_{\alpha_{1}}\wedge\cdots\wedge dx_{\alpha_{j}}}$. In particular, it follows from the product rule (Lemma 20) that

$\displaystyle d\left(\left(dx_{S}\right)_{\Phi}\right)=0 \ \ \ \ \ (4)$

for any elementary ${j}$-form ${dx_{S}}$.

Putting equations (3) and (4) together we find for a general ${j}$-form ${\omega=\sum_{S}f_{S}dx_{S}}$ that

$\displaystyle \begin{array}{rcl} \left(d\omega\right)_{\Phi} & = & \left(\sum_{S}df_{S}\wedge dx_{S}\right)_{\Phi}\\ & = & \sum_{S}\left(df_{S}\right)_{\Phi}\wedge\left(dx_{S}\right)_{\Phi}\\ & = & \sum_{S}d\left(f_{S}\circ\Phi\right)\wedge\left(dx_{S}\right)_{\Phi}\\ & = & d\sum_{S}\left(f_{S}\circ\Phi\right)\left(dx_{S}\right)_{\Phi}\\ & = & d\left(\omega_{\Phi}\right). \end{array}$

$\Box$
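As a closing sanity check (a hypothetical example, not part of the proof), the identity ${\left(d\omega\right)_{\Phi}=d\left(\omega_{\Phi}\right)}$ can be verified symbolically for ${\omega=f\,dx_{1}}$ on ${\mathbb{R}^{3}}$ and a sample 2-surface, by comparing the ${dy_{1}\wedge dy_{2}}$ coefficients of both sides:

```python
import sympy as sp

# omega = f dx1 on R^3 and the 2-surface Phi(y1, y2) = (y1*y2, y1 + y2, y1**2);
# both f and Phi are hypothetical choices.
y1, y2 = sp.symbols('y1 y2')
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1 * x2 + sp.cos(x3)
Phi = [y1*y2, y1 + y2, y1**2]
sub = dict(zip([x1, x2, x3], Phi))

# d(omega_Phi): omega_Phi = (f o Phi) dPhi_1, so the dy1 ^ dy2 coefficient is
# d/dy1[(f o Phi) dPhi_1/dy2] - d/dy2[(f o Phi) dPhi_1/dy1].
fP = f.subs(sub)
lhs = (sp.diff(fP * sp.diff(Phi[0], y2), y1)
       - sp.diff(fP * sp.diff(Phi[0], y1), y2))

# (d omega)_Phi: d omega = sum_i (df/dx_i) dx_i ^ dx_1, and pulling back
# replaces dx_a ^ dx_b by the Jacobian d(x_a, x_b)/d(y1, y2) times dy1 ^ dy2.
def jac(a, b):
    return (sp.diff(Phi[a], y1)*sp.diff(Phi[b], y2)
            - sp.diff(Phi[a], y2)*sp.diff(Phi[b], y1))

rhs = sum(sp.diff(f, X).subs(sub) * jac(i, 0)
          for i, X in enumerate([x1, x2, x3]))
```
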