# Lectures on Alternating Linear Forms

The next topic we will consider is integration in many variables. In developing the theory we will need a bit of linear algebra; specifically, the notion of an alternating linear form and some basic results about such forms. One of the first things we can do with this machinery is derive some basic results about determinants, which you have probably seen before but which we will develop here for completeness.

## Permutations

Let ${j}$ be a positive integer and let ${[j]=\left\{ 1,\ldots,j\right\} }$. A permutation of ${[j]}$ is a one-to-one map from ${[j]}$ onto itself. The sign of a permutation ${\sigma}$ is defined to be

$\displaystyle \mathrm{sgn}\sigma=\prod_{1\le i<i'\le j}\mathrm{sgn}\left(\sigma(i')-\sigma(i)\right)$

where ${\mathrm{sgn}(x)=1}$ if ${x>0}$ and ${=-1}$ if ${x<0}$. So ${\mathrm{sgn}\sigma=\pm1}$ for any permutation ${\sigma}$. Let ${\mathcal{P}_{j}}$ denote the set of all permutations of ${\left[j\right]}$. Given two permutations ${\sigma_{1},\sigma_{2}}$ we denote their composition ${\sigma_{2}\circ\sigma_{1}}$ by ${\sigma_{2}\sigma_{1}}$. The set ${\mathcal{P}_{j}}$ is a group under this operation. We have

Theorem 1 Let ${\sigma_{1},\sigma_{2}\in\mathcal{P}_{j}}$. Then

$\displaystyle \mathrm{sgn}\left(\sigma_{2}\sigma_{1}\right)=\mathrm{sgn}\sigma_{2}\mathrm{sgn}\sigma_{1}.$

That is, the map ${\sigma\mapsto\mathrm{sgn}\sigma}$ is a homomorphism from ${\mathcal{P}_{j}}$ onto the multiplicative group ${\left\{ -1,+1\right\} }$.

Proof: Hopefully you have seen this in your algebra course. If not, here is a proof. Note that ${\mathrm{sgn}\sigma=(-1)^{n(\sigma)}}$ where ${n(\sigma)}$ is the number of pairs ${i,i'}$ with ${i<i'}$ but ${\sigma(i)>\sigma(i')}$. If ${i<i'}$ and ${\sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')}$, there are two possibilities: ${\sigma_{1}(i)<\sigma_{1}(i')}$ or ${\sigma_{1}(i)>\sigma_{1}(i')}$. Thus

$\displaystyle \begin{array}{rcl} n(\sigma_{2}\sigma_{1}) & = & \#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)<\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')\right\} \\ & & +\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')\right\} \\ & = & \#\left\{ (i,i')\ :\ \sigma_{1}(i)<\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')\right\} \\ & & -\#\left\{ (i,i')\ :\ i>i'\ \&\ \sigma_{1}(i)<\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')\right\} \\ & & +\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\right\} \\ & & -\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)<\sigma_{2}\sigma_{1}(i')\right\} \\ & = & n(\sigma_{2})+n(\sigma_{1})-\#\left\{ (i,i')\ :\ i>i'\ \&\ \sigma_{1}(i)<\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)>\sigma_{2}\sigma_{1}(i')\right\} \\ & & -\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)<\sigma_{2}\sigma_{1}(i')\right\} \\ & = & n(\sigma_{2})+n(\sigma_{1})-\#\left\{ (i',i)\ :\ i'<i\ \&\ \sigma_{1}(i')>\sigma_{1}(i)\ \&\ \sigma_{2}\sigma_{1}(i')<\sigma_{2}\sigma_{1}(i)\right\} \\ & & -\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)<\sigma_{2}\sigma_{1}(i')\right\} \\ & = & n(\sigma_{2})+n(\sigma_{1})-2\#\left\{ (i,i')\ :\ i<i'\ \&\ \sigma_{1}(i)>\sigma_{1}(i')\ \&\ \sigma_{2}\sigma_{1}(i)<\sigma_{2}\sigma_{1}(i')\right\} . \end{array}$

Since the last term is even, we see that ${(-1)^{n(\sigma_{2}\sigma_{1})}=(-1)^{n(\sigma_{2})}(-1)^{n(\sigma_{1})}}$. $\Box$
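Both the definition of ${\mathrm{sgn}}$ and Theorem 1 are easy to check by brute force in small cases. Here is a small Python sketch (an illustration of our own, using 0-based tuples in place of permutations of ${[j]}$):

```python
from itertools import combinations, permutations
import math

def sgn(s):
    """Sign of a permutation of {0,...,j-1}, given as a tuple of images:
    the product over all pairs i < i' of sgn(s(i') - s(i))."""
    return math.prod(1 if s[b] > s[a] else -1
                     for a, b in combinations(range(len(s)), 2))

def compose(s2, s1):
    # (s2 s1)(i) = s2(s1(i))
    return tuple(s2[s1[i]] for i in range(len(s1)))

print(sgn((0, 1, 2)), sgn((1, 0, 2)))  # 1 -1

# Theorem 1: sgn is multiplicative, checked over all of P_4:
assert all(sgn(compose(s2, s1)) == sgn(s2) * sgn(s1)
           for s1 in permutations(range(4))
           for s2 in permutations(range(4)))
print("sgn is a homomorphism on P_4")
```

The exhaustive check over ${\mathcal{P}_{4}}$ is only ${24\times24}$ cases, so brute force is instant here.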

We will also need the fact that any permutation ${\sigma}$ of ${[j]}$ can be expressed as a product of at most ${j-1}$ distinct transpositions (where the empty product is understood to be the identity permutation). A transposition is a permutation that interchanges two numbers. More precisely, given ${i_{1},i_{2}\in[j]}$ with ${i_{1}\neq i_{2}}$ the transposition ${t_{i_{1},i_{2}}}$ is defined to be the permutation

$\displaystyle t_{i_{1},i_{2}}(i)=\begin{cases} i & \text{ if }i\neq i_{1},i_{2}\\ i_{2} & \text{ if }i=i_{1}\\ i_{1} & \text{ if }i=i_{2} \end{cases}.$

One way to prove that any permutation can be written as such a product is by induction on ${j}$. To begin, the only permutation of ${\left\{ 1\right\} }$ is the identity, so the claim is trivial for ${j=1}$. Next suppose the result is known to hold for permutations of ${\left\{ 1,\ldots,j-1\right\} }$ and let ${\sigma}$ be a permutation of ${\left\{ 1,\ldots,j\right\} }$. Let ${i_{1}=\sigma(j)}$ and consider the map ${\sigma'=t_{i_{1},j}\sigma}$ (if ${i_{1}=j}$ simply take ${\sigma'=\sigma}$). This permutation satisfies ${\sigma'(j)=j}$, so we can think of ${\sigma'}$ as a permutation of ${\left\{ 1,\ldots,j-1\right\} }$. Thus there are transpositions ${t_{1},\ldots,t_{m}}$ of ${\left\{ 1,\ldots,j-1\right\} }$ with ${m\le j-2}$ such that ${\sigma'=\prod_{l=1}^{m}t_{l}}$. (We extend ${t_{l}}$ to act on ${\left\{ 1,\ldots,j\right\} }$ by defining ${t_{l}(j)=j}$.) Since ${t^{2}=1}$ for any transposition, we see that ${\sigma=t_{i_{1},j}\prod_{l=1}^{m}t_{l}.}$

Since ${\mathrm{sgn}t=-1}$ for any transposition (a transposition ${t_{i_{1},i_{2}}}$ with ${i_{1}<i_{2}}$ has exactly ${2(i_{2}-i_{1})-1}$ inversions, an odd number), we see that ${\mathrm{sgn}\sigma=(-1)^{m}}$ if ${\sigma=\prod_{l=1}^{m}t_{l}}$ with ${t_{l}}$ transpositions. In particular, although it may be possible to write ${\sigma}$ as a product of transpositions in a number of different ways, the parity of the number of transpositions required is always the same: even if ${\mathrm{sgn}\sigma=1}$ and odd if ${\mathrm{sgn}\sigma=-1}$. For this reason a permutation with ${\mathrm{sgn}\sigma=-1}$ is called odd and one with ${\mathrm{sgn}\sigma=1}$ is called even.
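The inductive argument above is effectively an algorithm: repeatedly move the largest misplaced element into position. A small sketch (our own illustration, 0-based indices, reusing the pairwise-product definition of the sign):

```python
from itertools import combinations
import math

def sgn(s):
    # sign via the pairwise product in the definition
    return math.prod(1 if s[b] > s[a] else -1
                     for a, b in combinations(range(len(s)), 2))

def transpositions(sigma):
    """Decompose a permutation of {0,...,j-1} into transpositions following
    the inductive proof: move the largest element into place, then recurse
    on the rest. Returns a list of at most j-1 position swaps."""
    sigma = list(sigma)
    ts = []
    for j in range(len(sigma) - 1, 0, -1):
        i1 = sigma.index(j)          # position currently holding the value j
        if i1 != j:
            ts.append((i1, j))
            sigma[i1], sigma[j] = sigma[j], sigma[i1]
    return ts

sigma = (2, 0, 3, 1)
ts = transpositions(sigma)
print(len(ts), (-1) ** len(ts) == sgn(sigma))  # 3 True
```

The parity of the number of swaps always matches ${\mathrm{sgn}\sigma}$, illustrating the well-definedness of even/odd.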

## Alternating forms

A ${j}$-linear form on ${\mathbb{R}^{d}}$ is a function ${f:\left(\mathbb{R}^{d}\right)^{j}\rightarrow\mathbb{R}}$ such that for each ${i=1,\ldots,j}$ given fixed vectors ${\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1},\mathbf{x}_{i+1},\ldots,\mathbf{x}_{j}\in\mathbb{R}^{d}}$ we have

$\displaystyle \begin{array}{rcl} f(\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1},\mathbf{x}+\alpha\mathbf{y},\mathbf{x}_{i+1},\ldots,\mathbf{x}_{j}) & = & f(\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1},\mathbf{x},\mathbf{x}_{i+1},\ldots,\mathbf{x}_{j})\\ & & +\alpha f(\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1},\mathbf{y},\mathbf{x}_{i+1},\ldots,\mathbf{x}_{j}) \end{array}$

for all ${\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}}$ and ${\alpha\in\mathbb{R}}$. That is, ${f(\mathbf{x}_{1},\ldots,\mathbf{x}_{j})}$ is separately linear in each of the vectors ${\mathbf{x}_{1},\ldots,\mathbf{x}_{j}}$. An alternating ${j}$-form is a ${j}$-linear form with the property that ${f(\mathbf{x}_{1},\ldots,\mathbf{x}_{j})=0}$ whenever ${\mathbf{x}_{i}=\mathbf{x}_{i'}}$ for some pair ${i\neq i'}$, ${i,i'\in\left\{ 1,\ldots,j\right\} }$.

Given a permutation ${\sigma\in\mathcal{P}_{j}}$ we define

$\displaystyle \sigma f(\mathbf{x}_{1},\ldots,\mathbf{x}_{j}):=f(\mathbf{x}_{\sigma(1)},\ldots,\mathbf{x}_{\sigma(j)})$

for any ${j}$-linear form ${f}$.

Theorem 2 Let ${f}$ be an alternating ${j}$-form on ${\mathbb{R}^{d}}$ and let ${\sigma\in\mathcal{P}_{j}}$. Then

$\displaystyle \sigma f=\left(\mathrm{sgn}\sigma\right)f.$

Proof: Factorizing ${\sigma}$ into a product of transpositions, we see that it suffices to prove this for a transposition. Fix ${i_{1}\neq i_{2}}$ with ${i_{1},i_{2}\in\left\{ 1,\ldots,j\right\} }$ and without loss of generality take ${i_{1}<i_{2}}$. We must show that

$\displaystyle f+t_{i_{1},i_{2}}f=0. \ \ \ \ \ (1)$

To this end, let us fix vectors ${\mathbf{x}_{i}\in\mathbb{R}^{d}}$ for ${i\neq i_{1},\ i_{2}}$ and consider the bilinear form ${w(\mathbf{x},\mathbf{y})}$ obtained by setting ${\mathbf{x}_{i_{1}}=\mathbf{x}}$ and ${\mathbf{x}_{i_{2}}=\mathbf{y}}$ and plugging ${(\mathbf{x}_{1},\ldots,\mathbf{x}_{j})}$ into ${f}$, namely

$\displaystyle w(\mathbf{x},\mathbf{y})=f(\mathbf{x}_{1},\ldots,\mathbf{x}_{i_{1}-1},\mathbf{x},\mathbf{x}_{i_{1}+1},\ldots,\mathbf{x}_{i_{2}-1},\mathbf{y},\mathbf{x}_{i_{2}+1},\ldots,\mathbf{x}_{j}).$

It is easy to see that ${w}$ is itself an alternating form: it is bilinear, and ${w(\mathbf{x},\mathbf{x})=0}$ since then ${f}$ is evaluated with a repeated argument. So

$\displaystyle 0=w(\mathbf{x}+\mathbf{y},\mathbf{x}+\mathbf{y})=w(\mathbf{x},\mathbf{x})+w(\mathbf{x},\mathbf{y})+w(\mathbf{y},\mathbf{x})+w(\mathbf{y},\mathbf{y})=w(\mathbf{x},\mathbf{y})+w(\mathbf{y},\mathbf{x}).$

In terms of ${f}$ this is (1). $\Box$
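For a concrete instance of the sign flip, take the bilinear form ${w(\mathbf{x},\mathbf{y})=x_{1}y_{2}-x_{2}y_{1}}$ on ${\mathbb{R}^{3}}$ (an example of our own choosing, not one fixed in the text). It is alternating, and swapping its arguments reverses the sign, as Theorem 2 predicts:

```python
# w(x, y) = x1*y2 - x2*y1: an alternating bilinear form on R^3
# (vectors as coordinate tuples, 0-based indices).
def w(x, y):
    return x[0] * y[1] - x[1] * y[0]

x, y = (1.0, 2.0, 5.0), (3.0, -1.0, 4.0)
print(w(x, y), w(y, x))  # -7.0 7.0
print(w(x, x))           # 0.0
```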

The space of alternating ${j}$-forms on ${\mathbb{R}^{d}}$ is a vector space. Let ${\mathbf{v}_{1},\ldots,\mathbf{v}_{d}}$ be any basis for ${\mathbb{R}^{d}}$. By linearity in each of the factors, an alternating ${j}$-form ${f}$ is uniquely determined by the values

$\displaystyle f(\mathbf{v}_{i_{1}},\ldots,\mathbf{v}_{i_{j}}) \ \ \ \ \ (2)$

where ${(i_{1},\ldots,i_{j})}$ range over all ${j}$-tuples of distinct numbers in ${\left\{ 1,\ldots,d\right\} }$. Furthermore, since

$\displaystyle f(\mathbf{v}_{i_{1}},\ldots,\mathbf{v}_{i_{j}})=\mathrm{sgn}\sigma f(\mathbf{v}_{i_{\sigma(1)}},\ldots,\mathbf{v}_{i_{\sigma(j)}})$

for any permutation ${\sigma}$ of ${\left\{ 1,\ldots,j\right\} }$, we see that ${f}$ is uniquely determined by the values (2) where ${1\le i_{1}<i_{2}<\cdots<i_{j}\le d}$. This will allow us to show the following

Theorem 3 Let ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ denote the space of alternating ${j}$-forms on ${\mathbb{R}^{d}}$ with ${j\ge1}$. Then ${\dim\Lambda^{j}\left(\mathbb{R}^{d}\right)={d \choose j}}$ if ${j\le d}$ and ${=0}$ if ${j>d}$. In particular, for ${j>d}$ the only alternating ${j}$-form is the zero form.

Remark 4 ${{d \choose j}=\frac{d!}{j!(d-j)!}}$ is the binomial coefficient. Recall that ${{d \choose j}}$ counts the number of subsets of ${\left\{ 1,\ldots,d\right\} }$ with ${j}$ elements.

Proof: Let ${D={d \choose j}}$. Let ${\mathbf{v}_{1},\ldots,\mathbf{v}_{d}}$ be a basis of ${\mathbb{R}^{d}}$. For each subset ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$ let ${\mathbf{v}_{S}=(\mathbf{v}_{i_{1}},\ldots,\mathbf{v}_{i_{j}})}$ where ${S=\left\{ i_{1},\ldots,i_{j}\right\} }$ with the elements written in increasing order ${i_{1}<\cdots<i_{j}}$. The discussion above the theorem shows that an alternating ${j}$-form ${f}$ is uniquely specified by the values ${f(\mathbf{v}_{S})}$ where ${S}$ ranges over the ${D}$ subsets of ${\left\{ 1,\ldots,d\right\} }$ of size ${j}$. Suppose that we have ${D+1}$ alternating ${j}$-forms, ${f_{1},\ldots,f_{D+1}}$. By basic linear algebra it is possible to find numbers ${\alpha_{l}\in\mathbb{R}}$ for ${l=1,\ldots,D+1}$, not all zero, such that

$\displaystyle \sum_{l=1}^{D+1}\alpha_{l}f_{l}(\mathbf{v}_{S})=0$

for all subsets ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$. Indeed, if we list the collection of all such subsets in some order ${S_{1},\ldots,S_{D}}$ this amounts to solving the matrix equation

$\displaystyle 0=\left(\begin{array}{ccccc} f_{1}(\mathbf{v}_{S_{1}}) & f_{2}(\mathbf{v}_{S_{1}}) & \cdots & \cdots & f_{D+1}(\mathbf{v}_{S_{1}})\\ f_{1}(\mathbf{v}_{S_{2}}) & f_{2}(\mathbf{v}_{S_{2}}) & \cdots & \cdots & f_{D+1}(\mathbf{v}_{S_{2}})\\ \vdots & & \ddots & & \vdots\\ \vdots & & & \ddots & \vdots\\ f_{1}(\mathbf{v}_{S_{D}}) & f_{2}(\mathbf{v}_{S_{D}}) & \cdots & \cdots & f_{D+1}(\mathbf{v}_{S_{D}}) \end{array}\right)\left(\begin{array}{c} \alpha_{1}\\ \alpha_{2}\\ \vdots\\ \vdots\\ \alpha_{D+1} \end{array}\right).$

Since the matrix has size ${D\times(D+1)}$, its rank is at most ${D}$ and there must be a non-zero element of the kernel. It follows that ${\sum_{l=1}^{D+1}\alpha_{l}f_{l}=0}$ with not all ${\alpha_{l}}$ zero. That is, any collection of elements of ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ of size ${D+1}$ or larger is linearly dependent. Thus ${\dim\Lambda^{j}\left(\mathbb{R}^{d}\right)\le D.}$

To complete the proof it is sufficient to exhibit a collection ${f_{1},\ldots,f_{D}}$ of alternating ${j}$-forms that is linearly independent. For each set ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$ let

$\displaystyle f_{S}\left(\mathbf{v}_{S'}\right)=\begin{cases} 1 & \text{ if }S=S'\\ 0 & \text{ if }S\neq S' \end{cases}. \ \ \ \ \ (3)$

This determines an alternating ${j}$-form and the collection of such ${j}$-forms is linearly independent since the only way that ${\sum_{l}\alpha_{l}f_{S_{l}}}$ vanishes on ${\mathbf{v}_{S_{l_{0}}}}$ is if ${\alpha_{l_{0}}=0}$. $\Box$

Note that the proof gave us an explicit basis for ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ namely ${\left\{ f_{S}\ :\ S\subset\left\{ 1,\ldots,d\right\} \text{ has size }j\right\} .}$ Note that

$\displaystyle f_{S}(\mathbf{v}_{l_{1}},\ldots,\mathbf{v}_{l_{j}})=\begin{cases} 0 & \text{ if }\left\{ l_{1},\ldots,l_{j}\right\} \neq S\\ \mathrm{sgn}\sigma & \text{ if }\left\{ l_{1},\ldots,l_{j}\right\} =S \end{cases}$

where ${\sigma}$ is the unique permutation such that, if ${i_{1},\ldots,i_{j}}$ are the elements of ${S}$ written in increasing order, then

$\displaystyle l_{m}=i_{\sigma(m)},\quad m=1,\ldots,j.$

By multilinearity this serves to define ${f_{S}}$ on all ${j}$-tuples ${\left(\mathbf{x}_{1},\ldots,\mathbf{x}_{j}\right).}$
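Written out, the multilinear expansion reads ${f_{S}(\mathbf{x}_{1},\ldots,\mathbf{x}_{j})=\sum_{\sigma\in\mathcal{P}_{j}}\mathrm{sgn}\sigma\prod_{l=1}^{j}\left(\mathbf{x}_{l}\right)_{i_{\sigma(l)}}}$, where ${(\mathbf{x}_{l})_{i}}$ denotes the ${i}$-th coordinate in the chosen basis. A small Python sketch of our own (0-based indices, vectors as coordinate lists) evaluating this:

```python
from itertools import combinations, permutations
import math

def sgn(s):
    return math.prod(1 if s[b] > s[a] else -1
                     for a, b in combinations(range(len(s)), 2))

def f_S(S, xs):
    """Evaluate the basis form f_S on j vectors xs, each a list of d
    coordinates in the chosen basis, via the multilinear expansion
    f_S(x_1,...,x_j) = sum_sigma sgn(sigma) prod_l (x_l)_{i_{sigma(l)}}."""
    i = sorted(S)
    j = len(i)
    return sum(sgn(s) * math.prod(xs[l][i[s[l]]] for l in range(j))
               for s in permutations(range(j)))

d = 4
e = [[1.0 if r == c else 0.0 for c in range(d)] for r in range(d)]  # standard basis
S = {0, 2}
print(f_S(S, [e[0], e[2]]))  # f_S(v_S) = 1.0
print(f_S(S, [e[2], e[0]]))  # arguments swapped, sign flips: -1.0
print(f_S(S, [e[0], e[1]]))  # a different subset: 0.0
```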

Corollary 5 The space ${\Lambda^{d}\left(\mathbb{R}^{d}\right)}$ of alternating ${d}$-forms on ${\mathbb{R}^{d}}$ is one dimensional.

What should we take as a definition of the space of alternating ${0}$-forms? Plugging ${j=0}$ into the formula ${\dim\Lambda^{j}\left(\mathbb{R}^{d}\right)={d \choose j}}$ shows that the natural definition is

$\displaystyle \Lambda^{0}\left(\mathbb{R}^{d}\right):=\mathbb{R}.$

## Linear transformations and determinants

Let ${A\in L(\mathbb{R}^{d})}$ be a linear transformation on ${\mathbb{R}^{d}}$. There is a natural way to let ${A}$ “act” on ${j}$-forms, namely

$\displaystyle [A]_{j}f\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right)=f(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{j}).$

It is easy to see that ${[A]_{j}f}$ is alternating if ${f}$ is, so we can think of ${[A]_{j}}$ as a map from ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ to itself. This map is easily seen to be linear:

$\displaystyle \left[[A]_{j}\left(f+\alpha g\right)\right]\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right)=\left[f+\alpha g\right]\left(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{j}\right)=f\left(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{j}\right)+\alpha g\left(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{j}\right),$

etc. Thus ${[A]_{j}\in L\left(\Lambda^{j}\left(\mathbb{R}^{d}\right)\right)}$. It is also clear from the definition that

$\displaystyle \left[B\right]_{j}\left[A\right]_{j}=\left[AB\right]_{j}, \ \ \ \ \ (4)$

since ${\left(\left[B\right]_{j}\left[A\right]_{j}f\right)(\mathbf{v}_{1},\ldots,\mathbf{v}_{j})=\left(\left[A\right]_{j}f\right)(B\mathbf{v}_{1},\ldots,B\mathbf{v}_{j})=f(AB\mathbf{v}_{1},\ldots,AB\mathbf{v}_{j})}$. (Note the reversal of order: the action ${f\mapsto[A]_{j}f}$ is a pullback.)

On the space of ${0}$-forms we take ${\left[A\right]_{0}=1}$.

Think about ${\left[A\right]_{d}}$ for a moment. This is a linear transformation on ${\Lambda^{d}\left(\mathbb{R}^{d}\right)}$ which is a one-dimensional space. Thus there is a number, which we will call the determinant of ${A}$ and denote by ${\det A}$, such that

$\displaystyle f\left(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{d}\right)=\det Af\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{d}\right) \ \ \ \ \ (5)$

for any alternating ${d}$-form ${f}$. We take this as the definition of the determinant. Note that the formula

$\displaystyle \det AB=\det A\det B$

is just (4) specialized to ${j=d}$: on the one-dimensional space ${\Lambda^{d}\left(\mathbb{R}^{d}\right)}$ the maps ${[A]_{d}}$ and ${[B]_{d}}$ act as multiplication by ${\det A}$ and ${\det B}$, and these scalar factors commute.
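The product rule is easy to test numerically, e.g. for ${2\times2}$ matrices (a throwaway check of our own, using the familiar formula ${ad-bc}$, which Theorem 8 below identifies with our ${\det}$):

```python
def det2(A):
    # 2x2 determinant: a*d - b*c
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul2(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [5.0, -2.0]]
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # True
```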

Theorem 6 Let ${f\in\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ with ${1\le j\le d}$. If ${\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\in\mathbb{R}^{d}}$ is a linearly dependent collection of vectors then

$\displaystyle f\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{j}\right)=0.$

Proof: Suppose ${\sum_{i}\alpha_{i}\mathbf{v}_{i}=0}$ with ${\alpha_{i}\neq0}$ for some ${i}$. Without loss of generality, suppose that ${\alpha_{1}\neq0}$. Then

$\displaystyle 0=f\left(\sum_{i}\alpha_{i}\mathbf{v}_{i},\mathbf{v}_{2},\ldots,\mathbf{v}_{j}\right)=\sum_{i}\alpha_{i}f\left(\mathbf{v}_{i},\mathbf{v}_{2},\ldots,\mathbf{v}_{j}\right)=\alpha_{1}f(\mathbf{v}_{1},\ldots,\mathbf{v}_{j})$

since ${f(\mathbf{v}_{i},\mathbf{v}_{2},\ldots,\mathbf{v}_{j})=0}$ for ${i\ge2}$ because the vector ${\mathbf{v}_{i}}$ appears twice. Since ${\alpha_{1}\neq0}$ the result holds. $\Box$

Corollary 7 Let ${A\in L\left(\mathbb{R}^{d}\right)}$. Then ${A}$ is invertible if and only if ${\det A\neq0}$.

Proof: If ${A}$ is invertible then

$\displaystyle \det A\det A^{-1}=\det\mathrm{I}_{d}=1$

where ${\mathrm{I}_{d}}$ is the ${d\times d}$ identity matrix, which we see has determinant one by the definition (5). Thus ${\det A\neq0}$.

On the other hand, if ${A}$ is not invertible then ${A\mathbf{v}_{1},\ldots,A\mathbf{v}_{d}}$ is a linearly dependent collection for any ${d}$ vectors ${\mathbf{v}_{1},\ldots,\mathbf{v}_{d}}$. Thus

$\displaystyle f\left(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{d}\right)=0$

and we see that ${\det A=0}$ from the definition (5). $\Box$
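Both directions of the corollary can be seen in a tiny numerical example (again a throwaway sketch with the ${2\times2}$ formula ${ad-bc}$):

```python
def det2(A):
    # 2x2 determinant via the familiar formula a*d - b*c
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Invertible matrix: nonzero determinant.
print(det2([[2.0, 1.0], [1.0, 1.0]]))  # 1.0
# Second column is twice the first, so the matrix is not invertible:
print(det2([[1.0, 2.0], [3.0, 6.0]]))  # 0.0
```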

To compute a determinant, it is useful to relate ${\det A}$ to the usual expression for the determinant of the matrix of ${A}$ in a basis:

Theorem 8 Let ${A\in L\left(\mathbb{R}^{d}\right)}$ and let ${\mathbf{v}_{1},\ldots,\mathbf{v}_{d}}$ be a basis for ${\mathbb{R}^{d}}$. Let ${\left(a_{i,j}\right)_{i,j=1}^{d}}$ be the matrix of ${A}$ in this basis, i.e., ${a_{i,j}}$ are the numbers defined by

$\displaystyle A\mathbf{v}_{j}=\sum_{i=1}^{d}a_{i,j}\mathbf{v}_{i}.$

Then

$\displaystyle \det A=\sum_{\sigma\in\mathcal{P}_{d}}\mathrm{sgn}\sigma\prod_{i=1}^{d}a_{\sigma(i),i}.$

Proof: Let ${f}$ be a non-zero alternating ${d}$-form. Then ${f\left(\mathbf{v}_{1},\ldots,\mathbf{v}_{d}\right)\neq0}$ and

$\displaystyle \det A=\frac{f(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{d})}{f(\mathbf{v}_{1},\ldots,\mathbf{v}_{d})}.$

However,

$\displaystyle f(A\mathbf{v}_{1},\ldots,A\mathbf{v}_{d})=\sum_{l_{1},\ldots,l_{d}=1}^{d}\left(\prod_{i=1}^{d}a_{l_{i},i}\right)f\left(\mathbf{v}_{l_{1}},\ldots,\mathbf{v}_{l_{d}}\right).$

Here ${f(\mathbf{v}_{l_{1}},\ldots,\mathbf{v}_{l_{d}})=0}$ if any two entries are the same. Otherwise, if ${l_{1},\ldots,l_{d}}$ are ${d}$ distinct numbers, which is to say ${\left\{ l_{1},\ldots,l_{d}\right\} =\left\{ 1,\ldots,d\right\} }$, then

$\displaystyle f(\mathbf{v}_{l_{1}},\ldots,\mathbf{v}_{l_{d}})=\mathrm{sgn}\sigma f(\mathbf{v}_{1},\ldots,\mathbf{v}_{d})$

where ${\sigma}$ is the permutation

$\displaystyle \sigma(i)=l_{i}.$

The formula now follows. $\Box$
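Theorem 8's formula is directly computable. A short sketch of our own (0-based indices, matrices as nested lists), compared against a cofactor expansion done by hand:

```python
from itertools import combinations, permutations
import math

def sgn(s):
    return math.prod(1 if s[b] > s[a] else -1
                     for a, b in combinations(range(len(s)), 2))

def det(A):
    """det A = sum over sigma in P_n of sgn(sigma) * prod_i a_{sigma(i),i}
    (the formula of Theorem 8, with 0-based indices)."""
    n = len(A)
    return sum(sgn(s) * math.prod(A[s[i]][i] for i in range(n))
               for s in permutations(range(n)))

A = [[2.0, 0.0, 1.0],
     [1.0, 3.0, 0.0],
     [0.0, 1.0, 4.0]]
# Cofactor expansion along the first column by hand:
# 2*(3*4 - 0*1) - 1*(0*4 - 1*1) + 0 = 24 + 1 = 25
print(det(A))  # 25.0
```

The permutation sum has ${d!}$ terms, so this is only practical for small ${d}$; it is the definition made executable, not an efficient algorithm.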

By a similar but more computationally involved proof which we won’t spell out here, we have the following

Theorem 9 Let ${A\in L(\mathbb{R}^{d})}$, ${\mathbf{v}_{1},\ldots,\mathbf{v}_{d}}$ and ${a_{i,j}}$ be as in the previous theorem and let ${\left\{ f_{S}\ :\ S\subset\left\{ 1,\ldots,d\right\} \text{ of size }j\right\} }$ be the basis of ${\Lambda^{j}\left(\mathbb{R}^{d}\right)}$ defined in (3). Fix a set ${S\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$ and let ${i_{1},\ldots,i_{j}}$ be its elements listed in increasing order. Then

$\displaystyle \left[A\right]_{j}f_{S}=\sum_{S'}\left(\sum_{\sigma\in\mathcal{P}_{j}}\mathrm{sgn}\sigma\prod_{l=1}^{j}a_{i_{\sigma(l)},i'_{l}}\right)f_{S'}$

where ${\sum_{S'}}$ runs over all subsets ${S'\subset\left\{ 1,\ldots,d\right\} }$ of size ${j}$, and ${i'_{1},\ldots,i'_{j}}$ denote the elements of ${S'}$ listed in increasing order. In other words, the coefficient of ${f_{S'}}$ is the ${j\times j}$ minor of ${\left(a_{i,j}\right)}$ with rows drawn from ${S}$ and columns drawn from ${S'}$. (One can check this directly for ${j=1}$: ${[A]_{1}f_{\{i_{1}\}}(\mathbf{v}_{i'_{1}})=f_{\{i_{1}\}}(A\mathbf{v}_{i'_{1}})=a_{i_{1},i'_{1}}}$.)
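As a sanity check of this expansion (a hypothetical script of our own; 0-based indices, a small matrix chosen by hand), one can evaluate ${[A]_{j}f_{S}}$ at each ${\mathbf{v}_{S'}}$ directly via multilinearity and compare with the corresponding ${j\times j}$ minor:

```python
from itertools import combinations, permutations, product
import math

def sgn(s):
    return math.prod(1 if s[b] > s[a] else -1
                     for a, b in combinations(range(len(s)), 2))

def f_S_on_basis(S, ls):
    """Value of f_S on (v_{l_1},...,v_{l_j}): zero unless {l_1,...,l_j} = S,
    otherwise the sign of the permutation sorting the tuple."""
    if len(set(ls)) != len(ls) or set(ls) != set(S):
        return 0
    i = sorted(S)
    return sgn(tuple(i.index(l) for l in ls))

d, j = 4, 2
a = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 3.0, 2.0],
     [2.0, 0.0, 1.0, 0.0],
     [1.0, 1.0, 0.0, 2.0]]
S = (0, 2)  # the elements i_1 < i_2 of S, 0-based

for Sp in combinations(range(d), j):
    # ([A]_j f_S)(v_{S'}) = f_S(A v_{i'_1}, A v_{i'_2}), expanded multilinearly
    # over basis vectors using A v_{i'} = sum_r a[r][i'] v_r:
    lhs = sum(math.prod(a[ls[m]][Sp[m]] for m in range(j)) * f_S_on_basis(S, ls)
              for ls in product(range(d), repeat=j))
    # the coefficient of f_{S'}: the j x j minor of a with rows drawn
    # from S and columns drawn from S'
    rhs = sum(sgn(s) * math.prod(a[S[s[l]]][Sp[l]] for l in range(j))
              for s in permutations(range(j)))
    assert abs(lhs - rhs) < 1e-12
print("coefficients of [A]_2 f_S agree with the 2x2 minors")
```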