The next topic we will consider is integration in many variables. In developing the theory we will need a bit of linear algebra. Specifically, we will need the notion of, and basic results about, *alternating linear forms*. One of the first things we can do with this is obtain some basic results about *determinants*, which you have probably seen before but which we will develop here for completeness.

** Permutations **

Let $n$ be a positive integer and let $[n] = \{1, 2, \ldots, n\}$. A *permutation* of $[n]$ is a one-to-one map from $[n]$ onto itself. The *sign* of a permutation $\sigma$ is defined to be

$$\operatorname{sgn}(\sigma) = \prod_{1 \le i < j \le n} \operatorname{sign}\big(\sigma(j) - \sigma(i)\big),$$

where $\operatorname{sign}(m) = 1$ if $m > 0$ and $\operatorname{sign}(m) = -1$ if $m < 0$. So $\operatorname{sgn}(\sigma) \in \{1, -1\}$ for any permutation $\sigma$. Let $S_n$ denote the set of all permutations of $[n]$. Given two permutations $\sigma, \tau$ we denote their composition by $\sigma\tau = \sigma \circ \tau$. The set $S_n$ is a *group* under this operation. We have

**Theorem 1** Let $\sigma, \tau \in S_n$. Then

$$\operatorname{sgn}(\sigma\tau) = \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau).$$

That is, the map $\sigma \mapsto \operatorname{sgn}(\sigma)$ is a homomorphism from $S_n$ onto the multiplicative group $\{1, -1\}$.

*Proof:* Hopefully you have seen this in your algebra course. If not, here is a proof. Note that $\operatorname{sgn}(\sigma) = (-1)^{N(\sigma)}$ where $N(\sigma)$ is the number of pairs $i, j$ with $i < j$ but $\sigma(i) > \sigma(j)$. If $i < j$ and $\sigma\tau(i) > \sigma\tau(j)$, there are two possibilities: $\tau(i) < \tau(j)$ or $\tau(i) > \tau(j)$. Counting the pairs of each type, and noting that $\{\tau(i), \tau(j)\}$ ranges over all pairs as $\{i, j\}$ does, we find that

$$N(\sigma\tau) = N(\sigma) + N(\tau) - 2M,$$

where $M$ is the number of pairs $i < j$ with $\tau(i) > \tau(j)$ but $\sigma\tau(i) < \sigma\tau(j)$.

Since the last term is even, we see that $\operatorname{sgn}(\sigma\tau) = (-1)^{N(\sigma) + N(\tau)} = \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)$.
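The homomorphism property can be checked by brute force for small $n$. Here is a quick sketch in Python (the helper names `inversions`, `sgn`, and `compose` are mine, not the lecture's; permutations of $\{0, \ldots, n-1\}$ are stored as tuples):

```python
from itertools import permutations

def inversions(p):
    """Number of pairs i < j with p[i] > p[j], i.e. N(p) from the proof."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def sgn(p):
    """Sign of the permutation p: (-1) to the number of inversions."""
    return (-1) ** inversions(p)

def compose(s, t):
    """Composition s o t: first apply t, then s."""
    return tuple(s[t[i]] for i in range(len(t)))

# Theorem 1, checked exhaustively over S_4:
assert all(sgn(compose(s, t)) == sgn(s) * sgn(t)
           for s in permutations(range(4)) for t in permutations(range(4)))
```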

We will also need the fact that any permutation of $[n]$ can be expressed as a product of at most $n - 1$ distinct *transpositions* (where the empty product is understood to be the identity permutation). A *transposition* is a permutation that interchanges two numbers. More precisely, given $k, l \in [n]$ with $k \neq l$, the transposition $\tau_{k,l}$ is defined to be the permutation

$$\tau_{k,l}(m) = \begin{cases} l & \text{if } m = k, \\ k & \text{if } m = l, \\ m & \text{otherwise.} \end{cases}$$

One way to prove that any permutation can be written as a product of transpositions is by induction on $n$. To begin, the only permutation of $[1]$ is the identity, so the claim is trivial for $n = 1$. Second, suppose the result is known to hold for permutations of $[n-1]$ and let $\sigma$ be a permutation of $[n]$. Let $k = \sigma(n)$ and consider the map $\sigma' = \tau_{k,n}\sigma$ (with $\sigma' = \sigma$ if $k = n$). This permutation satisfies $\sigma'(n) = n$, so we can think of $\sigma'$ as a permutation of $[n-1]$. Thus there are transpositions $\tau_1, \ldots, \tau_m$ of $[n-1]$ with $m \leq n - 2$ such that $\sigma' = \tau_1 \cdots \tau_m$. (We extend each $\tau_i$ to act on $[n]$ by defining $\tau_i(n) = n$.) Since $\tau^{-1} = \tau$ for any transposition, we see that

$$\sigma = \tau_{k,n}\sigma' = \tau_{k,n}\tau_1 \cdots \tau_m.$$

Since $\operatorname{sgn}(\tau) = -1$ for any transposition $\tau$, we see that $\operatorname{sgn}(\sigma) = (-1)^m$ if $\sigma = \tau_1 \cdots \tau_m$ with $\tau_1, \ldots, \tau_m$ transpositions. In particular, although it may be possible to write $\sigma$ as a product of transpositions in a number of ways, the *parity* of the number of transpositions required is always the same, being even if $\operatorname{sgn}(\sigma) = 1$ and odd if $\operatorname{sgn}(\sigma) = -1$. For this reason a permutation with $\operatorname{sgn}(\sigma) = -1$ is called odd and one with $\operatorname{sgn}(\sigma) = 1$ is called even.
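The inductive construction above translates directly into a short algorithm: repeatedly move the largest misplaced value home with one transposition. A sketch in Python (function names are illustrative; values run over $\{0, \ldots, n-1\}$):

```python
from itertools import permutations

def inversions(p):
    """Number of pairs i < j with p[i] > p[j]."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def transposition_factors(p):
    """Transpositions whose product is p, built as in the inductive proof:
    fix the largest misplaced value, then recurse on the rest."""
    p = list(p)
    factors = []
    for n in range(len(p) - 1, 0, -1):
        k = p.index(n)               # position currently holding the value n
        if k != n:
            p[k], p[n] = p[n], p[k]  # apply the transposition tau_{k,n}
            factors.append((k, n))
    return factors

# At most n - 1 factors, and the parity of the count gives the sign.
for p in permutations(range(4)):
    m = len(transposition_factors(p))
    assert m <= len(p) - 1
    assert (-1) ** m == (-1) ** inversions(p)
```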

** Alternating forms **

A *$j$-linear form* on a vector space $V$ is a function $\omega \colon V^j \rightarrow \mathbb{R}$ such that for each $i$ and given fixed vectors $v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_j$ we have

$$\omega(v_1, \ldots, a u + b w, \ldots, v_j) = a\, \omega(v_1, \ldots, u, \ldots, v_j) + b\, \omega(v_1, \ldots, w, \ldots, v_j)$$

for all $u, w \in V$ and $a, b \in \mathbb{R}$, with the displayed arguments sitting in the $i$-th slot. That is, $\omega$ is separately linear in each of the vectors $v_1, \ldots, v_j$. An *alternating $j$-form* is a $j$-linear form with the property that $\omega(v_1, \ldots, v_j) = 0$ if $v_k = v_l$ for any pair $k \neq l$.
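A concrete example (my choice, not from the lecture): on $\mathbb{R}^2$ the 2-form $w(u, v) = u_1 v_2 - u_2 v_1$ is separately linear in each slot and vanishes on repeated arguments, so it is alternating. A quick numeric check:

```python
def w(u, v):
    """The alternating bilinear form w(u, v) = u_1 v_2 - u_2 v_1 on R^2."""
    return u[0] * v[1] - u[1] * v[0]

u, v, x = (1.0, 2.0), (3.0, 5.0), (-2.0, 4.0)
a, b = 2.0, -3.0

# separately linear in the first slot:
lhs = w((a * u[0] + b * x[0], a * u[1] + b * x[1]), v)
assert lhs == a * w(u, v) + b * w(x, v)

# alternating: a repeated argument gives 0, and swapping arguments flips the sign
assert w(u, u) == 0.0
assert w(u, v) == -w(v, u)
```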

Given a permutation $\sigma \in S_j$ we define

$$\sigma\omega(v_1, \ldots, v_j) = \omega(v_{\sigma(1)}, \ldots, v_{\sigma(j)})$$

for any $j$-linear form $\omega$.

**Theorem 2** Let $\omega$ be an alternating $j$-form on $V$ and let $\sigma \in S_j$. Then

$$\sigma\omega = \operatorname{sgn}(\sigma)\, \omega.$$

*Proof:* Factorizing $\sigma$ into a product of transpositions and using Theorem 1, we see that it suffices to prove this when $\sigma$ is a transposition. Fix $k, l$ with $k \neq l$ and wlog take $k < l$. We must show that

$$\omega(v_1, \ldots, v_l, \ldots, v_k, \ldots, v_j) = -\omega(v_1, \ldots, v_k, \ldots, v_l, \ldots, v_j). \qquad (1)$$

To this end, let us fix vectors $v_i$ for $i \neq k, l$ and consider the bi-linear form obtained by setting $v_k = u$ and $v_l = w$ and plugging into $\omega$, namely

$$\psi(u, w) = \omega(v_1, \ldots, u, \ldots, w, \ldots, v_j).$$

It is easy to see that $\psi$ is itself an alternating form, so

$$0 = \psi(u + w, u + w) = \psi(u, u) + \psi(u, w) + \psi(w, u) + \psi(w, w) = \psi(u, w) + \psi(w, u).$$

In terms of $\omega$ this is (1).
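Theorem 2 can be sanity-checked numerically. Here is a check for the alternating $3$-form given by the $3 \times 3$ determinant on $\mathbb{R}^3$ (my choice of example, using numpy):

```python
import numpy as np
from itertools import permutations

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

rng = np.random.default_rng(0)
v = [rng.standard_normal(3) for _ in range(3)]   # three vectors in R^3

def w(vecs):
    """The alternating 3-form w(v1, v2, v3) = det[v1 v2 v3]."""
    return np.linalg.det(np.column_stack(vecs))

# sigma.w = sgn(sigma) w for every permutation sigma of the arguments
for p in permutations(range(3)):
    assert np.isclose(w([v[p[i]] for i in range(3)]), (-1) ** inversions(p) * w(v))
```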

The space of alternating $j$-forms on $V$ is a vector space. Let $v_1, \ldots, v_n$ be any basis for $V$. By linearity in each of the factors, an alternating $j$-form $\omega$ is uniquely determined by the values

$$\omega(v_{i_1}, \ldots, v_{i_j}) \qquad (2)$$

where $i_1, \ldots, i_j$ range over all $j$-tuples of distinct numbers in $[n]$. Furthermore, since

$$\omega(v_{i_{\sigma(1)}}, \ldots, v_{i_{\sigma(j)}}) = \operatorname{sgn}(\sigma)\, \omega(v_{i_1}, \ldots, v_{i_j})$$

for any permutation $\sigma$ of $[j]$, we see that $\omega$ is uniquely determined by the values (2) where $i_1 < i_2 < \cdots < i_j$. This will allow us to show the following

**Theorem 3** Let $\Lambda^j(V)$ denote the space of alternating $j$-forms on $V$ with $\dim V = n$. Then $\dim \Lambda^j(V) = \binom{n}{j}$ if $j \leq n$ and $\dim \Lambda^j(V) = 0$ if $j > n$. In particular, for $j > n$ the only alternating $j$-form is the zero form.

**Remark 4** $\binom{n}{j} = \frac{n!}{j!(n-j)!}$ is the binomial coefficient. Recall that $\binom{n}{j}$ counts the number of subsets of $[n]$ with $j$ elements.

*Proof:* Let $j \leq n$. Let $v_1, \ldots, v_n$ be a basis of $V$. For each subset $S \subset [n]$ of size $j$ let $v_S = (v_{i_1}, \ldots, v_{i_j})$ where $S = \{i_1, \ldots, i_j\}$ with the elements written in increasing order $i_1 < \cdots < i_j$. The discussion above the theorem shows that an alternating $j$-form $\omega$ is uniquely specified by the values $\omega(v_S)$ where $S$ ranges over the subsets of $[n]$ of size $j$. Suppose that we have $N = \binom{n}{j} + 1$ alternating $j$-forms $\omega_1, \ldots, \omega_N$. By basic linear algebra it is possible to find numbers $c_k$, $k = 1, \ldots, N$, not all zero, such that

$$\sum_{k=1}^N c_k\, \omega_k(v_S) = 0$$

for all subsets $S$ of size $j$. Indeed, if we list the collection of all such subsets in some order $S_1, \ldots, S_{\binom{n}{j}}$, this amounts to solving the matrix equation

$$\begin{pmatrix} \omega_1(v_{S_1}) & \cdots & \omega_N(v_{S_1}) \\ \vdots & & \vdots \\ \omega_1(v_{S_{\binom{n}{j}}}) & \cdots & \omega_N(v_{S_{\binom{n}{j}}}) \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_N \end{pmatrix} = 0.$$

Since the matrix is of size $\binom{n}{j} \times N$ its maximal rank is $\binom{n}{j}$, and there must be a non-zero element of the kernel. It follows that $\sum_{k=1}^N c_k \omega_k = 0$. That is, any collection of elements of $\Lambda^j(V)$ of size $\binom{n}{j} + 1$ or larger is linearly dependent. Thus

$$\dim \Lambda^j(V) \leq \binom{n}{j}.$$

To complete the proof it is sufficient to exhibit a collection of $\binom{n}{j}$ alternating $j$-forms that is linearly independent. For each set $S \subset [n]$ of size $j$ let

$$\omega_S(v_T) = \begin{cases} 1 & \text{if } T = S, \\ 0 & \text{if } T \neq S, \end{cases}$$

for subsets $T \subset [n]$ of size $j$. This determines an alternating $j$-form, and the collection of such $j$-forms is linearly independent since the only way that $\sum_S c_S \omega_S$ vanishes on each $v_T$ is if $c_T = 0$ for every $T$.

Note that the proof gave us an explicit basis for $\Lambda^j(V)$, namely $\{\omega_S : S \subset [n],\ \#S = j\}$. Note that

$$\omega_S(v_{i_1}, \ldots, v_{i_j}) = \begin{cases} \operatorname{sgn}(\sigma) & \text{if } \{i_1, \ldots, i_j\} = S, \\ 0 & \text{otherwise,} \end{cases} \qquad (3)$$

where $\sigma$ is the unique permutation such that if $k_1 < \cdots < k_j$ are the elements of $S$ written in increasing order then

$$i_m = k_{\sigma(m)}, \quad m = 1, \ldots, j.$$

By multilinearity this serves to define $\omega_S$ on all $j$-tuples of vectors.
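In coordinates the basis forms have a concrete realization (this description is mine, not spelled out above): taking $V = \mathbb{R}^n$ with the standard basis, $\omega_S(u_1, \ldots, u_j)$ is the $j \times j$ minor of the coordinate matrix of the $u$'s built from the rows indexed by $S$. A quick numpy check that $\omega_S(v_T) = 1$ if $T = S$ and $0$ otherwise:

```python
import numpy as np
from itertools import combinations

n, j = 4, 2
basis = np.eye(n)   # v_i = standard basis vectors, so coordinates are trivial

def omega(S, vectors):
    """The alternating j-form omega_S: the minor using the rows indexed by S."""
    M = np.column_stack(vectors)        # columns are the u's, rows their coordinates
    return np.linalg.det(M[list(S), :])

for S in combinations(range(n), j):
    for T in combinations(range(n), j):
        val = omega(S, [basis[i] for i in T])
        assert np.isclose(val, 1.0 if S == T else 0.0)
```

Evaluating `omega` on basis vectors in a scrambled order also reproduces the sign rule (3), since permuting columns of a matrix multiplies its determinant by the sign of the permutation.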

**Corollary 5** The space $\Lambda^n(V)$ of alternating $n$-forms on an $n$-dimensional vector space $V$ is one dimensional.

What should we take as a definition of the space of alternating $0$-forms? Plugging $j = 0$ into the formula $\dim \Lambda^j(V) = \binom{n}{j}$, and noting $\binom{n}{0} = 1$, shows that the natural definition is

$$\Lambda^0(V) = \mathbb{R}.$$

** Linear transformations and determinants **

Let $A$ be a linear transformation on $V$. There is a natural way to let $A$ “act” on $j$-forms, namely

$$A^*\omega(v_1, \ldots, v_j) = \omega(A v_1, \ldots, A v_j).$$

It is easy to see that $A^*\omega$ is alternating if $\omega$ is, so we can think of $A^*$ as a map from $\Lambda^j(V)$ to itself. This map is easily seen to be linear:

$$A^*(\omega + \eta) = A^*\omega + A^*\eta, \qquad A^*(c\,\omega) = c\, A^*\omega,$$

etc. Thus $A^* \in L(\Lambda^j(V))$. It is also clear from the definition that

$$(AB)^* = B^* A^*. \qquad (4)$$

On the space of $0$-forms we take $A^* = I$, the identity.

Think for a moment about $A^*$ acting on $\Lambda^n(V)$, where $n = \dim V$. This is a linear transformation on $\Lambda^n(V)$, which is a one-dimensional space. Thus there is *a number, which we will call the determinant of $A$ and denote by $\det A$, such that*

$$A^*\omega = \det A\, \omega \qquad (5)$$

*for any alternating $n$-form $\omega$.* We take this as the *definition* of the determinant. Note that the formula

$$\det(AB) = \det A \, \det B$$

is just (4) specialized to $\Lambda^n(V)$.
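The multiplicativity of the determinant is easy to observe numerically (a floating-point check with numpy, up to rounding tolerance):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B), the Lambda^n specialization of (AB)* = B* A*
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```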

**Theorem 6** Let $\omega \in \Lambda^n(V)$ with $n = \dim V$. If $u_1, \ldots, u_n$ is a linearly dependent collection of vectors then

$$\omega(u_1, \ldots, u_n) = 0.$$

*Proof:* Suppose $u_k = \sum_{i \neq k} c_i u_i$ for some $k$ and scalars $c_i$. Without loss of generality, suppose that $k = n$. Then

$$\omega(u_1, \ldots, u_n) = \sum_{i=1}^{n-1} c_i\, \omega(u_1, \ldots, u_{n-1}, u_i) = 0,$$

since $\omega(u_1, \ldots, u_{n-1}, u_i) = 0$ for $i = 1, \ldots, n-1$ because the vector $u_i$ appears twice. Thus the result holds.

**Corollary 7** Let $A \in L(V)$. Then $A$ is invertible if and only if $\det A \neq 0$.

*Proof:* If $A$ is invertible then

$$\det A \, \det A^{-1} = \det(A A^{-1}) = \det I = 1,$$

where $I$ is the identity map, which we see has determinant one by the definition (5). Thus $\det A \neq 0$.

On the other hand, if $A$ is not invertible then $A u_1, \ldots, A u_n$ is linearly dependent for any set of $n$ vectors $u_1, \ldots, u_n$. Thus

$$A^*\omega(u_1, \ldots, u_n) = \omega(A u_1, \ldots, A u_n) = 0$$

and we see that $\det A = 0$ from the definition (5).

To compute a determinant, it is useful to relate the definition to the usual expression for the determinant of the matrix of $A$ in a basis:

**Theorem 8** Let $A \in L(V)$ and let $v_1, \ldots, v_n$ be a basis for $V$. Let $(a_{i,j})$ be the matrix of $A$ in this basis, i.e., $a_{i,j}$ are the numbers defined by

$$A v_j = \sum_{i=1}^n a_{i,j} v_i.$$

Then

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{j=1}^n a_{\sigma(j), j}.$$

*Proof:* Let $\omega$ be a non-zero alternating $n$-form. Then $\omega(v_1, \ldots, v_n) \neq 0$ (since $\omega$ is determined by this single value, by the discussion preceding Theorem 3) and

$$\det A \, \omega(v_1, \ldots, v_n) = A^*\omega(v_1, \ldots, v_n) = \omega(A v_1, \ldots, A v_n).$$

However,

$$\omega(A v_1, \ldots, A v_n) = \sum_{i_1, \ldots, i_n = 1}^n a_{i_1, 1} \cdots a_{i_n, n}\, \omega(v_{i_1}, \ldots, v_{i_n}).$$

Here $\omega(v_{i_1}, \ldots, v_{i_n}) = 0$ if any two entries are the same. Otherwise, if $i_1, \ldots, i_n$ are $n$ distinct numbers, which is to say $\{i_1, \ldots, i_n\} = [n]$, then

$$\omega(v_{i_1}, \ldots, v_{i_n}) = \operatorname{sgn}(\sigma)\, \omega(v_1, \ldots, v_n),$$

where $\sigma$ is the permutation

$$\sigma(m) = i_m, \quad m = 1, \ldots, n.$$

The formula now follows.
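The sum-over-permutations formula of Theorem 8 can be implemented directly and compared against numpy's determinant (fine for small $n$; the sum has $n!$ terms, so this is a sanity check, not a practical algorithm):

```python
import numpy as np
from itertools import permutations

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def leibniz_det(a):
    """det(a) via the signed sum over permutations, as in Theorem 8."""
    n = a.shape[0]
    return sum((-1) ** inversions(p) * np.prod([a[p[j], j] for j in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4))
assert np.isclose(leibniz_det(a), np.linalg.det(a))
```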

By a similar but more computationally involved proof, which we won’t spell out here, we have the following

**Theorem 9** Let $A$, $V$ and $v_1, \ldots, v_n$ be as in the previous theorem, and let $\{\omega_S\}$ be the basis of $\Lambda^j(V)$ defined in (3). Fix a set $S \subset [n]$ of size $j$ and let $k_1 < \cdots < k_j$ be its elements listed in increasing order. Then

$$A^*\omega_S = \sum_T \left( \sum_{\sigma \in S_j} \operatorname{sgn}(\sigma) \prod_{m=1}^j a_{k_{\sigma(m)}, l_m} \right) \omega_T,$$

where $T$ runs over all subsets of $[n]$ of size $j$, and $l_1 < \cdots < l_j$ denote the elements of $T$ listed in increasing order.
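Each inner sum in Theorem 9 is exactly the $j \times j$ minor of the matrix $(a_{i,j})$ with rows taken from $S$ and columns from $T$. A numerical check of this identity (the setup and function names are mine):

```python
import numpy as np
from itertools import combinations, permutations

rng = np.random.default_rng(3)
n, j = 4, 2
A = rng.standard_normal((n, n))   # the matrix (a_{i,j}) of A in the chosen basis

def inversions(p):
    return sum(1 for x in range(len(p)) for y in range(x + 1, len(p)) if p[x] > p[y])

def coeff(S, T):
    """The coefficient of omega_T in A* omega_S from Theorem 9."""
    k, l = list(S), list(T)
    return sum((-1) ** inversions(p)
               * np.prod([A[k[p[m]], l[m]] for m in range(j)])
               for p in permutations(range(j)))

# Each coefficient equals the j x j minor of A with rows S and columns T.
for S in combinations(range(n), j):
    for T in combinations(range(n), j):
        assert np.isclose(coeff(S, T), np.linalg.det(A[np.ix_(list(S), list(T))]))
```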