Lectures Week 2 — MTH 496

In this lecture we introduce the notion of a state in classical mechanics and the evolution equation of states, namely Liouville’s equation. Throughout we will denote phase space by ${\mathcal{M}=\mathbb{R}^{n}\times\mathbb{R}^{n}}$ and will use the letter ${\mu}$ to denote a point ${\left(\mathbf{Q},\mathbf{P}\right)}$ in phase space. To begin we introduce a compact form of Hamilton’s equations.

A compact form of Hamilton’s equations.

We may write Hamilton’s equations in the following compact form:

$\displaystyle \frac{d\mu_{t}}{dt}=\mathcal{J}\nabla H\left(\mu_{t}\right) \ \ \ \ \ (1)$

where

1. ${\mu_{t}\in\mathcal{M}=\mathbb{R}^{n}\times\mathbb{R}^{n}}$ is the point in phase space at time ${t}$;
2. ${H}$ is the Hamiltonian,

$\displaystyle \nabla H=\begin{pmatrix}\frac{\partial H}{\partial\mu_{1}}\\ \frac{\partial H}{\partial\mu_{2}}\\ \vdots\\ \frac{\partial H}{\partial\mu_{2n}} \end{pmatrix}$

3. ${\mathcal{J}}$ is the matrix

$\displaystyle \mathcal{J}=\begin{pmatrix}0_{n\times n} & I_{n\times n}\\ -I_{n\times n} & 0_{n\times n} \end{pmatrix}.$

Associated to the matrix ${\mathcal{J}}$ is the symplectic form

$\displaystyle \eta(\mathbf{h},\mathbf{k})=\mathbf{h}\cdot\mathcal{J}\mathbf{k},\quad\mathbf{h},\mathbf{k}\in\mathcal{M}.$
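The compact form (1) is convenient for numerical work. As a quick sketch (not part of the lecture), the following integrates ${\frac{d\mu_{t}}{dt}=\mathcal{J}\nabla H\left(\mu_{t}\right)}$ for the illustrative choice ${n=1}$ and ${H=\frac{1}{2}\left(q^{2}+p^{2}\right)}$, the harmonic oscillator; conservation of ${H}$ serves as a sanity check.

```python
import numpy as np

# Phase space M = R^n x R^n with n = 1; mu = (q, p).
# Illustrative Hamiltonian (an assumption, not from the lecture):
# H(mu) = (q^2 + p^2) / 2, so grad H(mu) = mu.
n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

def rhs(mu):
    # Compact form of Hamilton's equations: d(mu)/dt = J grad H(mu).
    return J @ mu

def rk4_step(mu, dt):
    # One classical Runge-Kutta step for the flow.
    k1 = rhs(mu)
    k2 = rhs(mu + 0.5 * dt * k1)
    k3 = rhs(mu + 0.5 * dt * k2)
    k4 = rhs(mu + dt * k3)
    return mu + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

mu = np.array([1.0, 0.0])   # start at q = 1, p = 0
dt, steps = 0.01, 628       # integrate for roughly one period 2*pi
H0 = 0.5 * mu @ mu
for _ in range(steps):
    mu = rk4_step(mu, dt)
H1 = 0.5 * mu @ mu
print(abs(H1 - H0))         # energy drift: integrator error only
```

Since the exact flow here is a rotation of phase space with period ${2\pi}$, the trajectory returns (approximately) to its starting point after the loop.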

In the previous lecture we derived the evolution equation

$\displaystyle \frac{\partial U_{t}f}{\partial t}=\left\{ H,U_{t}f\right\} \ \ \ \ \ (2)$

where ${H}$ is the Hamiltonian and ${\left\{ f,g\right\} }$ is the Poisson bracket. In terms of the symplectic form ${\eta}$ we have

$\displaystyle \left\{ f,g\right\} =-\eta(\nabla f,\nabla g).$

The Poisson bracket has several properties:

1. Linearity: ${\left\{ f,ag+bh\right\} =a\left\{ f,g\right\} +b\left\{ f,h\right\} }$ for ${f,g,h\in\mathcal{A}}$ and ${a,b\in\mathbb{R}}$.
2. Skew symmetry: ${\left\{ f,g\right\} =-\left\{ g,f\right\} }$.
3. Leibniz rule: ${\left\{ f,gh\right\} =g\left\{ f,h\right\} +\left\{ f,g\right\} h}$.
4. Jacobi identity: ${\left\{ f,\left\{ g,h\right\} \right\} +\left\{ g,\left\{ h,f\right\} \right\} +\left\{ h,\left\{ f,g\right\} \right\} =0}$.

Linearity and skew-symmetry are elementary. The Leibniz rule follows from the corresponding fact about differentiation and the observation that

$\displaystyle \left\{ f,gh\right\} =X_{f}(gh)$

where ${X_{f}}$ is the first order differential operator

$\displaystyle X_{f}=-\eta\left(\nabla f,\nabla\right)=-\nabla f\cdot\mathcal{J}\nabla.$

The Jacobi identity follows from the equality of second order partial derivatives and can be verified by a somewhat involved but direct computation.
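The Leibniz rule and the Jacobi identity can also be checked symbolically. The following sketch (assuming the `sympy` library; the observables ${f,g,h}$ are arbitrary illustrative choices) does this for ${n=1}$ in the lecture's sign convention ${\left\{ f,g\right\} =-\nabla f\cdot\mathcal{J}\nabla g}$.

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    # Poisson bracket in the lecture's convention,
    # {f, g} = -eta(grad f, grad g) = -(f_q g_p - f_p g_q), for n = 1.
    return -(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q))

# Three concrete (arbitrarily chosen) observables:
f = q**2 * p
g = sp.sin(q) + p**3
h = q * p

# Leibniz rule: {f, gh} = g{f, h} + {f, g}h
leibniz = pb(f, g * h) - (g * pb(f, h) + pb(f, g) * h)
assert sp.simplify(leibniz) == 0

# Jacobi identity: {f,{g,h}} + {g,{h,f}} + {h,{f,g}} = 0
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
assert sp.simplify(jacobi) == 0
print("Leibniz and Jacobi verified")
```

Of course a symbolic check on particular observables is not a proof, but it is a useful way to catch sign errors in the bracket convention.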

An important observation is that the evolution ${U_{t}}$ preserves the Poisson bracket,

$\displaystyle \left\{ U_{t}f,U_{t}g\right\} =U_{t}\left(\left\{ f,g\right\} \right). \ \ \ \ \ (3)$

To verify (3) let ${h_{t}=\left\{ U_{t}f,U_{t}g\right\} }$ and compute

$\displaystyle \frac{\partial h_{t}}{\partial t}=\left\{ \left\{ H,U_{t}f\right\} ,U_{t}g\right\} +\left\{ U_{t}f,\left\{ H,U_{t}g\right\} \right\} =\left\{ H,\left\{ U_{t}f,U_{t}g\right\} \right\} =\left\{ H,h_{t}\right\}$

by skew-symmetry and the Jacobi identity. Thus ${h_{t}}$ satisfies the evolution equation for an observable! Since ${h_{0}=\left\{ f,g\right\} }$, (3) follows. The evolution ${U_{t}}$ preserves all the other algebraic structures of the observable algebra:

$\displaystyle U_{t}(af+bg)=aU_{t}f+bU_{t}g,\quad\text{and }U_{t}\left(fg\right)=U_{t}f\ U_{t}g.$

States and mean values

The state of a classical system is specified by a point ${\mu\in\mathcal{M}}$. It is useful to introduce a notion of state that allows for some uncertainty as to the specific point in phase space. Imagine that we measure a system, but that our measurements are subject to some errors. We shall think of this as resulting in a probability density ${\rho\left(\mu\right)}$ on phase space such that for any ${E\subset\mathcal{M}}$

$\displaystyle \int_{E}\rho\left(\mu\right)d\mu=\text{ Probability that \ensuremath{\mu\in E}.}$

For example, if we measure the system to be in a configuration ${\mu_{0}}$ subject to some errors, it might be natural to take

$\displaystyle \rho\left(\mu\right)\propto e^{-\sigma\left|\mu-\mu_{0}\right|^{2}}.$

In statistical physics one considers other distributions, such as the Gibbs distribution ${\rho\propto e^{-\beta H}}$ where ${H}$ is the Hamiltonian and ${\beta=\frac{1}{k_{B}T}}$ with ${T}$ the temperature and ${k_{B}=}$ Boltzmann’s constant.

More generally we may think of a state as a probability measure ${\omega}$ on phase space, which is a map from a suitable collection of subsets of phase space (including all the open sets) to ${\left[0,1\right]}$ with the properties

$\displaystyle \omega\left(\bigcup_{j=1}^{\infty}E_{j}\right)=\sum_{j=1}^{\infty}\omega\left(E_{j}\right)\quad\text{ if }E_{i}\cap E_{j}=\emptyset\text{ for }i\neq j,$

$\displaystyle \omega\left(\emptyset\right)=0,\quad\text{and }\quad\omega\left(\mathcal{M}\right)=1.$

The collection of such measures is quite large and includes many measures that are quite unusual from the standpoint of classical analysis. We need not trouble ourselves with this broad class; we are mostly interested in states of the form

$\displaystyle \omega\left(E\right)=\int_{E}\rho_{\omega}\left(\mu\right)d\mu \ \ \ \ \ (4)$

where ${\rho_{\omega}}$ is a nice function on phase space, and in the pure states, which are of the form

$\displaystyle \omega_{\mu_{0}}\left(E\right)=\begin{cases} 1 & \text{ if }\mu_{0}\in E,\\ 0 & \text{ if }\mu_{0}\not\in E. \end{cases} \ \ \ \ \ (5)$

(Formally, a pure state is of the form (4) with ${\rho_{\omega}\left(\mu\right)=\delta\left(\mu-\mu_{0}\right)}$, where ${\delta}$ is the mythical Dirac delta function.)

Given a state ${\omega}$ and an observable ${f}$, the distribution of ${f}$ in ${\omega}$ is the probability distribution ${\omega_{f}}$ on the real line given by

$\displaystyle \omega_{f}\left(E\right)=\omega\left(\left\{ \mu\middle|f\left(\mu\right)\in E\right\} \right)=\int_{f^{-1}\left(E\right)}\rho_{\omega}\left(\mu\right)d\mu$

for suitable ${E\subset\mathbb{R}}$, the last equality holding for states of the form (4). The measure ${\omega_{f}}$ is determined by its cumulative distribution function

$\displaystyle W_{f}\left(\lambda\right)=\omega_{f}\left((-\infty,\lambda)\right).$

In particular the mean value of ${f}$ is the integral

$\displaystyle \left\langle f\middle|\omega\right\rangle =\int_{-\infty}^{\infty}\lambda\, dW_{f}\left(\lambda\right),$

where the integral may be defined as a Riemann-Stieltjes integral. For states of the form (4) we have

$\displaystyle \left\langle f\middle|\omega\right\rangle =\int_{\mathcal{M}}f\left(\mu\right)\rho_{\omega}\left(\mu\right)d\mu$

and for pure states we have ${\left\langle f\middle|\omega_{\mu_{0}}\right\rangle =f(\mu_{0})}$. (More generally,

$\displaystyle \left\langle f\middle|\omega\right\rangle =\int_{\mathcal{M}}fd\omega \ \ \ \ \ (6)$

where the integral (6) is defined in the sense of Lebesgue.) Note that the mean value is a linear function of the observable.
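For a state of the form (4), the mean-value formula lends itself to Monte Carlo estimation. In the sketch below the standard Gaussian density ${\rho_{\omega}}$ and the observable ${f\left(q,p\right)=q^{2}+p^{2}}$ are illustrative choices, not from the lecture.

```python
import numpy as np

# Mean value <f|omega> = int f(mu) rho_omega(mu) d mu, estimated by
# Monte Carlo for a state of the form (4).  Take n = 1, rho_omega a
# standard Gaussian on R^2, and f(q, p) = q^2 + p^2.
rng = np.random.default_rng(0)
N = 100_000
mu = rng.standard_normal((N, 2))   # samples (q, p) drawn from rho_omega
f = (mu ** 2).sum(axis=1)          # f evaluated on the samples
mean_mc = f.mean()
# Exact value: E[q^2] + E[p^2] = 1 + 1 = 2 for the standard Gaussian.
print(mean_mc)
```

The estimate converges to the exact mean 2 at the usual ${O\left(N^{-1/2}\right)}$ Monte Carlo rate.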

Liouville’s Theorem

We begin with the following

Lemma 1 Let ${G_{t}}$ be the evolution map associated to a Hamiltonian. For each ${\mu\in\mathcal{M}}$ and each ${t\ge0}$

$\displaystyle \det dG_{t}\left(\mu\right)=1$

where ${dG_{t}}$ is the derivative of ${G_{t}}$.

Remark: Recall that the derivative ${dG_{t}\left(\mu\right)}$ is the linear transformation on ${\mathcal{M}}$ defined by

$\displaystyle dG_{t}\left(\mu\right)\cdot\mathbf{h}=\left.\frac{d}{ds}G_{t}\left(\mu+s\mathbf{h}\right)\right|_{s=0}.$

To prove the Lemma we will need some facts about determinants. Recall that for an ${n\times n}$ matrix ${A}$

$\displaystyle \det\left(I+\epsilon A\right)=1+\epsilon\mathrm{tr}A+O(\epsilon^{2}) \ \ \ \ \ (7)$

where ${I}$ denotes the identity matrix and ${\mathrm{tr}}$ the trace, ${\mathrm{tr}A=\sum_{i}A_{i,i}}$. Based on this we have

Proposition 2 Let ${t\mapsto A_{t}}$ be a map from an interval in the real line into the space of ${n\times n}$ matrices. If for some ${t}$, ${A_{t}}$ is invertible and ${\frac{dA_{t}}{dt}}$ exists, then

$\displaystyle \frac{d}{dt}\det A_{t}=\det A_{t}\,\mathrm{tr}\left(A_{t}^{-1}\frac{dA_{t}}{dt}\right).$

Proof: Note that

$\displaystyle \frac{\det A_{t+h}-\det A_{t}}{h}=\det A_{t}\frac{\det\left(I+A_{t}^{-1}\left(A_{t+h}-A_{t}\right)\right)-1}{h},$

where we have used the identity ${\det AB=\det A\det B}$. The proposition follows from (7) and the assumption that ${A_{t}}$ is differentiable at ${t}$. $\Box$
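Proposition 2 is easy to test numerically. In this sketch the matrix curve ${A_{t}=I+tB+t^{2}C}$, with ${B,C}$ random, is an arbitrary illustrative choice; a finite difference of ${\det A_{t}}$ is compared against the trace formula.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

def A(t):
    # A smooth matrix curve, invertible for small t (arbitrary choice).
    return np.eye(4) + t * B + t**2 * C

def dA(t):
    return B + 2 * t * C

t, h = 0.1, 1e-6
# Left side: centered finite difference of det A_t.
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
# Right side: det(A_t) tr(A_t^{-1} dA_t/dt), as in Proposition 2.
rhs = np.linalg.det(A(t)) * np.trace(np.linalg.solve(A(t), dA(t)))
print(abs(lhs - rhs))   # difference is finite-difference error only
```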

Returning to the Lemma we have: Proof: Let ${D_{t}\left(\mu\right)=\det dG_{t}\left(\mu\right)}$. By the proposition we have

$\displaystyle \frac{d}{dt}D_{t}\left(\mu\right)=D_{t}\left(\mu\right)\,\mathrm{tr}\left(dG_{t}\left(\mu\right)^{-1}\frac{d}{dt}dG_{t}\left(\mu\right)\right).$

Putting ${t=0}$, and noting that ${G_{0}}$ is the identity map so that ${D_{0}\left(\mu\right)=1}$ and ${dG_{0}\left(\mu\right)=I}$, we have

$\displaystyle \left.\frac{d}{dt}D_{t}\left(\mu\right)\right|_{t=0}=\mathrm{tr}\left(\left.\frac{d}{dt}dG_{t}\left(\mu\right)\right|_{t=0}\right). \ \ \ \ \ (8)$

Now, by equality of second order partials

$\displaystyle \frac{d}{dt}dG_{t}\left(\mu\right)=d\frac{dG_{t}}{dt}\left(\mu\right),$

so by (1) we have

$\displaystyle \frac{d}{dt}dG_{t}\left(\mu\right)=\mathcal{J}\left(\frac{\partial^{2}H}{\partial\mu_{i}\partial\mu_{j}}\left(\mu\right)\right)_{i,j=1}^{2n},$

where we have abused notation by identifying a linear transformation and its matrix. Returning to (8) we find

$\displaystyle \left.\frac{d}{dt}D_{t}\left(\mu\right)\right|_{t=0}=\sum_{i=1}^{n}\left(\frac{\partial^{2}H\left(\mu\right)}{\partial\mu_{n+i}\partial\mu_{i}}-\frac{\partial^{2}H\left(\mu\right)}{\partial\mu_{i}\partial\mu_{n+i}}\right)=0.$

For ${t\neq0}$ we have

$\displaystyle D_{t+s}\left(\mu\right)=D_{s}\left(G_{t}\mu\right)D_{t}\left(\mu\right)$

by the group property ${G_{t+s}=G_{s}\circ G_{t}}$. Differentiating with respect to ${s}$ at ${s=0}$, and using the ${t=0}$ computation above (applied at the point ${G_{t}\mu}$), gives ${\frac{d}{dt}D_{t}\left(\mu\right)=0}$; since ${D_{0}\left(\mu\right)=1}$, the result follows. $\Box$

Theorem 3 (Liouville’s Theorem) Let ${\Omega\subset\mathcal{M}}$ be a domain with finite volume and let ${\Omega(t)=G_{t}(\Omega)}$ be the image of ${\Omega}$ under a Hamiltonian flow. Then

$\displaystyle \mathrm{Vol}\Omega(t)=\mathrm{Vol}\Omega.$

Proof: This follows from the change of variables formula and Lemma 1:

$\displaystyle \mathrm{Vol}\Omega(t)=\int_{\Omega(t)}d\mu=\int_{\Omega}\left|\det dG_{t}\left(\mu\right)\right|d\mu=\int_{\Omega}d\mu=\mathrm{Vol}\Omega.$

$\Box$
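Lemma 1 can be checked numerically even for a nonlinear flow. In the sketch below the pendulum Hamiltonian ${H\left(q,p\right)=\frac{p^{2}}{2}-\cos q}$ and all numerical parameters are illustrative choices: the Jacobian ${dG_{t}\left(\mu\right)}$ is approximated by finite differences of an RK4 approximation of the flow, and its determinant comes out equal to 1 up to discretization error.

```python
import numpy as np

# Pendulum: H(q, p) = p^2/2 - cos(q), so J grad H = (p, -sin(q)).
def rhs(mu):
    q, p = mu
    return np.array([p, -np.sin(q)])

def flow(mu, t, steps=2000):
    # RK4 approximation of the flow map G_t.
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(mu)
        k2 = rhs(mu + 0.5 * dt * k1)
        k3 = rhs(mu + 0.5 * dt * k2)
        k4 = rhs(mu + dt * k3)
        mu = mu + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return mu

mu0, t, eps = np.array([1.0, 0.5]), 3.0, 1e-5
# Jacobian dG_t(mu0), column by column, via centered differences.
dG = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = eps
    dG[:, j] = (flow(mu0 + e, t) - flow(mu0 - e, t)) / (2 * eps)
print(np.linalg.det(dG))   # equal to 1 up to discretization error
```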

Evolution of states and Liouville’s equation.

We started off with the flow equation on phase space, which can be seen as an evolution map on pure states. From this we derived the equation of motion for observables via the identification ${U_{t}f(\mu)=f(G_{t}\left(\mu\right))}$. We can think of this, instead, as a map on states where we define

$\displaystyle \left\langle f\middle|L_{t}\omega\right\rangle :=\left\langle U_{t}f\middle|\omega\right\rangle .$

What happens if ${\omega=\omega_{\mu_{0}}}$ is a pure state? Then we have

$\displaystyle \left\langle f\middle|L_{t}\omega_{\mu_{0}}\right\rangle =\left\langle U_{t}f\middle|\omega_{\mu_{0}}\right\rangle =U_{t}f(\mu_{0})=f(G_{t}\mu_{0})=\left\langle f\middle|\omega_{G_{t}\left(\mu_{0}\right)}\right\rangle . \ \ \ \ \ (9)$

That is, ${L_{t}\omega_{\mu_{0}}}$ is the pure state ${\omega_{G_{t}\left(\mu_{0}\right)}}$, as we should expect. What about a more general state of the form (4)? Then we have

$\displaystyle \left\langle f\middle|L_{t}\omega\right\rangle =\int_{\mathcal{M}}f\left(G_{t}\left(\mu\right)\right)\rho_{\omega}(\mu)d\mu=\int_{\mathcal{M}}f\left(\mu\right)\rho_{\omega}\left(G_{-t}\left(\mu\right)\right)d\mu$

where we have used Lemma 1. We conclude that

$\displaystyle \rho_{L_{t}\omega}=\rho_{\omega}\circ G_{-t}. \ \ \ \ \ (10)$

(Note that (9) and (10) suggest the formal identity

$\displaystyle \delta\left(\mu-G_{t}\left(\mu_{0}\right)\right)=\delta\left(G_{-t}\left(\mu\right)-\mu_{0}\right).$

Play with this and understand why it must be true.)

Thus we may introduce the following more general evolution equation

$\displaystyle \frac{d\rho_{t}}{dt}=-\left\{ H,\rho_{t}\right\} \ \ \ \ \ (11)$

for the evolution of the state of the system ${\rho_{t}}$ at time ${t}$. This equation is known as Liouville’s equation and it has solutions ${\rho_{t}=\rho_{0}\circ G_{-t}}$ as in (10).
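One can check directly that ${\rho_{t}=\rho_{0}\circ G_{-t}}$ satisfies (11). The sketch below does this numerically for the harmonic oscillator ${H=\frac{1}{2}\left(q^{2}+p^{2}\right)}$, whose flow is an explicit rotation of phase space; the Gaussian initial density and the test point are illustrative choices.

```python
import numpy as np

# Check d(rho_t)/dt = -{H, rho_t} at one point, where in the lecture's
# convention {f, g} = -grad f . J grad g, so -{H, rho} = grad H . J grad rho.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def G(mu, t):
    # Exact harmonic-oscillator flow: a rotation of phase space.
    c, s = np.cos(t), np.sin(t)
    q, p = mu
    return np.array([c * q + s * p, -s * q + c * p])

def rho0(mu):
    # Gaussian initial density centered at (1, 0) (illustrative choice).
    return np.exp(-np.sum((mu - np.array([1.0, 0.0])) ** 2))

def rho(mu, t):
    # Solution of Liouville's equation: rho_t = rho_0 o G_{-t}.
    return rho0(G(mu, -t))

mu_star, t_star, h = np.array([0.3, -0.7]), 0.9, 1e-5

# Left side: d(rho_t)/dt by centered finite difference.
lhs = (rho(mu_star, t_star + h) - rho(mu_star, t_star - h)) / (2 * h)

# Right side: grad H . J grad rho_t, with grad H(mu) = mu here.
eq, ep = np.array([h, 0.0]), np.array([0.0, h])
grad_rho = np.array([
    (rho(mu_star + eq, t_star) - rho(mu_star - eq, t_star)) / (2 * h),
    (rho(mu_star + ep, t_star) - rho(mu_star - ep, t_star)) / (2 * h),
])
rhs_val = mu_star @ (J @ grad_rho)
print(abs(lhs - rhs_val))   # small: finite-difference error only
```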

Two pictures

We have introduced two pictures of the classical evolution

1. The state, or Liouville, picture, in which the state of the system evolves according to (11) and observables remain constant. The mean value of an observable ${f}$ at time ${t}$ is given by ${\int_{\mathcal{M}}f\left(\mu\right)\rho_{t}\left(\mu\right)d\mu.}$
2. The observable picture in which the state of the system remains constant and observables evolve according to (2). The mean value of an observable ${f}$ at time ${t}$ is given by ${\int_{\mathcal{M}}U_{t}f\left(\mu\right)\rho(\mu)d\mu}$.
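The two pictures assign the same mean value to every observable, and this can be verified numerically. In the sketch below the harmonic oscillator, the Gaussian state, and the observable ${f\left(q,p\right)=q}$ are all illustrative choices; the two integrals are computed by grid quadrature.

```python
import numpy as np

# Compare the two pictures for the harmonic oscillator:
#   Liouville:   int f(mu) rho_t(mu) d mu,        rho_t = rho_0 o G_{-t}
#   observable:  int (U_t f)(mu) rho_0(mu) d mu,  U_t f = f o G_t
def G(Q, P, t):
    # Exact flow: rotation of phase space (acts elementwise on grids).
    c, s = np.cos(t), np.sin(t)
    return c * Q + s * P, -s * Q + c * P

def rho0(Q, P):
    # Normalized Gaussian centered at (1, 0); normalization is pi.
    return np.exp(-((Q - 1.0) ** 2 + P ** 2)) / np.pi

def f(Q, P):
    return Q   # observable: the position coordinate

# Quadrature grid over a box capturing essentially all the Gaussian mass.
x = np.linspace(-6, 6, 801)
Q, P = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
t = 1.3

Qb, Pb = G(Q, P, -t)   # G_{-t}, for the Liouville picture
liouville = np.sum(f(Q, P) * rho0(Qb, Pb)) * dA
Qf, Pf = G(Q, P, t)    # G_t, for the observable picture
heisenberg = np.sum(f(Qf, Pf) * rho0(Q, P)) * dA
print(liouville, heisenberg)   # both equal cos(1.3) up to quadrature error
```

The common value is ${\cos t}$ here because the center of the Gaussian state moves along the orbit ${G_{t}\left(1,0\right)=\left(\cos t,-\sin t\right)}$.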

These two pictures will stay with us when we move to quantum mechanics. The first is known as the Schrödinger picture and the second as the Heisenberg picture.