numerical quantum mechanics in one dimension
[Feb. 14th, 2009 | 02:43 pm]
Tobin Fricke's Lab Notebook
Open any introductory quantum mechanics textbook and you'll find an encyclopedia of the analytic and algebraic tooling useful in solving quantum mechanical problems. But why not numerical methods? I think ever since I first took introductory quantum mechanics in college, I've wanted to be able to solve for the energy eigenstates of a system numerically—this goes back to something I wrote earlier, that I don't feel I understand a problem until I can program a computer to solve it.
Anyway, in a burst of structured procrastination, I hammered out a short Matlab program to do just this for simple one-dimensional problems. It's great—type in an arbitrary potential, and a few seconds later you can see the eigenstates and eigenvalues.
The "time independent Schrödinger equation" is simply the idea that there exists an operator H whose eigenstates are states of definite energy, and whose eigenvalues are those energies:

H |ψ> = E |ψ>

where H is an operator and E is a number. |ψ> is just a way of writing a vector; we might as well write it ψ(x), the latter notation making it clear that ψ is a function over a continuous coordinate x (making it an infinite-dimensional vector).
The basic problem is to find a set of eigenvalues E_n and eigenstates ψ_n(x) given the operator H, which is called the Hamiltonian. The Hamiltonian (for a one-dimensional problem) typically has the form:

H = p²/(2m) + V(x)

where p is an operator that gives the momentum (making p²/(2m) the kinetic energy), and V is an operator that gives the potential energy. It turns out that

p = -iℏ (∂/∂x)

where ℏ is the reduced Planck constant, and (∂/∂x) is the derivative with respect to x.
From linear algebra we know that (in finite dimensions) there is a one-to-one mapping between matrices and linear operators. To solve these sorts of problems numerically, we just have to discretize everything and write our operators as matrices.
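The matrix picture can be made concrete with a tiny example (a sketch in Python/NumPy rather than the post's Matlab; the 3×3 matrix is an arbitrary symmetric "operator"):

```python
import numpy as np

# A toy "operator" on a 3-dimensional space, written as a symmetric matrix.
H = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# eigh returns eigenvalues (ascending) and orthonormal eigenvectors (columns).
E, psi = np.linalg.eigh(H)

# Check the defining relation H |psi_n> = E_n |psi_n> for each eigenpair.
for n in range(3):
    assert np.allclose(H @ psi[:, n], E[n] * psi[:, n])
```

Solving the discretized Schrödinger equation is exactly this, just with a much bigger matrix.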
The main ingredient we need is a discrete derivative. This is not so hard: we can just estimate the derivative by computing the difference between two points and dividing by Δx.
f'(x) = lim (h → 0) of (f(x+h) - f(x)) / h
f'(x) ≈ (f(x+Δx) - f(x))/(Δx)
Now if you imagine f(x) as a column vector over our discretized values of x,

f = [f(x_1) f(x_2) f(x_3) ... f(x_N)]^T

then it's clear that the derivative operator will look something like:

D = [-1  1  0  0 ...
      0 -1  1  0 ...
      0  0 -1  1 ... ] / Δx

where Δx = x_{n+1} - x_n.
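As a quick sanity check, here is that one-sided difference matrix in NumPy (grid size and spacing are arbitrary choices), applied to a linear function, where it should recover the slope:

```python
import numpy as np

# Hypothetical grid: N points spaced dx apart.
N, dx = 5, 0.1
x = dx * np.arange(N)

# One-sided difference matrix: row n computes (f(x_{n+1}) - f(x_n)) / dx.
D = (-np.eye(N) + np.diag(np.ones(N - 1), k=1)) / dx
D[-1, :] = 0                # the last point has no right neighbor

f = 3.0 * x + 1.0           # a linear function, so f' = 3 everywhere
df = D @ f
print(df[:-1])              # each entry is 3 (up to rounding)
```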
It turns out that this almost works, but we really need a better-behaved derivative operator: the momentum p = -iℏD ought to be Hermitian, which requires D to be antisymmetric, and the one-sided version isn't. The practical problem is that D^N gets shifted farther and farther to the right as we raise N with this asymmetric version. A better form is to use a symmetric (centered) difference:
f'(x) = lim (h → 0) of (f(x+h) - f(x-h)) / (2h)
f'(x) ≈ (f(x+Δx) - f(x-Δx))/(2 Δx)
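The centered difference in matrix form looks like this (a NumPy sketch with an arbitrary small grid). One nice property worth checking: it is antisymmetric, so -iℏD is Hermitian, and it is exact on quadratics:

```python
import numpy as np

N, dx = 7, 0.5
x = dx * np.arange(N)

# Centered-difference matrix: row n computes (f(x_{n+1}) - f(x_{n-1})) / (2 dx).
# Note D.T == -D (antisymmetric), so p = -i*hbar*D comes out Hermitian.
D = (np.diag(np.ones(N - 1), k=1) - np.diag(np.ones(N - 1), k=-1)) / (2 * dx)

# The centered difference is exact for quadratics: for f = x^2, f' = 2x.
f = x ** 2
df = D @ f
print(df[1:-1] - 2 * x[1:-1])   # zeros (up to rounding) at interior points
```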
That does work, but unfortunately it doesn't give a very good form of D² for the second derivative, which is what we need. I'll just punt and define the second-derivative estimator directly:

f''(x) = lim (h → 0) of (f(x+h) - 2 f(x) + f(x-h)) / h²

f''(x) ≈ (f(x+Δx) - 2 f(x) + f(x-Δx)) / (Δx)²

which, in matrix form, looks like

D² = [ -2   1   0   0
        1  -2   1   0
        0   1  -2   1
        0   0   1  -2 ] / (Δx)²

which is symmetric, as desired.
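Building that matrix directly from the three-point stencil and checking its properties (a NumPy sketch; grid size and spacing are arbitrary):

```python
import numpy as np

N, dx = 6, 0.25
x = dx * np.arange(N)

# Second-difference matrix: row n computes
# (f(x_{n+1}) - 2 f(x_n) + f(x_{n-1})) / (dx)^2.
D2 = (np.diag(np.ones(N - 1), k=1)
      - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), k=-1)) / dx ** 2

print(np.allclose(D2, D2.T))     # True: symmetric, as desired

# The stencil is exact for quadratics: for f = x^2, f'' = 2 everywhere.
f = x ** 2
print((D2 @ f)[1:-1])            # interior entries are 2 (up to rounding)
```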
(I'm not sure if there's a good form of the first-derivative operator D that will give a nice form of the second derivative D². Perhaps having D and D² is enough to compute nice versions of all higher derivatives.)
One of the simplest problems in one-dimensional quantum mechanics is the "square well", where the potential energy is zero inside of a certain range and non-zero (or infinite) outside:
V(x) = 0    if |x| < L
V(x) = V_0  otherwise

Now we have everything we need to put together
H = -D² + V (ignoring all physical constants). Then we solve for the eigenfunctions and eigenvalues of H, as described in my previous entry.
Without further ado, we find (numerically!) the usual eigenfunctions:
And the eigenvalues scale like n², as expected:
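The whole recipe fits in a few lines. Here is a minimal end-to-end sketch in Python/NumPy (the post's program is Matlab), for the infinite well: the grid itself is the box, so ψ is implicitly zero outside, and the grid size N and width L are arbitrary choices:

```python
import numpy as np

N, L = 200, 1.0
dx = L / (N + 1)
x = dx * np.arange(1, N + 1)            # interior points only

# Symmetric second-difference matrix on the interior grid.
D2 = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx ** 2
V = np.zeros(N)                          # V = 0 inside the well

H = -D2 + np.diag(V)                     # ignoring hbar^2/2m, as in the text
E, psi = np.linalg.eigh(H)               # energies ascending, states in columns

# In these units the exact energies are (n pi / L)^2, i.e. they scale like n^2.
n = np.arange(1, 6)
print(E[:5] / (n * np.pi / L) ** 2)      # each ratio is close to 1
```

The columns of `psi` are the familiar sine-shaped eigenfunctions, and the ratios printed at the end confirm the n² scaling numerically.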
Anyway, try playing with the complete Matlab program. It's fun! Just put in a potential, run the program, and watch the eigenfunctions converge...
- I assume this would be totally impractical for problems in more than one dimension, though of course for separable problems you can solve each separable part individually.
- There's no reason, necessarily, to do this all in the x basis. We could instead solve for the Fourier or Laguerre series of the eigenfunction, which might converge faster and/or have a nicer form for the derivative.
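The Fourier-basis remark is easy to illustrate (a NumPy sketch, assuming a periodic domain, which differs from the hard-wall well above): in k-space the second derivative is just multiplication by -k², so the FFT gives a spectrally accurate derivative with no finite-difference matrix at all.

```python
import numpy as np

# Grid size and domain length are arbitrary choices here.
N, L = 64, 2 * np.pi
x = L / N * np.arange(N)
k = 2 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers

f = np.sin(x)
d2f = np.real(np.fft.ifft(-k ** 2 * np.fft.fft(f)))

print(np.max(np.abs(d2f + np.sin(x))))   # near machine precision: (sin x)'' = -sin x
```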