Open any introductory quantum mechanics textbook and you'll find an encyclopedia of the analytic and algebraic tooling useful in solving quantum mechanical problems. But why not numerical methods? I think ever since I first took introductory quantum mechanics in college, I've wanted to be able to solve for the energy eigenstates of a system numerically—this goes back to something I wrote earlier, that I don't feel I understand a problem until I can program a computer to solve it.
Anyway, in a burst of structured procrastination, I hammered out a short Matlab program to do just this for simple one-dimensional problems. It's great—type in an arbitrary potential, and a few seconds later you can see the eigenstates and eigenvalues.
The "time independent Schrödinger equation" is simply the idea that there exists an operator H whose eigenstates are states of definite energy, and whose eigenvalues are those energies:
H |ψ⟩ = E |ψ⟩
where H is an operator and E is a number. |ψ⟩ is just a way of writing a vector; we might as well write it v or ψ(x), the latter notation making it clear that ψ is a function over a continuous coordinate x (making it an infinite-dimensional vector).
The basic problem is to find a set of eigenvalues E_{n} and eigenstates ψ_{n}(x) given the operator H, which is called the Hamiltonian.
The Hamiltonian (for a one-dimensional problem) typically has the form:
H = p^{2}/(2m) + V(x)
where p is an operator that gives the momentum (making p^2/2m the kinetic energy), and V is an operator that gives the potential energy.
It turns out that
p = −i ℏ (∂/∂x)
where ℏ is the reduced Planck constant, and (∂/∂x) is the derivative with respect to x.
From linear algebra we know that there is a one-to-one mapping between matrices and operators. To solve this sort of problem numerically, we just have to discretize everything and write our operators as matrices.
The main ingredient we need is a discrete derivative. This is not so hard: we can just estimate the derivative by computing the difference between two points and dividing by Δx.
f'(x) = lim (h → 0) of (f(x+h) − f(x)) / h
f'(x) ≈ (f(x+Δx) − f(x)) / Δx
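To see how good this estimate is, here's a quick sanity check (a Python sketch; f(x) = sin(x) is my own example, not from the original program):

```python
import numpy as np

# Forward-difference estimate of f'(x) for f(x) = sin(x), whose exact
# derivative is cos(x). The error shrinks linearly with dx.
dx = 1e-4
x = 1.0
approx = (np.sin(x + dx) - np.sin(x)) / dx
error = abs(approx - np.cos(x))
print(error)  # roughly (dx/2)·|f''(x)|, i.e. around 4e-5 here
```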
Now if you imagine f(x) as a column vector over our discretized values of x,
f = [f(x_{1}) f(x_{2}) f(x_{3}) ... f(x_{N})]^{T}
then it's clear that the derivative operator will look something like:
D = [−1  1  0  0 ...
      0 −1  1  0 ...
      0  0 −1  1 ... ] / Δx
where Δx = x_{n+1} − x_{n}.
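As a sketch of this matrix in action (in Python/NumPy here rather than Matlab; the grid and test function are my own choices), applying it to a sampled sin(x) recovers cos(x) everywhere except the last row, which has no right neighbor:

```python
import numpy as np

N = 200
x = np.linspace(0, 2 * np.pi, N)
dx = x[1] - x[0]

# Forward-difference matrix: -1/dx on the diagonal, +1/dx just above it.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N))) / dx

df = D @ np.sin(x)  # approximates cos(x) in all but the last row
```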
It turns out that this almost works, but we really need the derivative operator to be symmetric. I'm not sure exactly why, but the practical problem is that D^{n} gets shifted farther and farther to the right as we raise the power n with this asymmetric version. A better form is to use a symmetric difference:
f'(x) = lim (h → 0) of (f(x+h) − f(x−h)) / (2h)
f'(x) ≈ (f(x+Δx) − f(x−Δx)) / (2 Δx)
That does work, but unfortunately it doesn't give a very good form of D^{2} for the second derivative, which is what we need. I'll just punt and define the second derivative estimator directly:
f''(x) = lim (h → 0) of (f(x+h) − 2 f(x) + f(x−h)) / h^{2}
f''(x) ≈ (f(x+Δx) − 2 f(x) + f(x−Δx)) / (Δx)^{2}
which, in matrix form, looks like
D^{2} = [ −2  1  0  0
           1 −2  1  0
           0  1 −2  1
           0  0  1 −2 ] / (Δx)^{2}
which is symmetric, as desired.
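A quick check of this operator (again a Python/NumPy sketch, with sin(x) as my own test function): it is symmetric, and applied to sin(x) it returns approximately −sin(x) in the interior, as it should:

```python
import numpy as np

N = 200
x = np.linspace(0, 2 * np.pi, N)
dx = x[1] - x[0]

# Second-difference matrix: -2 on the diagonal, +1 on both off-diagonals.
D2 = (np.diag(np.ones(N - 1), 1)
      - 2 * np.diag(np.ones(N))
      + np.diag(np.ones(N - 1), -1)) / dx**2

assert np.allclose(D2, D2.T)   # symmetric, as desired

d2f = D2 @ np.sin(x)           # approximates -sin(x) away from the edges
```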
(I'm not sure if there's a good form of the first derivative operator D that will give a nice form of the second derivative D^{2}. Perhaps having D and D^{2} is enough to compute nice versions of all higher derivatives.)
One of the simplest problems in one-dimensional quantum mechanics is the "square well", where the potential energy is zero inside of a certain range and nonzero (or infinite) outside:
V(x) = 0 if |x| < L
V(x) = V_{0} otherwise
Now we have everything we need to put together H = −D^{2} + V (ignoring all physical constants; note the minus sign, since p^{2} = −ℏ^{2} (∂/∂x)^{2}). Then we solve for the eigenfunctions and eigenvalues of H, as described in my previous entry.
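I don't have the Matlab file inline here, but the whole recipe fits in a few lines of NumPy as a sketch; the grid, well half-width L, and wall height V0 below are my own choices. Note the kinetic term is −D² up to positive constants, since p² = −ℏ²(∂/∂x)²:

```python
import numpy as np

# Discretize x, build D^2, add the square-well potential, and diagonalize.
N = 500
x = np.linspace(-2.0, 2.0, N)
dx = x[1] - x[0]
L, V0 = 1.0, 1e4   # well half-width and (large but finite) wall height

D2 = (np.diag(np.ones(N - 1), 1)
      - 2 * np.diag(np.ones(N))
      + np.diag(np.ones(N - 1), -1)) / dx**2
V = np.where(np.abs(x) < L, 0.0, V0)

H = -D2 + np.diag(V)            # Hamiltonian, ignoring physical constants
E, psi = np.linalg.eigh(H)      # E ascending; eigenstates in psi's columns

print(E[:4] / E[0])             # approximately 1, 4, 9, 16: E_n ~ n^2
```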
Without further ado, we find (numerically!) the usual eigenfunctions:
And the eigenvalues scale like n^{2}, as expected:
Anyway, try playing with the complete Matlab program. It's fun! Just put in a potential, run the program, and watch the eigenfunctions converge...
Notes:
- I assume this would be totally impractical with problems of greater than one dimension, though of course for separable problems, you can solve for each separable part individually.
- There's no reason, necessarily, to do this all in the x basis. We could instead solve for the Fourier or Laguerre series of the eigenfunction, which might converge faster and/or have a nicer form for the derivative.