
finding eigenfunctions via numerical iteration

Suppose we have an operator (or matrix) H and we want to find its eigenvectors and eigenvalues numerically. What do we do?

My first thought was to iterate. After all, an eigenvector/eigenvalue pair (v,λ) is a fixed point of the map

v ↦ (H/λ) v

and iterated maps converge, if they converge at all, to fixed points. I imagined this algorithm:
v ← a random vector
for some number of iterations,
   v ← H v
   v ← v / sqrt(v · v)   # make v a unit vector again
end
It turns out, however, that this will converge to the eigenvector of H with the largest-magnitude eigenvalue (if there is one). This is pretty easy to see if you think of your initial random vector v as a linear combination of eigenvectors of H. On each iteration of the loop, each component of v gets weighted by the corresponding eigenvalue; eventually the component along the eigenvector with the largest-magnitude eigenvalue dominates.
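Here's a minimal Matlab sketch of that loop on a small test matrix (the matrix and the iteration count are arbitrary illustrative choices):
H = [2 1 0; 1 3 1; 0 1 4];   % small symmetric test matrix (illustrative)
v = rand(3,1);               % random starting vector
for ii = 1:100
    v = H*v;                 % apply the map v <-- H v
    v = v/sqrt(v'*v);        % renormalize to a unit vector
end
lambda = v'*H*v              % Rayleigh quotient gives the dominant eigenvalue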

I have in mind quantum mechanics, in which H will usually have a smallest eigenvalue (corresponding to the ground state) but not a greatest one. So, how do we turn this around?

It turns out that if v is an eigenvector of A with eigenvalue λ, then it's also an eigenvector of A⁻¹, with eigenvalue 1/λ. This is also easy to see:
A v = λ v               definition of eigenvector/eigenvalue
A⁻¹ A v = λ A⁻¹ v       multiply both sides by A⁻¹
A⁻¹ v = (1/λ) v         rearrange
So to find the ground-state eigenfunction of H, we can simply apply the iteration algorithm to H⁻¹: the smallest eigenvalue of H corresponds to the largest eigenvalue of H⁻¹ (at least when the eigenvalues are all positive), so the iteration now converges to the ground state.
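As a quick sanity check, here's the same loop turned around (same illustrative matrix as above); note that Matlab's backslash solves the linear system H w = v without ever forming inv(H) explicitly:
H = [2 1 0; 1 3 1; 0 1 4];   % illustrative test matrix
v = rand(3,1);
for ii = 1:100
    v = H \ v;               % apply v <-- H^-1 v by solving a linear system
    v = v/sqrt(v'*v);        % renormalize
end
lambda = v'*H*v              % now approximates the smallest eigenvalue of H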

How do we find the next highest eigenvalue and corresponding eigenvector (the excited states)? Let's consider only the case that H is hermitian, as are all operators corresponding to observables in quantum mechanics. For a hermitian matrix, we know that eigenvectors corresponding to distinct eigenvalues are orthogonal.

This property allows us to bootstrap ourselves and climb up the ladder of eigenvalues. To find the next higher eigenvalue of H after we've found all the lower ones, we just have to add to our iteration the additional constraint that we want v to be orthogonal to all the earlier eigenvectors. This just means that we subtract the projection onto each of them.

This leads to the following Matlab code:
for jj=1:N_eigenvectors_to_find
    % start with a random vector
    v = rand(size(x));
    for ii=1:N_iterations
        % Apply the map v <-- Hinv v
        v = Hinv * v;
        % Subtract the projections of this vector onto
        %  all previously found eigenvectors
        for kk=1:(jj-1)
            v = v - vs(:,kk) * vs(:,kk)' * v;
        end
        % Make this a unit vector by dividing by its length
        v = v/sqrt(v'*v);
    end
    % Store the newly found eigenvector in our list
    vs(:,jj) = v;
end
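The loop above stores the eigenvectors but not the eigenvalues; one way to recover them afterwards is the Rayleigh quotient (a sketch, assuming H and vs as above):
for jj = 1:N_eigenvectors_to_find
    lambda(jj) = vs(:,jj)' * H * vs(:,jj);   % Rayleigh quotient; vs(:,jj) is a unit vector
end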
I just used this algorithm to find the first several eigenstates of several simple problems in one-dimensional quantum mechanics, with great satisfaction. The one part that makes me a little queasy is the need to invert H, but I have an idea for a scheme utilizing the variational principle to circumvent that.
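For reference, here is a sketch of how such a 1D Hamiltonian could be set up with finite differences; the harmonic potential, grid, and units (hbar = m = 1) are illustrative assumptions, not necessarily the problems I ran:
N  = 200;                        % number of grid points (illustrative)
x  = linspace(-10, 10, N)';      % spatial grid
dx = x(2) - x(1);
% three-point finite-difference approximation to d^2/dx^2
D2 = (diag(ones(N-1,1),1) - 2*eye(N) + diag(ones(N-1,1),-1)) / dx^2;
H    = -D2/2 + diag(x.^2/2);     % H = -(1/2) d^2/dx^2 + V(x), with V = x^2/2
Hinv = inv(H);                   % input to the iteration loop above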



Edit: I should add that there is a built-in Matlab function that accomplishes this and more, doubtless via better algorithms: eigs.
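For example, the 'sm' option asks eigs for the smallest-magnitude eigenvalues:
[V, D] = eigs(H, 4, 'sm');   % columns of V are eigenvectors, diag(D) the eigenvalues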
Tags: linear algebra, matlab, quantum mechanics