finding eigenfunctions via numerical iteration
[Feb. 13th, 2009|02:05 pm]
Suppose we have an operator (or matrix) H and we want to find its eigenvectors and eigenvalues numerically. What do we do?
My first thought was to iterate. After all, an eigenvector/eigenvalue pair
(v,λ) is a fixed point of the map
v maps to (H/λ)v
and iterated maps converge, if they converge at all, to fixed points. I imagined this algorithm:
v ← a random vector
for some number of iterations,
v ← H v
    v ← v / sqrt(v dot v) # make v a unit vector again
end

It turns out, however, that this will converge to the eigenvector of H with the largest-magnitude eigenvalue (if there is a largest eigenvalue). This is pretty easy to see if you think of your initial random vector v as a linear combination of eigenvectors of H. On each iteration of the loop, each component of v gets weighted by the corresponding eigenvalue; eventually the component along the eigenvector with the greatest eigenvalue will dominate.
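For concreteness, here is that loop sketched in NumPy. This is my own toy example, not code from the post: I build a symmetric matrix with known eigenvalues 1, 2, and 5, and the iteration converges to the eigenvector belonging to 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy symmetric H with known eigenvalues 1, 2, 5, hidden in a
# random orthogonal basis (my construction, chosen for illustration).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
H = Q @ np.diag([1.0, 2.0, 5.0]) @ Q.T

v = rng.normal(size=3)
for _ in range(100):
    v = H @ v                  # v <- H v
    v = v / np.linalg.norm(v)  # renormalize to a unit vector

# The Rayleigh quotient of the converged vector approximates
# the largest eigenvalue.
lam = v @ H @ v
print(lam)  # ≈ 5
```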
I have in mind quantum mechanics, in which H will usually have a smallest eigenvalue (corresponding to the ground state) but not a greatest one. So, how do we turn this around?
It turns out that if v is an eigenvector of A with eigenvalue λ, then it's also an eigenvector of A⁻¹, with eigenvalue 1/λ. This is also easy to see:

A v = λ v            definition of eigenvector/eigenvalue
A⁻¹ A v = λ A⁻¹ v    multiply both sides by A⁻¹
A⁻¹ v = (1/λ) v      re-arrange

So to find the ground-state eigenfunction of H, we can simply apply the iteration algorithm to the inverse of H.
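The same NumPy sketch with the inverse (again my own illustration, not the post's code): iterating with H⁻¹ makes the smallest eigenvalue of H dominate, so the loop now converges to the eigenvector belonging to 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy setup: symmetric H with eigenvalues 1, 2, 5.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
H = Q @ np.diag([1.0, 2.0, 5.0]) @ Q.T

# Iterate with the inverse: its largest eigenvalue is 1/1,
# so the smallest eigenvalue of H wins.
Hinv = np.linalg.inv(H)
v = rng.normal(size=3)
for _ in range(100):
    v = Hinv @ v
    v = v / np.linalg.norm(v)

lam = v @ H @ v  # Rayleigh quotient of the converged vector
print(lam)       # ≈ 1, the smallest eigenvalue
```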
How do we find the next highest eigenvalue and corresponding eigenvector (the excited states)? Let's consider only the case that H is hermitian, as are all operators corresponding to observables in quantum mechanics. For a hermitian matrix, we know that eigenvectors corresponding to distinct eigenvalues are orthogonal.
This property allows us to bootstrap ourselves and climb up the ladder of eigenvalues. To find the next higher eigenvalue of H after we've found all the lower ones, we just have to add to our iteration the additional constraint that we want v to be orthogonal to all the earlier eigenvectors. This just means that we subtract the projection onto each of them.
This leads to the following Matlab code:
% find nstates eigenvectors, with niters iterations each
for jj = 1:nstates
    % start with a random vector (x is the spatial grid)
    v = rand(size(x));
    for it = 1:niters
        % Apply the map v <-- Hinv v
        v = Hinv * v;
        % Subtract the projections of this vector onto
        % all previously found eigenvectors
        for kk = 1:jj-1
            v = v - vs(:,kk) * (vs(:,kk)' * v);
        end
        % Make this a unit vector by dividing by its length
        v = v/sqrt(v'*v);
    end
    % Store the newly found eigenvector in our list
    vs(:,jj) = v;
end

I just used this algorithm to find the first several eigenstates of several simple problems in one-dimensional quantum mechanics, with great satisfaction. The one part that makes me a little queasy is the need to invert H, but I have an idea for a scheme using the variational principle to circumvent that.
Edit: I should add that there is a built-in function that accomplishes this and more, doubtless via better algorithms.
Oh nice! I was just looking for a way to figure out the second largest eigenvector through this type of process.
Oh wait, you said orthogonal eigenvectors. That may not work for me after all.
Yeah, I think H has to be hermitian for this to work.
I'm friending you.
Hello, welcome! Btw, how'd you happen to come across this journal?
's journal, somehow.
(I hope we can turn off this recaptcha for every comment I make)