My first thought was to iterate. After all, an eigenvector/eigenvalue pair

(**v**, λ)

is a fixed point of the map **v** ↦ (H/λ)**v**

and iterated maps converge, if they converge at all, to fixed points. I imagined this algorithm:

```
v ← a random vector
for some number of iterations
    v ← Hv
    v ← v / sqrt(v·v)   # make v a unit vector again
end
```

It turns out, however, that this will converge to the eigenvector of H with the largest-magnitude eigenvalue (if there is a largest eigenvalue). This is pretty easy to see if you think about your initial random vector **v** being written as a linear combination of eigenvectors of H. On each iteration of the loop, each component of **v** gets weighted by the corresponding eigenvalue; eventually the eigenvector component with the greatest eigenvalue will dominate.
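In Python/NumPy the basic iteration looks like this (the small symmetric matrix standing in for H is my own illustrative example):

```python
import numpy as np

# A small symmetric example matrix standing in for H
H = np.array([[2.0, 1.0],
              [1.0, 3.0]])

rng = np.random.default_rng(0)
v = rng.standard_normal(2)      # start with a random vector
for _ in range(100):            # some number of iterations
    v = H @ v                   # v <- H v
    v = v / np.sqrt(v @ v)      # make v a unit vector again

# v has converged to the eigenvector with the largest-magnitude
# eigenvalue; the Rayleigh quotient recovers that eigenvalue
lam = v @ H @ v
```

For this matrix the eigenvalues are (5 ± √5)/2, and the iteration picks out the larger one, ≈ 3.618.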

I have in mind quantum mechanics, in which H will usually have a smallest eigenvalue (corresponding to the ground state) but not a greatest one. So, how do we turn this around?

It turns out that if **v** is an eigenvector of A with eigenvalue λ, then it's also an eigenvector of A^{-1}, with eigenvalue 1/λ. This is also easy to see:

A**v** = λ**v** (definition of eigenvector/eigenvalue)

A^{-1}A**v** = λA^{-1}**v** (multiply both sides by A^{-1})

A^{-1}**v** = (1/λ)**v** (re-arrange)

So to find the ground-state eigenfunction of H, we can simply apply the iteration algorithm to the *inverse* of H.
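Concretely, iterating with H^{-1} instead of H converges to the eigenvector whose eigenvalue has the smallest magnitude, which for a positive-definite H is the ground state. A minimal NumPy sketch (the matrix is again a toy example of mine):

```python
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Hinv = np.linalg.inv(H)

rng = np.random.default_rng(1)
v = rng.standard_normal(2)      # start with a random vector
for _ in range(200):
    v = Hinv @ v                # v <- H^{-1} v
    v = v / np.sqrt(v @ v)      # renormalize

# v is now the "ground-state" eigenvector; its H-eigenvalue
# is the smallest one, (5 - sqrt(5))/2 for this matrix
lam = v @ H @ v
```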

How do we find the next highest eigenvalue and corresponding eigenvector (the excited states)? Let's consider only the case that H is hermitian, as are all operators corresponding to observables in quantum mechanics. For a hermitian matrix, we know that eigenvectors corresponding to distinct eigenvalues are orthogonal.

This property allows us to bootstrap ourselves and climb up the ladder of eigenvalues. To find the next higher eigenvalue of H after we've found all the lower ones, we just have to add to our iteration the additional constraint that we want **v** to be orthogonal to all the earlier eigenvectors. This just means that we subtract the projection onto each of them.
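The projection subtraction is just a Gram–Schmidt step: if **u** is a previously found unit eigenvector, then **v** − **u**(**u**·**v**) is orthogonal to **u**. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(3)
u = u / np.sqrt(u @ u)      # a previously found unit eigenvector
v = rng.standard_normal(3)  # the current iterate

v = v - u * (u @ v)         # subtract the projection of v onto u
# now u . v = 0, up to round-off
```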

This leads to the following Matlab code:

```matlab
for jj=1:N_eigenvectors_to_find
    % start with a random vector
    v = rand(size(x));
    for ii=1:N_iterations
        % Apply the map v <-- A v
        v = Hinv * v;
        % Subtract the projections of this vector onto
        % all previously found eigenvectors
        for kk=1:(jj-1)
            v = v - vs(:,kk) * vs(:,kk)' * v;
        end
        % Make this a unit vector by dividing by its length
        v = v/sqrt(v'*v);
    end
    % Store the newly found eigenvector in our list
    vs(:,jj) = v;
end
```

I just used this algorithm to find the first several eigenstates of several simple problems in one-dimensional quantum mechanics, with great satisfaction. The one part that makes me a little queasy is the need to invert H, but I have an idea for a scheme utilizing the variational principle to circumvent that.
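Here is the same algorithm in Python/NumPy, applied to a finite-difference 1D harmonic oscillator (ħ = m = ω = 1); the grid size and iteration counts are my own illustrative choices, not from the original:

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + x^2/2 on a uniform grid
n = 200
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)
Hinv = np.linalg.inv(H)

N_eig, N_iter = 3, 300
vs = np.zeros((n, N_eig))
rng = np.random.default_rng(3)
for jj in range(N_eig):
    v = rng.standard_normal(n)           # start with a random vector
    for _ in range(N_iter):
        v = Hinv @ v                     # apply the map v <- H^{-1} v
        for kk in range(jj):             # project out earlier eigenvectors
            v -= vs[:, kk] * (vs[:, kk] @ v)
        v /= np.sqrt(v @ v)              # make v a unit vector again
    vs[:, jj] = v                        # store the new eigenvector

energies = [vs[:, j] @ H @ vs[:, j] for j in range(N_eig)]
# energies approach 0.5, 1.5, 2.5 up to discretization error
```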

**Edit:** I should add that there is a built-in function, `eigs`, that accomplishes this and more, doubtless via better algorithms.
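For what it's worth, the Python analogue is `scipy.sparse.linalg.eigsh`, which finds a few extreme eigenpairs of a hermitian matrix without forming an explicit inverse (the diagonal test matrix below is my own toy example):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Toy hermitian matrix with eigenvalues 1..10
H = np.diag(np.arange(1.0, 11.0))

# k eigenvalues with smallest algebraic value ('SA')
w, v = eigsh(H, k=3, which='SA')
# np.sort(w) is close to [1., 2., 3.]
```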