Tobin Fricke's Lab Notebook

Github "gist" integration with Livejournal [Feb. 27th, 2013|06:28 pm]
 [ Tags | github, livejournal ]

It turns out that Livejournal recognizes github gist URLs, and will "embed" the resulting gist into an LJ entry.

Here's the relevant changelog entry: http://changelog.livejournal.com/16367242.html (it's all done in JavaScript).

Example:

https://gist.github.com/3861703

From Matlab to Python [Feb. 27th, 2013|05:38 pm]
 [ Tags | matlab, python ]

Since college, Matlab has been my go-to environment for all kinds of numerical simulations, analysis, and plotting. Matlab is an exceptionally well-designed environment and I love it.

The downside to Matlab is that it is expensive. Happily my employer provides not only a Matlab license but also licenses to the many add-on toolboxes that quickly become indispensable. Nonetheless, since the license server is on the network, work grinds to a halt when sitting with the laptop on a train or an airplane or anywhere else without network access.

I've heard tell that many of the useful features of Matlab have been ported over to Python. The idea of a free, open-source environment that's just as good as Matlab is very appealing. To be honest, I'm not really sure what's needed to make this Python environment work, or really, what all the pieces are. I've heard of matplotlib (for making Matlab-like plots), SciPy, NumPy, and Pylab. Here's a first notebook entry in trying to sort all of this out.

Here's what I have so far:

First, install "matplotlib" and numerical python ("numpy"). Matplotlib is the package that lets us make nice Matlab-style plots, and numpy contains lots of Matlab-like numerical functions. They work together... somehow. On Ubuntu, installation is just one shell command:
sudo apt-get install python-matplotlib python-numpy
As a first simple task, let's plot an Airy function. Here's my equivalent Matlab script:

% Matlab code to plot a cavity resonance
F = 10;  % Finesse
f = linspace(-0.5, 1.5, 201);
P = 1./(1 + (2/pi) * F^2 * sin(pi*f).^2);
plot(f, P);
xlabel('free spectral ranges');
ylabel('power buildup');
And now the python:
# Python code to plot a cavity resonance
import numpy
F = 10
f = numpy.linspace(-0.5, 1.5, 201)
P = 1 / (1 + (2/numpy.pi) * F**2 * numpy.sin(numpy.pi * f) ** 2)

import matplotlib.pyplot as plt
plt.plot(f, P)
plt.xlabel("free spectral ranges")
plt.ylabel("power buildup")
plt.show()
The package names (numpy and plt) make that code a bit verbose and cumbersome. I'm not sure whether it's considered good style, but it's possible to import numpy and the plotting library into the default namespace. The resulting code is almost exactly the same as Matlab, except the power operator is ** instead of ^ and you need to call show() to make the plot appear. Also, the regular division operator seems to work in Python (instead of Matlab's element-wise ./ operator).
# Python!
from numpy import *
from matplotlib.pyplot import *

F = 10
f = linspace(-0.5, 1.5, 201)
P = 1 / (1 + (2/pi) * F**2 * sin(pi*f)**2)
plot(f, P)
xlabel("free spectral ranges")
ylabel("power buildup")
show()
To run that, I just started the regular python interpreter (by typing "python" at a command prompt) and typed it in by hand.

The results:

That's Matlab's plot window on the left, and Python's Matplotlib on the right. Not bad!

talk: intro to state space [Dec. 10th, 2012|12:02 am]
 [ Tags | control systems, talks ]

At the recent GEO interferometer sensing and control meeting in Hannover, I gave a short talk titled "Introduction to State Space techniques". It's really a very brief intro, whose main purpose is to introduce the state observer/controller structure. You can find the slides and a toy demo Matlab script here: https://github.com/tobin/statespace-intro-talk

Chaos Pendulum [Dec. 7th, 2012|06:56 pm]

A nicely built double pendulum from Dan Busby.

TeX input mode [Oct. 16th, 2012|11:35 pm]
 [ Tags | emacs, tex, unicode ]

I've often wished for an easier way to enter Unicode math symbols, for example by typing the corresponding LaTeX code. The only way I knew was to google for the symbol, which has got to be the most complicated way of entering one.

But I just found out that Emacs has a "TeX input mode", where you just type TeX codes and they magically turn into Unicode symbols.

For example, if you type

\forall x \in R, x^2 \geq 0

you get

∀ x ∈ R, x² ≥ 0

Awesome! If only I could get this input mode system-wide.

Invoke it with M-x set-input-method and then TeX. I found out about this via a Stack Overflow answer.

TIL main() takes a third argument [Oct. 10th, 2012|11:16 am]
 [ Tags | c, programming, unix ]

It turns out that the environment pointer is passed to main() as the third argument—I had no idea! This seems like something I must have read once upon a time in something like The Unix Programming Environment but I seem to have forgotten.
/* This program prints out the environment in KEY=VALUE format,
   one variable per line. */

#include <stdio.h>

int main(int argc, char **argv, char **envp) {
    while (*envp)
        printf("%s\n", *envp++);
    return 0;
}
https://gist.github.com/3861703

poles [Mar. 21st, 2012|11:56 am]
 [ Tags | complex analysis, control systems ]

Bernard Friedland's explanation of the origin of the word "pole" (in Control System Design):
The roots of the denominator [of a rational function] are called the poles of the transfer function because H(s) becomes infinite at these complex frequencies and a contour map of the complex plane appears as if it has poles sticking up from these points. [emphasis mine]

H/(1+HG) [Jul. 31st, 2011|01:35 pm]
 [ Tags | control systems ]

The Barkhausen Stability Criterion is simple, intuitive, and wrong.

http://web.mit.edu/klund/www/weblatex/node4.html

L1 LSC [Jun. 30th, 2011|09:39 pm]
 [ Tags | ligo ]

For future reference, here is a screen-shot of the LIGO Livingston (L1) length sensing and control (LSC) control screen in MEDM, while L1 was in low-noise (detection) mode. There's a full-size version too, also on GitHub.

What this is: this is the control panel for the servos that control the mirrors in LIGO. Inputs come in on the left (from photodiodes). In the middle there are a bunch of filters with complicated transfer functions. Outputs go out to the right, pushing on the mirrors.

on the eigenfunctions and eigenvalues of the fourier transform [Jun. 22nd, 2011|02:37 pm]
 [ Tags | math, physics ]

Recently my interest in the eigenfunctions of the Fourier transform was piqued. The Fourier transform is a linear operator on a space of functions, so it has eigenvectors and eigenvalues: functions that the transform maps to scalar multiples of themselves. I think many people know that "the Fourier transform of a Gaussian is a Gaussian", but the other eigenfunctions are not so well known.
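In fact the Gaussian claim is easy to check numerically. Here's a quick sketch in numpy (the grid, the test frequencies, and the e^(-2πixξ) sign convention are my own arbitrary choices):

```python
import numpy as np

# Sample exp(-pi x^2) on a grid wide enough that the tails are negligible.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

# Continuous Fourier transform F(xi) = integral of f(x) exp(-2 pi i x xi) dx,
# approximated at a few frequencies by simple quadrature.
xi = np.linspace(-2, 2, 9)
F = np.array([np.sum(f * np.exp(-2j * np.pi * x * k)) * dx for k in xi])

# With this convention the transform is again exp(-pi xi^2).
print(np.max(np.abs(F - np.exp(-np.pi * xi**2))))  # essentially zero
```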

It is relatively easy to show that taking the Fourier transform four times in succession is an identity operation. If you remember that taking the complex conjugate in the frequency domain is equivalent to time reversal in the time domain, this is easy to see. Taking the Fourier transform twice gives you time reversal, so taking it four times gives you the identity.

This means that the eigenvalues of the Fourier transform have to obey λ^4 = 1, i.e. the eigenvalues of the Fourier transform are the 4th roots of unity: {1, i, -1, -i} (ref). This, of course, agrees with our knowledge that the Fourier transform is unitary, which is Parseval's theorem: a function and its transform have equal norms.
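The discrete analog is easy to check numerically: numpy's FFT with norm="ortho" is a unitary transform, two applications give index reversal, and four give back the input (a sketch; the vector length and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# The unitary DFT (norm="ortho") is the discrete analog of the unitary
# Fourier transform discussed above.
def F(v):
    return np.fft.fft(v, norm="ortho")

F2 = F(F(x))      # two transforms give reversal: x[(-n) mod N]
F4 = F(F(F2))     # four transforms give the identity

print(np.allclose(F2, np.roll(x[::-1], 1)))  # True
print(np.allclose(F4, x))                    # True

# Unitarity (Parseval): the norm is preserved.
print(np.allclose(np.linalg.norm(F(x)), np.linalg.norm(x)))  # True
```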

With each eigenvalue we can associate the set of eigenfunctions that have that eigenvalue. Call these sets H0, H1, H2, H3. The Fourier transform of a function in Hn is just that function times i^n. We can define the Fourier transform in terms of these sets:

F{f} = |H0⟩⟨H0|f + i |H1⟩⟨H1|f − |H2⟩⟨H2|f − i |H3⟩⟨H3|f

where |Hn⟩⟨Hn| is the projection operator onto the subspace Hn.

It is curious to me that there are only four eigenvalues, and thus these eigenspaces must be very big. There must be very many ways to parametrize each of the families Hn, i.e. very many different bases.

It turns out that one such basis is very familiar to physicists: Hermite-Gauss functions (i.e. a Gaussian multiplied by a Hermite polynomial) are eigenfunctions of the Fourier transform. These show up very often in physics; two particular examples come to mind:

1. the energy eigenstate wavefunctions of the quantum simple harmonic oscillator
2. the Hermite-Gauss modes of laser resonators

In his book "Fourier Analysis" (page 22, available in the Amazon preview; search for "Eigenfunctions"), Javier Duoandikoetxea (what a name!) tells us that the Hermite-Gauss functions provide a complete basis for L^2, i.e. the space of square-integrable functions:

h_n(x) = ((-1)^n / n!) exp(π x^2) (d/dx)^n exp(-2π x^2)

F{h_n} = (-i)^n h_n

Let e_n be a normalized version of h_n:

e_n = h_n / || h_n || = Sqrt[(4π)^(-n) Sqrt[2] n!] h_n

Then {e_n} is an orthonormal basis for L^2, and the Fourier transform may be written

F{f} = sum over n ≥ 0 of (-i)^n ⟨e_n|f⟩ e_n

[Duoandikoetxea says that this is the approach taken by Norbert Wiener in "The Fourier Integral and Certain of its Applications".]
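As a sanity check, here's a quick numerical verification of the eigenvalue and the normalization for n = 1 (a sketch in numpy; the grid and the test frequencies are arbitrary choices of mine):

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# h_1 from the formula above:
#   h_1(x) = -exp(pi x^2) (d/dx) exp(-2 pi x^2) = 4 pi x exp(-pi x^2)
h1 = 4 * np.pi * x * np.exp(-np.pi * x**2)

# Continuous Fourier transform at a few frequencies, by simple quadrature,
# with the convention F(xi) = integral of f(x) exp(-2 pi i x xi) dx.
xi = np.linspace(-2, 2, 9)
H1 = np.array([np.sum(h1 * np.exp(-2j * np.pi * x * k)) * dx for k in xi])

# Eigenvalue check: F{h_1} should be (-i)^1 h_1.
h1_at_xi = 4 * np.pi * xi * np.exp(-np.pi * xi**2)
print(np.max(np.abs(H1 - (-1j) * h1_at_xi)))  # essentially zero

# Norm check against the normalization above: ||h_1||^2 = (4 pi) / sqrt(2).
print(np.sum(h1**2) * dx, 4 * np.pi / np.sqrt(2))
```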

Why do these Hermite-Gauss functions show up in the physical situations mentioned earlier? If we transform to unitless variables, the Hamiltonian of the harmonic oscillator is simply:

H = x^2 + p^2

This equation is symmetric under interchange of x and p. Finding the energy eigenstates means solving for the eigenstates ψ such that E ψ = H ψ, where E is a scalar. Symmetry under interchanging x and p means the coordinate-space and momentum-space representations of the wavefunction must be the same. How do we transform the wavefunction from position space to momentum space? We take the Fourier transform. Thus the energy eigenstates of the harmonic oscillator must also be eigenfunctions of the Fourier transform.

But why these Hermite-Gauss functions in particular? We can take any linear combination of functions all in the same subspace Hn and get an eigenfunction of the Fourier transform, but this won't in general be an energy eigenstate of the SHO. After all, the Fourier transform has only four eigenvalues, but the SHO Hamiltonian has an entire ladder of eigenvalues.

What other properties are needed to uniquely define the Hermite-Gauss functions? (Related question on math.SE: How do I compute the eigenfunctions of the Fourier Transform?)