Homework #4: "Look ma! I computed my first eigenvalue all on my very own"
Posted 3/5; Due 3/24
Background
We've been talking about methods for computing eigenvalues and eigenvectors
of symmetric matrices. As an interesting application, we noticed that we
could compute the vibrational modes of a thin plate as an eigenproblem. See,
for example, the Wikipedia article on
the Helmholtz
equation.
An interesting question comes from the following observation: if I know the
shape of the plate, I can compute the eigenvalues of the associated Helmholtz
equation (the frequencies of the vibrations). So, if I know the eigenvalues,
can I compute the shape? That is, a violin sounds different than a guitar
sounds different than a piano even if they're all playing middle "C". Are
there any two distinct instrument shapes that sound the same? (See also the
Wikipedia article,
"Hearing
the shape of a drum".)
We're going to discretize the Helmholtz equation, and find the eigenvalues
and eigenvectors of that discrete problem instead. The discrete problem will
be a fair representation of the continuum only for the lowest frequency
eigenmodes. Since these also have the smallest wave number (the longest
wavelength), and the eigenvalue goes as the square of the wave number, these
modes have the smallest eigenvalues. (The square of the wave number is
non-negative, and there are no zero modes, so these are all positive and
"smallest" means "closest to zero".)
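To make the "eigenvalue goes as the square of the wave number" claim concrete in one dimension: for a string of length L with clamped ends, the modes and eigenvalues are explicit, and the eigenvalue is exactly the squared wave number:

```latex
u_n(x) = \sin(k_n x), \qquad k_n = \frac{n\pi}{L}, \quad n = 1, 2, 3, \ldots
```

```latex
-u_n''(x) = k_n^2\, u_n(x)
\quad\Longrightarrow\quad
\lambda_n = k_n^2 = \frac{n^2 \pi^2}{L^2} > 0 .
```

The plate problem is the 2-D analogue; the eigenvalues are still positive and still grow with the wave number squared.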
The software used in this homework consists of:
- Triangle, a
(constrained) Delaunay mesher for polygonal domains
by Jonathan Shewchuk;
sample calls are included as the last line of every ".poly" file; instructions
are on the Triangle home page
- some homegrown Perl to process the mesh and spit out a mass matrix
- some homegrown software (yours and mine) in Matlab to read in the matrices
and spit out eigenvalues and eigenvectors
- some homegrown Perl to read the eigenvectors and mesh and spit out
pretty pictures in EPS
Questions
The basics
- Code the power method, deflation, and the inverse power method. (If your
notes don't suffice, talk to me and/or consult the textbook.) I've provided
an API to Matlab; since so many constructs are built into Matlab (sparse
matrices, matrix-vector multiply, solving (sparse) linear systems), it'd
probably just be easiest to work in that environment. (But if you wanna work
with something else, lemme know, and I can help.)
- Calculate the six smallest eigenvalues of the matrices associated with the
lowest-resolution discretizations of the isospectral plates. Verify that the
eigenvalues of the two plates are the same (to within your chosen error
tolerance). Plot the associated eigenmodes.
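The homework expects these routines in Matlab, but here's a minimal pure-Python sketch of the three methods on a tiny dense symmetric matrix, just to pin down the logic. All function names are my own, not part of the provided API; a real implementation should use sparse matrices and a sparse solver.

```python
# Sketch only: power method, deflation, and inverse power method for a
# small dense symmetric matrix, stored as a list of row lists.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm(x):
    return sum(xi * xi for xi in x) ** 0.5

def power_method(A, x, tol=1e-12, maxit=10000):
    """Dominant eigenpair via repeated multiplication + normalization."""
    lam_old = 0.0
    for _ in range(maxit):
        y = matvec(A, x)
        x = [yi / norm(y) for yi in y]
        lam = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))  # Rayleigh quotient
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

def deflate(A, lam, v):
    """Remove a converged eigenpair: A <- A - lam * v v^T (v unit-norm)."""
    n = len(v)
    return [[A[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense; a real code
    would factor once and reuse, or call a sparse solver)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def inverse_power_method(A, x, tol=1e-12, maxit=10000):
    """Smallest-magnitude eigenpair: the power method applied to A^{-1},
    implemented as a linear solve per iteration."""
    lam_old = 0.0
    for _ in range(maxit):
        y = solve(A, x)
        x = [yi / norm(y) for yi in y]
        lam = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x
```

For example, on A = [[2, 1], [1, 2]] (eigenvalues 3 and 1), the power method finds 3, the inverse power method finds 1, and the power method on the deflated matrix finds 1 as well. Note the deflated result inherits the error of the first eigenvector, which is one reason deflation degrades for the later modes.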
Things I'd like to see you do if you have time
- Do the same (six smallest eigenvalues and their associated eigenmodes) for
the square plate. Note: the second smallest eigenvalue is repeated. Here's a
chance to test your code on that case. (There are also repeated eigenvalues
among the higher-frequency modes.) You can also try out a circular or
triangular plate. I'll try to also post an irregularly shaped plate with
distinct eigenvalues.
- Calculate the operation count and/or time it takes for each eigenmode as a
function of the error tolerance on the eigenvalues. Plot it. Do the same for
time versus number of nodes in the mesh (a variety of resolutions of the
meshes will be provided).
- Using the built-in "eig" or "eigs" as a reference, calculate the error in
your computed eigenvalues versus the number of iterations. That is, record
your intermediate guess-timates at each iteration of the (inverse) power
method; plot how closely they get to the "true" value as iterations go by.
- Do the same for the errors in the eigenvectors versus the number of
iterations. Try plotting the errors in the eigenvectors versus the errors in
the eigenvalues.
- How does the quality of the initial guess impact the number of iterations?
The error in the resulting approximation for a fixed error tolerance? That
is, try our "good" guess and compare it with a random guess.
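One easy way to set up the error-versus-iteration plots: record the Rayleigh-quotient estimate at every iteration instead of just the converged value. A hedged pure-Python sketch (function and variable names are my own; in Matlab you'd accumulate the same history vector and pass it to "semilogy"):

```python
# Record the per-iteration eigenvalue "guess-timates" of the power
# method so the error can be plotted against iteration count.

def power_history(A, x, nit=25):
    history = []
    for _ in range(nit):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        nrm = sum(yi * yi for yi in y) ** 0.5
        x = [yi / nrm for yi in y]
        Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        history.append(sum(xi * yi for xi, yi in zip(x, Ax)))
    return history

A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues 3 and 1
hist = power_history(A, [1.0, 0.0])
errors = [abs(lam - 3.0) for lam in hist]
# For a symmetric matrix the Rayleigh-quotient error contracts by
# roughly (lambda_2 / lambda_1)^2 per iteration -- here (1/3)^2 = 1/9 --
# so successive error ratios should settle near 1/9.
```

Plotting errors on a log scale against the iteration index should give a straight line whose slope reveals that contraction factor; this is also where you'll see clustered eigenvalues (ratio near 1) hurt.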
More challenging places to explore
- Clustered eigenvalues give indigestion to all sorts of eigenvalue algorithms
(and to iterative solvers like Krylov solvers). What happens with the power
method and deflation as applied to the square plate problem?
- Is the eigenvalue problem well-conditioned? What about the associated
eigenvector problem? Can you construct examples that show these behaviors?
- There's the continuum problem, the discrete version, and your approximation
of the discrete version (the result of the (inverse) power method). Does it
make sense to try to set the error tolerance of the power method smaller than
the discretization error? How does that work?
- Perhaps as a warm-up to the previous question: you can analytically compute
the eigenvalues for the square plate. How do these compare with your numeric
values? (And how does your comparison change as the power method's error
tolerance decreases? What about as the number of nodes increases?) What
about for an L-shaped plate? What does this have to do with the isospectral
plate? (How do your eigenvalues compare to the published values?
2.53794399980, 3.65550971352, 5.17555935622, 6.53755744376, 7.24807786256,
9.20929499840, ...)
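For the square-plate comparison, the analytic reference values are cheap to generate. A sketch, assuming the unit square with Dirichlet (clamped-edge) boundary conditions, where the eigenvalues are pi^2 (m^2 + n^2); rescale by 1/L^2 for a side of length L:

```python
# Analytic Dirichlet eigenvalues of the unit square:
#   lambda_{mn} = pi^2 (m^2 + n^2),  m, n = 1, 2, 3, ...
import math

lams = sorted(math.pi**2 * (m * m + n * n)
              for m in range(1, 6) for n in range(1, 6))[:6]
# The six smallest are pi^2 * [2, 5, 5, 8, 10, 10]: the second and third
# coincide ((m,n) = (1,2) and (2,1)), as do the fifth and sixth.
```

The repeats are exactly the degenerate modes mentioned above, so this list also tells you where deflation will be stressed.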
Notes
As before, the first thing I'm gonna do is test your code. If it doesn't
work, you better already know that and have told me in advance. In that
case, you should at least:
- identify that the code's output is incorrect; what about the output tells
you it's incorrect?
- figure out why it's incorrect
- figure out a fix
Note that the last two are often different things.
Also:
- If a question needs clarification, please let me know.
- I'm willing to provide lotsa help and/or guidance. But you need to *ask*.
- If you rewrite any of the preprocessing or post-processing routines to make
them run better (or to make the code clearer), I'd appreciate getting a copy
for future iterations of this class.
Ins and outs of the code
For each leaf directory in the tree (for each shape at a given mesh resolution):
- I used "triangle" with a ".poly" file that just describes the boundaries of
the given shapes; the exact command lines are printed as a comment in the last
line of the output ".poly" files.
- I then used "ele-to-mm" to produce a text file
(".mm") that lists rows, columns, and entries in a sparse matrix that
represents a discretized Laplacian on the mesh (the ".ele" file is the only
input).
- Copy your "myEigProc.m" file to that directory (or
make sure it's on your "path" in Matlab) and "cd" to that directory in Matlab.
- Type "getEigs('bilby')" on the Matlab command line,
substituting some other name for "bilby" as appropriate. (Note the single
quotes are important in Matlab -- they indicate a string.) It'll spit out six
".node" files that contain the old node data plus an "attribute" that
indicates the associated entry from the eigenvector. If you don't have access
to Matlab, Octave is an excellent
alternative.
- From a shell, run the Perl script "node-to-eps
bilby" (again subbing for "bilby" as appropriate). The script takes these
new ".node" files and spits out a pretty picture in EPS. Don't worry about
normalizing your eigenvectors; the script scales them to have size 1 in the
infinity norm.
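If you'd rather read the ".mm" file yourself (say, outside Matlab), the triplet format is easy to parse. A sketch assuming each data line is whitespace-separated "row col value" -- check your actual files, since there may be a header or comment line to skip:

```python
# Read a row/column/entry triplet file into a dict-of-dicts sparse
# matrix: A[i][j] = value. Purely illustrative; the exact ".mm" layout
# should be confirmed against the files you were given.
import io

def read_triplets(stream):
    A = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith('%'):   # skip blanks and comments
            continue
        i, j, v = line.split()
        A.setdefault(int(i), {})[int(j)] = float(v)
    return A

# Tiny stand-in for a file: a 2x2 discrete Laplacian-like matrix.
sample = io.StringIO("1 1 2.0\n1 2 -1.0\n2 1 -1.0\n2 2 2.0\n")
A = read_triplets(sample)
```

In Matlab, the equivalent one-liner is to "load" the three columns and hand them to "spones"/"sparse"; the dict-of-dicts above is just the plain-Python analogue.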
Test cases
Check your results using "eig" (dense) or "eigs" (sparse) in Matlab. When
you make your plots, visually check that they jibe with your physical
intuition.
All the meshes are in the "plates" directory. There's
a gzipped tarball for each shape. Each leaf directory in the tree contains a
copy of each of the necessary files (so you don't need to go copying the
scripts all around).