## Where do axioms come from?

In the last post, we talked about what math is. To me, math is a quest of understanding what must be.

The basis for this quest are the axioms and definitions of mathematics. Definitions describe what we are talking about, while axioms describe what we assume those objects can do.

Where do those axioms and definitions come from?

Since math is taught so authoritatively, it can seem that the definitions and axioms of mathematics are part of what must be. That may be true to some extent, but that is not how math is done.

As we try to increase our mathematical understanding, our needs change. We realize that certain ideas or definitions we used before weren’t quite precise or rigorous enough to deal with the questions we want to ask now. Sometimes, we find out that our previous understanding was simply lacking.

In this post, I’d like to give some of the motivating reasons for the axioms and definitions we commonly use in mathematics. The reasons are general and overlap, and are probably not exhaustive.

The first of these reasons is trying to codify intuitive ideas.

For an example we can go back to calculus.

The idea of a “continuous function” is fairly simple: a function is continuous if, when you graph it, you don’t have to lift up your pencil.

For many purposes, that intuition is sufficient. But, if you needed to, how could you make this definition precise?

Though calculus was invented in the mid-1600s, it wasn’t until 1817 that Bernard Bolzano gave the modern definition of continuity. His “epsilon-delta” definition also demonstrates another common occurrence for new, precise mathematical definitions: even though the intuition for them is fairly clear, the technical details can be very confusing.

The more intuitive way to say his definition is “A function $f(x)$ is continuous at a point $x_0$, if inputs near the original input $x_0$ give outputs near the original output $f(x_0)$.” This makes more precise what we mean when we say a continuous function has no jumps.

But the precise way of stating this is “A function $f(x)$ is continuous at a point $x_0$ if, for any $\epsilon >0$, there exists a $\delta>0$ so that $|x-x_0|<\delta$ implies that $|f(x)-f(x_0)|<\epsilon$.”1
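To see the definition in action, here is a quick numerical sketch in Python. The function $f(x) = x^2$, the point $x_0 = 1$, and the choice $\delta = \min(1, \epsilon/3)$ are my own illustrative choices, not part of Bolzano’s definition:

```python
# A numerical illustration (not a proof!) of the epsilon-delta definition
# for f(x) = x^2 at x0 = 1.  Here delta = min(1, epsilon/3) works, because
# |x - 1| < 1 forces |x + 1| < 3, so |x^2 - 1| = |x - 1| * |x + 1| < 3|x - 1|.

def f(x):
    return x * x

def delta_for(epsilon):
    return min(1.0, epsilon / 3.0)

def check(epsilon, samples=2001):
    """Sample points with |x - 1| < delta and confirm |f(x) - f(1)| < epsilon."""
    delta = delta_for(epsilon)
    xs = (1.0 + delta * (i / 1000 - 1) * 0.999 for i in range(samples))
    return all(abs(f(x) - f(1.0)) < epsilon for x in xs)

print(check(1.0), check(0.1), check(0.001))   # True True True
```

Of course, sampling points proves nothing; the real proof is the little inequality in the comment, which hands you a winning $\delta$ for every $\epsilon$.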

One of the first jobs of any math major beginning his or her “proofs” classes is to really internalize this definition. It, and its variations, come up all over the place.

Axioms and definitions are sometimes invented while trying to answer the question, “what makes this proof work?”

It almost feels like cheating–you know the outcome you want, so just assume the things that make it work!

If you’ve taken calculus, probably the most important theorem you learned was the Fundamental Theorem of Calculus. One way to state this theorem is “If $F(x)$ has a (continuous) derivative, then $F(b)-F(a) = \displaystyle\int_a^b F'(x)\,dx$.” In other words, the integral of the derivative is the original function.

But the assumption that $F(x)$ has a continuous derivative is stronger than really necessary. For instance, the theorem still works for $F(x) = |x|$, even though $|x|$ does not have a derivative (i.e., a well defined slope) at $x=0$.
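Here is a rough numerical check of that claim (in Python; the interval $[-1, 2]$ is my own illustrative choice). The derivative of $|x|$ is undefined only at the single point $x = 0$, which an integral cannot “see”:

```python
# Check F(b) - F(a) = integral of F' for F(x) = |x|, whose derivative
# sign(x) is undefined only at the single point x = 0.

def F(x):
    return abs(x)

def Fprime(x):
    # the slope of |x| away from 0: +1 to the right, -1 to the left
    return 1.0 if x > 0 else -1.0

def integrate(g, a, b, n=100000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b = -1.0, 2.0
print(F(b) - F(a))                        # 1.0
print(round(integrate(Fprime, a, b), 3))  # 1.0
```

The two sides agree, even though the hypothesis “$F$ has a continuous derivative” fails.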

So what property does a function need to make the fundamental theorem work? Somewhere in the proof, you need to use the fact that $F(x)$ has a continuous derivative. But if you look closely, you find you don’t quite need that condition. Instead, you need something a bit weaker. That precise condition is just given a name: absolutely continuous. (You can see the definition here on Wikipedia.)

Absolute continuity is not a basic, obvious definition or idea. It’s not very elegant in anyone’s view. It’s simply the condition that makes the proof work.

“What makes it work” might not be very elegant, but it is how math is done.

If we don’t know how to prove the theorem we want, we’ll often ask, “What extra condition could we assume that would make it possible to prove this theorem?” And then we assume that condition holds, and often give it a name like “tame” or “well-behaved.” The conditions aren’t special or elegant–but they work.2

Another way that definitions are invented is when mathematicians want to generalize an idea to a broader setting. Another way to say this is that mathematicians are trying to somehow identify the intrinsic something of an idea.

This is a major theme of modern mathematics. “What does it mean to be a shape?” “What does it mean to multiply things?” These two questions led to complete reformulations of entire branches of mathematics.

Until Bernhard Riemann, a shape was always visualized in the plane or in space (or perhaps a higher dimensional $\mathbb{R}^n$). But what makes a shape a shape? Riemann asked this question, and decided that the property that makes a shape a shape is that, at any point of the shape, you can travel in a certain number of directions. (In 3 dimensions, this would be up/down, left/right, and forward/backward.)

The usual visualization of shapes in space was a crutch that distracted us from the intrinsic properties of that shape. These “many-fold quantities,” as Riemann called them, or manifolds, as we call them now, have become the basis for geometry. (We’ve talked extensively about manifolds in this blog, starting here.)

Multiplying numbers has been done for as long as math has been done. More recently, multiplication of matrices has become useful. But what makes multiplication multiplication?

Answering that question leads to the idea of a group, the basis of the field of abstract algebra. A group is a bunch of things that you can multiply. You don’t really care what these things are (matrices, functions, numbers, shapes, symmetries, etc.), as long as you know how to “multiply” them. The general rules for what multiplication must do are the axioms of a group.3
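As a tiny illustration (my own example, in Python), here is a check of all four group axioms for the nonzero numbers mod 5 under multiplication:

```python
# Checking the four group axioms for the nonzero numbers mod 5 under
# multiplication.  The same four checks apply to any finite "multiplication".
G = [1, 2, 3, 4]
mul = lambda a, b: (a * b) % 5

# 1. Closure: multiplying things in the group stays in the group.
assert all(mul(a, b) in G for a in G for b in G)
# 2. Associativity: (a*b)*c == a*(b*c).
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in G for b in G for c in G)
# 3. Identity: there is a "1" that leaves everything unchanged.
assert all(mul(1, a) == a == mul(a, 1) for a in G)
# 4. Inverses: everything can be multiplied back to "1".
assert all(any(mul(a, b) == 1 for b in G) for a in G)
print("all four group axioms hold")
```

Swap in matrices or rotations for `G` and `mul`, and the same four checks define what “multiplication” must mean.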

These kinds of generalizations often seem weird and/or useless when you’re first introduced to them. Even worse, it always feels like you’re adding a layer of complexity to something that is already complicated enough.

But stripping away the extra details and focusing on the core ideas turns out to be very valuable. First, sometimes it makes it easier to prove results you care about. Second, by unifying very disparate ideas (such as matrix multiplication and rotations and normal multiplication), if you can prove a theorem about groups in general, then it applies to all of these very different situations.

Finally, sometimes we have to come up with new axioms because our old ones were just plain wrong.

Because math is an investigation into what must be, we really don’t like it when there are contradictions. In fact, we feel like there shouldn’t ever be contradictions. After all, we proved everything, right?

Usually they’re just an indication that you made a mistake somewhere in your reasoning. (I’m intimately familiar with that one…)

And often, mathematical theorems or examples can seem paradoxical, but really the only problem is with your intuition.

But occasionally, real problems are found.

One of the most prominent examples of this is Russell’s paradox.

Intuitively, a set is any collection of objects you can define. For instance, the integers between 1 and 5 are a set, $\{1, 2, 3, 4, 5\}$. The natural numbers form a set. You can have more complicated sets, like the set of all sets of numbers.

Georg Cantor, among others, had enumerated what things you could do with sets.4 But a naive interpretation of sets, which works well enough for most purposes, leads to contradictions.

Russell’s paradox is this: Consider the set $R$, which is the set of all sets which are not in themselves.

Yeah, that’s weird. Maybe an easier one to get your head around is “the set of all sets.” Since it’s a set of all sets, and it is a set, the set of all sets has to contain itself.

The set $R$, the set of all sets which are not in themselves, is even weirder. But (naively) $R$ is a set because we can define it.

Is $R$ in itself?

If $R$ is in $R$, then $R$ is a set which contains itself. But that means (since $R$ is the set of all sets which are not in themselves) that $R$ can’t be in $R$.

Okay, so maybe $R$ is not in $R$. If it isn’t, though, the definition of $R$ (again, the set of all sets which are not in themselves) means that $R$ must be in $R$!

In other words, $R$ can’t be in $R$, but that means it must be in $R$, but that means it can’t be in $R$, but that means it must be in $R$, but that means…

This is a lot like the infamous statement, “This statement is false.”5
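For fun, here is a loose analogue of the paradox in Python (my own toy, not a formal model of set theory). Python sets can’t contain sets, so we model a “set” as a membership-test function, and let `r` be the test for “does not contain itself”:

```python
# A loose Python analogue of Russell's paradox.  A "set" is modeled as a
# predicate (a membership test).  Let r be the "collection" of predicates
# that do not contain themselves:
r = lambda s: not s(s)

# Asking "is r in r?" forces r(r) = not r(r), which can never settle:
try:
    r(r)
except RecursionError:
    print("r(r) never settles: it recurses forever")
```

The question “is `r` in `r`?” has no stable answer, which is exactly Russell’s point.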

In order to clear up Russell’s paradox, along with a family of other paradoxes that come along with a more naive approach to set theory, new axioms were needed.

Over the next few decades, the now-standard Zermelo-Fraenkel axioms of set theory were developed. These axioms are designed to allow you to do most things you think you should be able to do with sets, like combine them and compare them and such, but they avoid paradoxes that can creep in if you try to allow everything.

To conclude, axioms and definitions are invented for many reasons, ranging from an attempt to make precise an intuitive idea to an attempt to remove paradoxes.

But math works, as long as we pick reasonable axioms, and we can use it to learn everything that must be.

Right?

Actually, it’s not quite so simple. There are fundamental limits on what we can use mathematics to understand. The only other option is that math is self-contradictory.

That is the content of Gödel’s incompleteness theorems. And that’s what we’ll talk about next time.

Sorry for the delay on this post. We’ve been writing posts weekly for six months. Unfortunately, that turns out to be an unsustainable pace with all the other things we have to get done. We’ll continue to post, but it will be less often than before. Feel free to subscribe to get an email when we post!

1. And this is not the most confusing definition. Another definition (often, but not always equivalent) is “The inverse image of an open set is open.” I don’t want to define those terms here, but this definition, though very useful, is even less intuitive than Bolzano’s.
2. My impression is that this is how “Hilbert spaces” got their name. Hilbert spaces are infinite dimensional vector spaces, with a way to measure lengths of vectors, and angles between them. That is all very natural. But Hilbert spaces have the additional property that they are “complete,” essentially meaning that there are no “vectors missing” from the space. This condition is very important in being able to prove anything about infinite dimensional vector spaces. Hilbert had a number of papers about these complete vector spaces, and others found them useful, and so started calling them Hilbert spaces.
3. There are four axioms of a group. 1. Multiplication of things in the group has to stay in the group. 2. Multiplication is associative, i.e., $(a\cdot b)\cdot c = a\cdot(b\cdot c)$. 3. There is a “1,” meaning anything you multiply by “1” stays the same. 4. Everything has an inverse, so that if you multiply a thing by its inverse, you get “1.” There are lots of examples of groups, such as the positive numbers, or invertible matrices. But there are less obvious examples, like the set of all rotations in space, $SO(3)$. These form a group since, if you do one rotation, then another, that is the same as doing one big rotation. (Doing one rotation, then another, i.e., composition, is the “multiplication.”) The “1” rotation is the rotation of zero degrees, i.e., doing nothing. And the inverse is undoing the rotation you just did.
4. I don’t say “wrote down axioms” because he never actually wrote down precise axioms for his set theory.
5. Perhaps even more accurately, the set $R$ is like “the smallest number that can’t be described in less than 13 words.” It seems to make sense, but, looking closer, there’s obviously some sort of problem with this number.

## What is math?

What is math?

Most people’s conception of math was drilled into them during grade school.

In my experience, grade school math goes something like this: The teacher says that we need to calculate a thing. He then shows how to calculate that thing, with seven slight variations. Your homework is to calculate six of each of the variations. The test will have five of those seven.

After a decade of this, most walk away thinking that math is calculation. And because of the rote way the material was introduced, many get the impression that math is set in stone. If you perform a particular set of arcane, incomprehensible steps, you will be led to the mythical “right answer.” No other steps are allowed, and heaven help you if you don’t happen to remember the right steps for a particular problem. In that case there is nothing to be done but despair.

And, of course, they believe that all math has been handed down to them from on high, as wisdom from the ancients. It is imperturbable, impenetrable, impeccable.

But that is not what math is.

So, what is math?

Calculation is a useful tool, but it is definitely not what math is.

Math is a quest for understanding. And like any good epic fantasy series, it seems to never quite be finished.1

And the understanding we mathematicians seek is an odd sort of understanding. The goal of science is to understand what is: to describe and understand the universe around us.

Mathematicians, on the other hand, seek to understand what must be.

After all, the questions a mathematician asks are not generally about things that could even exist. Have you ever seen a perfectly straight, infinitely thin line? Or an angle of precisely 90 degrees? But, if I have a perfectly flat triangle with a 90 degree angle, I know the side lengths have a certain relationship, $a^2 + b^2 = c^2$.

And sure, we can count 37 cows, but do the cows care that there are a prime number of them? Still, 37 is prime, and so the 37 cows cannot be evenly split among more than one person.

I sometimes like to describe this by saying that I, as a mathematician, try to figure out what even God cannot do. Even an all-powerful God cannot create a perfectly flat triangle with a 90 degree angle, whose side lengths do not obey the Pythagorean relationship. Neither could He evenly divide 37 cows between more than one person.2

The basis for deciding what must be are the definitions and axioms of mathematics.

Definitions and axioms are different, but very closely related.

Definitions describe the things we talk about. For instance, a straight line (versus a curved one) might be defined as “a line which lies evenly with the points on itself,” as in Euclid.

Axioms describe what we can do with the things we’ve defined. These tend to be very basic, “obvious” things. For example, the axiom of symmetry says that “If $A=B$, then $B=A$.” In this example, you could see the axiom as something you can do (“You can switch the sides of an equation.”) or you could see the axiom as defining what two things being equal really means.
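For instance, in a modern proof assistant like Lean (my own aside, not part of the original axiomatizations), symmetry of equality is so basic that it is built in as `Eq.symm`:

```lean
-- In Lean 4, symmetry of equality: from a proof h that A = B, we obtain B = A.
example (A B : Nat) (h : A = B) : B = A := h.symm
```

Either reading of the axiom works here: `h.symm` is the “thing you can do” with an equation, and it is also part of what equality means.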

On top of this foundation, mathematics is built with logic. Given the definitions and axioms, certain conclusions follow as inescapable consequences. These conclusions we call theorems or lemmas or propositions.3

Because mathematics is taught in such an authoritative way, it can appear that the definitions and axioms of mathematics are in some way intrinsic, that they have existence outside of the creation of man. It can feel like the axioms and definitions are part of the “what must be” that mathematicians are searching for.

To some extent that may be true, but I don’t think this is completely true, and it’s certainly not how math is done.

When you read a textbook, you are presented with the most recent thinking on which definitions and axioms are important. But that hides, to some extent, the fact that it took hundreds, or even thousands, of years to decide that those axioms should be the ones to form the foundation of the rest of mathematics.

Math evolves. Math changes. The definitions and axioms we use today are not the same ones that were used by Newton.

Referencing Newton actually brings up a good example of how math changes.

Newton (and Leibniz) invented calculus around 1670. It immediately proved its usefulness in solving any number of important questions in physics and mathematics.

But Newton’s calculus was not built on what we would today consider a rigorous foundation.

In order to explain their ideas, both Newton and Leibniz used some idea of “infinitesimals,” quantities that were infinitely small.

Infinitesimals can be very useful in an intuitive explanation of calculus. (I often use them informally when I teach calculus myself.) And so Newton and Leibniz’s proofs of their results were accepted, even though some were uncomfortable with the idea of an infinitely small quantity.

But as mathematicians delved deeper into the ideas of calculus, it became clear that the infinitesimal arguments weren’t quite complete. There were important theorems that could not be carefully proven because the foundations of calculus had not been laid with sufficient rigor.4

Thus, one of the major mathematical projects of the 1800’s was to prove the “soundness” of calculus, and make sure the foundations were correct.

This involved inventing new definitions. For instance, one of the key ideas of calculus is the limit. Informally, the limit asks, “As the inputs get close to a number, what do the outputs get close to?”

The intuition for limits is not difficult; you plug in numbers closer and closer to the one you want, and see if the outputs get close to some other number. But the careful definition of limit that we use today, the $\epsilon$-$\delta$ (epsilon-delta) definition, was not introduced until the 1820’s, by Augustin-Louis Cauchy.
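Here’s that informal “plug in closer and closer numbers” idea in Python, for the classic limit $\lim_{x\to 0} \sin(x)/x = 1$ (my own example):

```python
# Guessing lim_{x -> 0} sin(x)/x by plugging in inputs closer and closer
# to 0.  The outputs appear to approach 1.
import math

for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, math.sin(x) / x)
```

This is exactly the intuition the epsilon-delta definition pins down: it says how close the inputs must be to guarantee the outputs stay within any tolerance you name.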

Mathematics is not static, and the axioms and definitions we use are not necessarily natural, sitting there for us to find. As we seek deeper understanding, we often come to a point where we realize our earlier understanding was incomplete, or even incorrect, and we seek to fix the foundations. This has occurred over and over and over again to get to our “fixed” modern ideas of mathematics.5

To summarize, mathematics is a quest for understanding what must be. But the very concepts we try to understand are not set in stone. The objects of mathematics are defined by people, and as we understand them better, the definitions and axioms we base our understanding on change.

In the next post I want to talk more in depth about why these definitions change, and how and why mathematicians come up with new definitions.

This post was mostly about the philosophy of math, which is quite a bit different from my normal posts. But as we’ll see in a few weeks, Gödel’s incompleteness theorem is so weird that it is impossible to talk about it without discussing the philosophy of math. Gödel’s theorem puts a fundamental limit on mathematicians’ quest for understanding.

1. I’m looking at you, George R. R. Martin…
2. Well, at least without taking the King Solomon approach and cutting the 37th cow in half!
3. Usually “theorems” are bigger, more important conclusions, while “lemmas” are littler conclusions that are needed along the way to show the theorems are true. Propositions can go either way. On the other hand, sometimes lemmas end up being more important than the theorems.
4. More recently, mathematicians have come up with rigorous methods to talk about infinitesimals, for instance the hyperreal numbers. However, infinitesimal methods are no longer considered standard.
5. Even the work on calculus done in the early 1800’s was not final. The “Riemann” integral, which was the formalization of the integral by Riemann, is what is taught in high schools and early college math. But at the graduate level, we use the “Lebesgue” (Luh-bayg) integral instead, which was introduced in the early 1900’s. Both are rigorous approaches to the integral, but the Lebesgue integral makes a few key lemmas and theorems much easier to prove. The basis of the Lebesgue integral is less intuitive at first, but easier and more powerful in the end.

## Can You Hear the Shape of a Drum? (Part 2)

In the last post, we explained how vibrating strings work. To summarize, the string’s position, given as a function $f(x,t)$, is controlled by a differential equation, $\dfrac{d^2 f}{dt^2} = \dfrac{d^2 f}{dx^2}$.1

The left hand side of this equation, $\dfrac{d^2 f}{dt^2}$, is the vertical acceleration of the string. Meanwhile, $\dfrac{d^2 f}{dx^2}$ measures how much the string is curving at one instant of time, as you move from left to right. The more the string is curving, the bigger this is.

The punchline of the last post was that any vibration of the string, including the complicated one above, can be represented by a sum of special “self-similar” solutions to the differential equation. These solutions keep their overall shape, and simply vibrate by scaling up and down.

In order to keep their shape, these solutions had to satisfy a special equation, $\dfrac{d^2 f}{dx^2}(x) = -\nu^2 f(x)$, which says that the second derivative of the function (i.e., how much it is bending) must be equal to a constant $-\nu^2$ times the function itself. If we want the string to have length $\pi$, the $\nu$ had to be natural numbers, $\nu = 1, 2, 3, \cdots$. The videos above show the solutions when $\nu$ is 1, 2, 3, or 7.

The solutions $f(x)$ of $\dfrac{d^2 f}{dx^2}(x) = -\nu^2 f(x)$, which represent the initial states of the string, are eigenfunctions of the “transformation” $\dfrac{d^2}{dx^2}$, with eigenvalues $\lambda = -\nu^2$.
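We can check the eigenfunction claim numerically (in Python; the finite-difference approximation of the second derivative is my own device, not part of the original argument):

```python
# f(x) = sin(3x) should satisfy f''(x) = -9 f(x), i.e. it is an
# eigenfunction of d^2/dx^2 with eigenvalue -9.  We approximate f''
# with a centered finite difference.
import math

nu = 3
f = lambda x: math.sin(nu * x)
h = 1e-4

for x in [0.3, 1.0, 2.5]:
    f_second = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    print(round(f_second / f(x), 4))   # -9.0 each time
```

The ratio $f''(x)/f(x)$ comes out to $-\nu^2 = -9$ at every point sampled, which is exactly what “eigenfunction with eigenvalue $-9$” means.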

The key observation is that the eigenvalue of the self-similar solutions controls the speed of the vibration. As you can see above, the $\nu=7$ solution vibrates seven times as fast as the $\nu=1$ solution.

The tone you hear from an instrument is related to how quickly the air (and thus the string) is vibrating. The self-similar solutions are the vibrations of the string that produce pure tones — a single note.

Since any vibration is a combination of these basic pure tone vibrations, when the string vibrates, you will hear all the tones of all the pure tone vibrations you combined together, each at different volumes depending on how much of that pure tone vibration you used. For instance, this vibration would have the $\nu=1$ and $\nu=2$ tones in equal amounts. However, these pure tones are the only tones the string can produce.

In other words, the eigenvalues (of $\dfrac{d^2}{dx^2}$) control the tones a string can produce, while the tones a string can produce tell you the eigenvalues. The set of all eigenvalues (of $\dfrac{d^2}{dx^2}$ for a particular string) is called the spectrum.

We finally get to the first question: Can you hear the length of a string? How could we hear the length?

If we vibrate a string at random, our vibration will probably2 be a combination of small amounts of all the pure tone vibrations. If we had perfect pitch (and could hear tones both very low, and infinitely high), we could hear all of those notes, and, from them, deduce the spectrum of the string.

Since the spectrum and the possible tones are equivalent, a more mathematical way to ask the question is: If you know the spectrum of a string, can you somehow calculate its length?3

To answer that, we need to know how the eigenvalues of $\dfrac{d^2}{dx^2}$ change when the length of the string changes.

So far, we’ve assumed the length of the string was $\pi$, for simplicity. In that case, the eigenfunction associated to the eigenvalue $\lambda = -\nu^2$ was $f_\nu(x) = \sin(\nu x)$. While $f_\nu(x)$ is an eigenfunction for any $\nu$, it could only represent a vibration on our string of length $\pi$ if $\nu$ was 1, 2, 3, etc. That was because we needed both sides of the string to be fixed.

If our string is instead length $L$, we will still need both sides of the string to be fixed. For that to happen, we will need $f_\nu(x)$ to be zero at the two ends, $x=0$ and $x=L$. That will happen when $\nu = \dfrac{\pi}{L} n$, where $n$ is some natural number. (Notice that if $L=\pi$, this is exactly what we had before.) In other words, our eigenvalues are $-\dfrac{\pi^2}{L^2} n^2$.
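In Python, that formula for the spectrum reads as follows (a direct transcription; the lengths $\pi$ and $2\pi$ are just the examples used below):

```python
# Eigenvalues -(pi/L)^2 n^2 of a string of length L, for n = 1, 2, 3, ...
import math

def spectrum(L, count=3):
    return [-((math.pi / L) ** 2) * n ** 2 for n in range(1, count + 1)]

print(spectrum(math.pi))      # [-1.0, -4.0, -9.0]
print(spectrum(2 * math.pi))  # [-0.25, -1.0, -2.25]
```

Doubling the length shrinks every eigenvalue by a factor of four, which is why the two spectra below are so easy to tell apart.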

That means you can hear the length of the string. For example, the spectrum for a string of length $\pi$ would be $\{-1, -4, -9, \cdots\}$, and you would hear the associated tones.

The spectrum for a string of length $2\pi$ would be $\left\{-\dfrac{1}{4}, -1, -\dfrac{9}{4}, \cdots \right\}$. Those two spectra (plural of spectrum) are quite different. In fact, you can tell them apart just by looking at the first eigenvalue.

You can hear the length of a string just by hearing its lowest tone.4

We’re finally prepared to talk about the original question we had: Can you hear the shape of a drum?

As with strings, the vibration of a drum is controlled by a differential equation. Unsurprisingly, it’s very similar to the one for a vibrating string. The function $f(x, y, t)$ representing the position of the drum head at any particular time is more complicated now, though. It depends on time and both $x$ and $y$, since the drum head is two dimensional.

The differential equation is $\dfrac{d^2 f}{dt^2} = \Delta f$. The left hand side, $\dfrac{d^2 f}{dt^2}$ is the vertical acceleration of the drum head, as before.

The right hand side looks different, but mostly just because I’m using notation you may not be familiar with. The symbol $\Delta$ (capital Delta) is called the Laplacian, and is involved in many of the most important differential equations. Since we have two space dimensions for our drum head, $\Delta f = \dfrac{d^2 f}{dx^2} + \dfrac{d^2 f}{dy^2}$, the sum of the second derivatives in the two space directions.5

That means the Laplacian is really just the same thing we had on the right hand side of the equation for a vibrating string. The quantity $\Delta f$ adds up how much the drum head is bending in both the $x$ and $y$ directions.

So, as before, where the drum head is bending the most, it accelerates the most as well, bringing it back toward the resting position. Even on a square drum, solutions to this differential equation can be quite complicated. But, as before, every solution is a combination of small amounts of special self-similar solutions.

To find self similar solutions, we again need $\dfrac{d^2 f}{dt^2} = -\nu^2 f$ — the acceleration of the drum head must be equal to a (negative) multiple of the current position.

Since this is the same equation as before, it has the same solution. If we had an initial position $f(x, y, 0)$, the position at time $t$ would be $f(x, y, t) = f(x, y, 0) \cos(\nu t)$. The initial position simply vibrates up and down sinusoidally.

The frequency of the vibration, and thus the tone produced by the drum, is controlled by the value $\lambda = -\nu^2$. The larger $\nu$ is, the higher the frequency, and thus the higher the tone produced.

That leaves the hard part of the problem. In order to find initial positions of the drum head that will lead to these simple, self-similar solutions, we need to find solutions to the equation $\Delta f = -\nu^2 f$.

This is again an eigenvalue problem. The Laplacian transforms our function $f$ into a new function. Usually this new function, $\Delta f$, would be unrelated to the original function $f$, but we are looking for those special functions for which this transformation simply scales the original function by some constant $-\nu^2$.

This may not seem that much harder than it was for a string, but it turns out to be very hard to solve explicitly.

For a rectangular drum, it’s not too bad. It turns out the eigenfunctions are just products of sine waves–one in the $x$ direction and one in the $y$ direction. For instance, on a square of side length $\pi$, $f(x,y,0) = \sin(2x)\sin(3y)$ is an eigenfunction with eigenvalue $\lambda = -2^2-3^2 = -13$.

You can also find explicit solutions for a circular drum, and for some triangles (equilateral, isosceles right, and 30-60-90), but for just about anything more complicated than those examples, we have no idea how to find explicit eigenfunctions and eigenvalues.6
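Here is a quick enumeration (in Python, my own sketch) of the distinct eigenvalues of the square drum of side $\pi$, using $\lambda = -(m^2 + n^2)$:

```python
# Distinct eigenvalues -(m^2 + n^2) of a square drum of side pi,
# listed from closest to zero (the lowest tone) downward.
eigs = sorted({-(m * m + n * n) for m in range(1, 10) for n in range(1, 10)},
              reverse=True)
print(eigs[:6])   # [-2, -5, -8, -10, -13, -17]
```

Note that the eigenvalue $-13$ from the example above shows up, coming from the pair $(m, n) = (2, 3)$ (and also from $(3, 2)$).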

But, fortunately, finding eigenvalues and eigenfunctions was not the question.

The question was “Can you hear the shape of a drum?” In other words, if you already know the eigenvalues (the spectrum), can you figure out the shape of the drum?

It’s been known for a long time that there are some things we can tell about the drum from the spectrum.

The most “obvious” one is the area of the drum head. A bigger drum head makes a lower tone.

Unlike the string, though, the lowest tone (equivalently, the eigenvalue closest to zero) is not sufficient to tell you the area. It’d be a good start, but it’s not enough. Instead, we have to look at how the eigenvalues are spread out.

For the string, the spectrum for length $\pi$ was $\{-1, -4, -9, \cdots\}$. For length $2\pi$, the spectrum was $\left\{-\dfrac{1}{4}, -1, -\dfrac{9}{4}, \cdots \right\}$.

The thing to notice is that, for the longer string, the spectrum is packed closer together. Another way to say this is that, for a given number $-R$, the spectrum for the string of length $2\pi$ has more eigenvalues closer to zero than $-R$ than the spectrum for the string of length $\pi$. Though the formula may not be obvious, there is a way to calculate the length of the string based on this way of interpreting the spectrum.

For a drum, a similar idea works. Suppose we knew the spectrum for a drum completely. We then could count how many eigenvalues are closer to zero than $-R$ for any $-R$. We’ll call that number of eigenvalues $N(R)$. ($N$ for number of eigenvalues.)

As $R$ grows, the number of eigenvalues closer to zero than $-R$ also grows. In fact, they grow in a very specific way.

Hermann Weyl (said “vile”) showed in 1911 that, for a drum head, $N(R)$ will grow roughly linearly in $R$. Not only that, the slope of that line predicts the area of the drum head!

Precisely, the slope can be measured as $\displaystyle\lim_{R\to \infty} \dfrac{N(R)}{R}$. (This is saying something like, “See how many times bigger $N(R)$ is than $R$ for large $R$.” If $N(R)$ were precisely a line, this would give the slope. It works just as well for an $N(R)$ that is almost a line.)

Once you have that slope, it’s easy to find the area. In fact, $A = 4\pi\displaystyle\lim_{R\to \infty} \dfrac{N(R)}{R}$!

Weyl’s law tells us that knowing the spectrum determines the area.
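We can test Weyl’s law on the one drum whose spectrum we know completely: the square of side $\pi$, with area $A = \pi^2 \approx 9.87$. (The counting shortcut in the code is my own device.)

```python
# For the square drum of side pi, the eigenvalues are -(m^2 + n^2), so
# N(R) counts pairs m, n >= 1 with m^2 + n^2 <= R.  Weyl's law predicts
# that 4*pi*N(R)/R approaches A = pi^2 ~ 9.8696 as R grows.
import math

def N(R):
    total, m = 0, 1
    while m * m < R:
        total += math.isqrt(R - m * m)  # number of valid n for this m
        m += 1
    return total

for R in [10**3, 10**5, 10**7]:
    print(R, round(4 * math.pi * N(R) / R, 4))
```

The printed values creep up toward $\pi^2 \approx 9.8696$, just as the law predicts.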

But the area is not the entire shape.

Besides the area, what other information can we figure out from the spectrum?

Weyl’s law says that $N(R) \approx \dfrac{A}{4\pi}R$. But we can be more precise than that. If we look at $N(R) - \dfrac{A}{4\pi}R$ (i.e., see how far off our approximation was), that new quantity looks roughly like a multiple of the square root function $\sqrt{R}$! And, again, the multiple tells us something about our shape.

Weyl conjectured (and it was proven by Victor Ivrii in 1980) that $N(R) - \dfrac{A}{4\pi}R \approx -\dfrac{P}{4\pi}\sqrt{R}$, where $P$ is the perimeter of the drum head. (For a drum head with a fixed edge, the correction comes in with a minus sign.) In other words, if we know the spectrum, a more careful analysis would give us the area and the perimeter of the drum.

That would be enough to tell you the shape of a drum, if you knew that it was rectangular. You could hear the difference between a square (lots of area, not so much perimeter) and a stretched out rectangle (not so much area, lots of perimeter).
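On the square of side $\pi$ (area $A = \pi^2$, perimeter $P = 4\pi$, so $\dfrac{P}{4\pi} = 1$), we can watch this second term appear numerically (my own sketch; note the minus sign, which is what a fixed, “Dirichlet,” boundary gives):

```python
# For the square drum of side pi, count N(R) and look at the deficit
# N(R) - (A / 4pi) R.  The second term of Weyl's law says the deficit
# behaves like -(P / 4pi) sqrt(R) = -sqrt(R) for this drum.
import math

def N(R):
    total, m = 0, 1
    while m * m < R:
        total += math.isqrt(R - m * m)
        m += 1
    return total

A = math.pi ** 2
for R in [10**4, 10**6, 10**8]:
    deficit = N(R) - (A / (4 * math.pi)) * R
    print(R, round(deficit / math.sqrt(R), 3))   # approaches -1
```

The ratio settles toward $-1 = -\dfrac{P}{4\pi}$, so the count really does carry the perimeter.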

This is hopeful. This $\sqrt{R}$ level approximation was good, but there’s still more information lurking about in the spectrum. Though no better approximation results have been proven (that I know of), you might hope that you can hear the shape of a drum.

Unfortunately, you can’t. In 1991, Gordon, Webb and Wolpert found some shapes that are obviously different, but have the same spectrum. There are lots of examples now, but here are two of them:

Notice that they both have the same area and perimeter. They have to, thanks to Weyl’s and Ivrii’s work. But, despite being different, they have the exact same spectrum.

Of course, these drum heads are not exactly normal shapes for drum heads. Most drums that I’ve seen have had drum heads that were at least convex, meaning they don’t cut inward.

If you assume a few reasonable things about the shape of the drum head, it turns out that you can hear the shape of a drum. Our earlier observation that you can hear the shape of a rectangle is an example of this.

Within the last 10 years, Steve Zelditch proved a much better result. We assume the drum head is convex, has no holes, has very smooth boundary7, has at least one mirror symmetry, and satisfies a few other smaller technical assumptions.

If we assume the shape follows those rules, Zelditch proved you can hear the shape of a drum. If you know the spectrum, you can reconstruct what shape the drum had.

That’s it for drums. Next time, we’ll start talking about the limits of proof in mathematics–Gödel’s incompleteness theorem! It may be a meandering path, passing through ideas about how math is done, what axioms are, famous paradoxes in math, and more.


1. This equation is often called the wave equation, since it controls the waves of the string.
2. Mathematically, a 100% chance. Choosing a pure tone vibration, or even some finite combination of them, has a 0% chance of happening.
3. We are assuming that all strings are the same material, thickness, tautness, etc., so that the only question is what the length of the string is.
4. The lowest tone since the lowest eigenvalue $\lambda$ represents the lowest frequency vibration possible, and thus the lowest sounding tone. Though, it should be mentioned one more time that this is assuming all the strings we’re comparing have the same material, thickness, tautness, etc.
5. In higher dimensions, we just continue adding second derivatives in each of the directions.
6. Though there are very good algorithms for estimating both eigenfunctions and eigenvalues for any shape of drum (and any dimension) on a computer.
7. The boundary needs to be analytic, which means you need to be able to take its derivative infinitely many times, and, in addition, the Taylor series for the boundary has to converge. This is a strong condition. For instance, this means there can’t be any corners on the shape.