## Fractal dimensions

For most shapes, the dimension is pretty clear. A line is one dimensional — it has only length, no thickness or depth. A square is two dimensional — it takes up area, but not volume. A cube is three dimensional — a real one could be sitting on your desk.

The boundary of the Koch snowflake (partly shown above) should probably not be two dimensional, since it’s just a line that’s been crinkled infinitely many times. But it probably shouldn’t be considered one dimensional either: it’s been crinkled so many times that it has infinite length and almost takes up area.

So, how many dimensions does it have?


There are many ways to define dimension, from topological dimension to the Hausdorff dimension. Different definitions are useful for different fields of mathematics, but they all agree for simple shapes.

The simplest definition of dimension is the one we used when we were talking about manifolds back in Asteroids on a Donut. Basically, a line is one dimensional because you can only go in one (pair of) directions — left and right. A square is two dimensional because you can go left and right and up and down. And so on.

Unfortunately, that stops working so well when you start dealing with fractals. After all, which directions can you go on the Cantor set?

The Cantor set is just a bunch of disconnected points. There isn’t a direction to go when you’re on them. But there’s a whole bunch of points (uncountably1 many, in fact), so maybe it doesn’t make sense to say it’s zero dimensional.

To make sense of things like the Cantor set and the Koch snowflake, we need a definition of dimension that’s a bit more robust.

To figure out how to define dimension, let’s look at how size grows when we double lengths.

For a line, if we double the lengths, the line doubles in length.

In other words, doubling the lengths makes the line $2 = 2^1$ times as big.

For a square, if we double the lengths, the square quadruples in area. That’s because there are two directions for doubling to affect, so doubling the lengths makes the square $4 = 2\cdot 2 = 2^2$ times as big.

For a cube, if we double the lengths, the cube octuples in volume. For a cube, there are three directions for doubling to affect, so doubling the lengths makes the cube $8 = 2\cdot 2\cdot 2 = 2^3$ times as big.

See where I’m going?

One way to figure out a dimension is to make it bigger by multiplying each direction by two. Just like multiplying the lengths of a cube by 2 increased its size by a factor of $2^3$, if we multiply the lengths of a $d$ dimensional shape by 2, then the size of the shape will have to expand by a factor of $2^d$.

Multiplying by 2 isn’t special, of course. If we multiplied each length by a factor of 3, the size of a shape of dimension $d$ would have to expand by a factor of $3^d$.
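This scaling rule is easy to play with in code. Here is a minimal Python sketch (the function name `similarity_dimension` is my own) that recovers the familiar dimensions by solving $(\text{length factor})^d = \text{size factor}$:

```python
import math

def similarity_dimension(size_factor, length_factor):
    """Solve length_factor ** d == size_factor for the dimension d."""
    return math.log(size_factor) / math.log(length_factor)

# Doubling lengths makes a line 2x bigger, a square 4x, a cube 8x:
print(similarity_dimension(2, 2))  # a line is 1 dimensional
print(similarity_dimension(4, 2))  # a square is 2 dimensional
print(similarity_dimension(8, 2))  # a cube is 3 dimensional
```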

Now, let’s take this new measuring stick and try to measure some fractals!

Even Benoit Mandelbrot, often considered the father of fractals, had trouble nailing down an exact definition for what a fractal is. But a good intuitive idea is that a fractal is a shape that looks roughly the same, no matter how much you zoom in.

For example, if you zoom in on a circle, it quickly stops looking round, and begins to look straight.

But, for a fractal, no matter how far you zoom in, it keeps on repeating itself.

While “most” fractals aren’t exactly self similar (for instance, the Mandelbrot set), many of the simplest examples are. To make one, start from a simple shape, then repeatedly change it in the same way, on smaller and smaller scales, infinitely many times.

For instance, the Koch snowflake.

To make the snowflake, start with a triangle, then add a spike to each side. Then, for each of the new, smaller sides, add a new spike. The Koch snowflake is not any of the intermediate steps, but the limit of doing infinitely many steps.
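As a sanity check on the infinite-length claim from earlier, here is a small Python sketch (variable names mine, exact arithmetic via fractions) tracking the construction step by step. Each spike-adding step turns every side into four sides, each a third as long:

```python
from fractions import Fraction

sides, side_length = 3, Fraction(1)  # start: a triangle with unit sides
for step in range(6):
    perimeter = sides * side_length
    print(step, sides, perimeter)
    # Adding a spike turns each side into 4 sides, each 1/3 as long,
    # so the perimeter is multiplied by 4/3 -- it grows without bound.
    sides *= 4
    side_length /= 3
```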

So, what’s the dimension of the snowflake?

Remember, what we’re looking for is a scaling law — if we multiply each length by a factor of 3, the size of a shape of dimension $d$ would have to expand by a factor of $3^d$.

How does the snowflake scale?

Consider one side at a time. Each side becomes four sides, of one third the length.

If you take one of these little mini-sides, and triple the lengths in each direction, then each one of these mini-sides will become as big as the entire original side. For instance, the little spike on the left flat part expands to be the same size as the big spike in the middle of the original.

Since the original side was four copies of the mini-side, that means that if we triple the length in each direction, then we quadruple the total size of the shape.

Using the pattern that a $d$ dimensional shape should get $3^d$ times bigger, we have that $4 = 3^d$. Using a logarithm, we get that the dimension of the Koch snowflake is $d = \log(4)/\log(3) \approx 1.2619$. More than a length, a bit less than an area.

Let’s do another example of a fractal, this one somewhere between a point and a line — the Cantor set.

To make the Cantor set, you start with a line. Then you cut out the middle third. Then, from each remaining piece, you cut out its middle third, and so on, until you’re left with a fine dust.
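The cutting process is easy to mimic with exact fractions. In this sketch (the helper name `cut_middle_thirds` is mine), after $n$ rounds you have $2^n$ pieces with total length $(2/3)^n$, which heads to zero, leaving the dust:

```python
from fractions import Fraction

def cut_middle_thirds(intervals):
    """Replace each interval (a, b) by its left and right thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

intervals = [(Fraction(0), Fraction(1))]
for _ in range(3):
    intervals = cut_middle_thirds(intervals)

print(len(intervals))                    # 2^3 = 8 pieces after 3 rounds
print(sum(b - a for a, b in intervals))  # total length (2/3)^3 = 8/27
```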

To figure out the dimension of the Cantor set, look at the left segment after the first excision. Since we are just cutting out middle thirds, this little segment becomes an exact copy of the entire Cantor set, just at a smaller scale.

If we triple the lengths, then that small segment becomes the original Cantor set. But the original Cantor set is just two copies of that smaller piece. Thus, tripling lengths doubles size, so $2 = 3^d$, and $d = \log(2)/\log(3) \approx 0.6309$. Not really a line, not really a point.

With any of these self similar fractals, you can do a similar trick without too much problem. For instance, the Sierpinski carpet, which is made by taking a square and repeatedly cutting out the middle ninth, increases in size by a factor of 8 whenever the lengths are multiplied by 3. So $8 = 3^d$, and $d = \log(8)/\log(3) \approx 1.8928$.
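All three of these self-similar dimensions come from the same one-line computation. A quick check in Python (function name mine):

```python
import math

def dimension(copies, scale):
    """A shape made of `copies` self-copies, each shrunk by a factor of `scale`."""
    return math.log(copies) / math.log(scale)

print(dimension(4, 3))  # Koch boundary: 4 copies at 1/3 scale, ~1.2619
print(dimension(2, 3))  # Cantor set: 2 copies at 1/3 scale, ~0.6309
print(dimension(8, 3))  # Sierpinski carpet: 8 copies at 1/3 scale, ~1.8928
```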

Many of the most awesome fractals aren’t exactly self similar, like the Mandelbrot set.

The Mandelbrot set is defined by looking at each complex number $c$ individually, then repeatedly calculating $z_i^2 + c$. (For example, if $c = 1 + i$, then we make a sequence $z_0 = 1+i$, $z_1 = (1+i)^2 + 1+i = 1+3i$, $z_2 = (1+3i)^2 + 1+i = -7+7i$, etc..) If this sequence becomes infinitely big, then the original $c$ is not in the Mandelbrot set. If, like for $c = 0 + 0i$, this sequence stays close to zero, then that $c$ is in the Mandelbrot set.
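That membership test translates almost directly into code. Here is a sketch (the function name and the iteration cap of 100 are my own choices; the escape radius of 2 is the standard one). Surviving all the iterations is only evidence of membership, not proof:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c starting from z = c, as in the text."""
    z = c
    for _ in range(max_iter):
        if abs(z) > 2:  # once |z| > 2, the sequence is guaranteed to blow up
            return False
        z = z * z + c
    return True

print(in_mandelbrot(0))       # True: the sequence just sits at 0
print(in_mandelbrot(1 + 1j))  # False: 1+i, 1+3i, -7+7i, ... escapes
print(in_mandelbrot(-1))      # True: the sequence cycles -1, 0, -1, 0, ...
```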

The Mandelbrot set is famous for its beauty, and for the nearly repeating, but infinitely varied patterns you find when you zoom in.2

Actually, from that definition, you might realize that only the black parts of the pictures are the Mandelbrot set. The pretty colors come from counting how long it takes for the sequence $z_i$ to get large enough to be sure it won’t stay small.

Well, the Mandelbrot set itself is, of course, 2-dimensional, since it has area. (It can’t be three dimensional since it’s already confined to a plane.) But what about its boundary?

It’s a really zig-zagging line, like the Koch snowflake, but it’s not exactly self-repeating, so we can’t do the same tricks we did before. Just from the previous examples, you probably expect the dimension to be somewhere between 1 and 2, which seems reasonable.

Mandelbrot himself conjectured that the boundary was so zig-zaggy, so fractal, that it would somehow skip past crazy dimensions like $\log(4)/\log(3)$, and go all the way to two dimensions.

This turns out to be true. Shishikura managed to prove in 1998 that the boundary of the Mandelbrot set is two-dimensional. The proof is a biiiit complicated, so we won’t go into it here, but it does work.

Normally, the boundary is one dimension lower than the main part of a shape. For instance, a square is two dimensional, but its boundary is one dimensional. Fractals can be a bit different. For instance, we had the Koch snowflake. The inside is two dimensional, but the boundary, as we said earlier, is $\log(4)/\log(3)$ dimensional. Still less than two.

The Mandelbrot set itself is also two-dimensional. But, somehow, the boundary is so jagged that it manages to have the same dimension as the set itself.

That’s… bizarre.

But that’s how fractals roll.

Sorry for taking so long on this post. I wrote an entire other post about waves, which took forever… but then I decided it wasn’t very good. Hopefully I’ll get a better angle on that topic eventually.

<– Previous Post: The most controversial axiom of all time
–> Next Post:

1. Uncountable means infinite, but like super duper infinite. Countably infinite means you can line them up with the counting numbers 1, 2, 3… Uncountable is somehow a bigger kind of infinity. Way more than infinity plus one. Ahem. Anyway, if you’re curious how you can be bigger than infinity, go back to the very first post, Infinity plus one.
2. A Mandelbrot set generator isn’t that hard to make yourself, if you know how to program, or there are a number of generators available for free online, such as this one

## The most controversial axiom of all time

If you believe Banach and Tarski, you can take a sphere, cut it into a handful of pieces, move them around, and put them back together into two complete spheres of the same size.1

The accountants and engineers may be a bit angry about magically doubling a sphere…

but the proof that you can double a sphere does almost nothing questionable.

In fact, the most questionable thing we have to do is… to choose.

Yup! It turns out that making choices is more controversial than it seems it should be.2 In fact, the Axiom of Choice is perhaps the most discussed and most controversial axiom in all of mathematics.3

To convince you that choosing is hard, let’s look at a simple example: picking a number between 0 and 1. Go ahead, pick one!

Like the girl in the red dress, you probably picked a rational number, i.e. a fraction. There’s nothing wrong with that, but remember that there really aren’t that many rational numbers.4 So, let’s try to pick a random irrational number between 0 and 1.

There are lots of choices possible, like $\pi-3$ or $\sqrt{2}/3$ or $\ln(e-2)+1$, but that’s not really a random irrational number. They’re all very special ones that we can write down using a fancy formula, rather than a completely random choice.

So, how could we choose a random number?

Recall that an irrational number can be thought of as an infinite decimal that neither repeats nor ends. So, to pick an irrational number at random, we could just pick digits randomly, one at a time.

Great! Now, you’ve picked a truly random irrational number!

But tell me, what number did you pick?

See the problem?

Choosing one digit, or even a million, is (in theory) not very hard. There are ten digits; you pick one. No problem.
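In code, the finite version really is a non-problem. Here is a sketch (function name mine) that picks any finite number of digits you like:

```python
import random

def random_digits(n):
    """Pick n decimal digits uniformly at random -- fine for any finite n."""
    return "0." + "".join(random.choice("0123456789") for _ in range(n))

print(random_digits(20))  # prints a 20-digit decimal, different every run
```

A million digits would work just as well; the trouble only starts when you need infinitely many.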

But if you have to make an infinite number of choices… Well, it’s easy to say that you should make infinitely many choices, but can you really do it? If you can’t tell me the number you picked, did you really pick a number?

That is the controversy about the axiom of choice.

So, what does the axiom of choice actually say?

The axiom of choice says that, for any collection of (nonempty) sets, you can choose one thing out of each set.

For instance, if we were picking an infinite decimal, like before, our collection of sets would be a bunch of copies of the set of digits 0 to 9, one set of them for each of the infinitely many digits we need to pick. The axiom of choice says that we can pick one digit from each set of digits in order to pick an infinite decimal number. It doesn’t say how to pick those digits, or what digits you pick, just that you can pick them, somehow.
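For a finite collection of sets, no axiom is needed at all: a loop with any concrete rule is a choice function. A sketch (names mine); it is only for infinitely many sets with no describable rule that the axiom of choice has to step in:

```python
# Five copies of the digit set {0, ..., 9}, one per digit to pick.
digit_sets = [set(range(10)) for _ in range(5)]

def choose(sets):
    """A concrete choice function: pick the smallest element of each set."""
    return [min(s) for s in sets]

print(choose(digit_sets))  # [0, 0, 0, 0, 0] -- like choosing all zeros
```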

(To be clear, the axiom of choice doesn’t talk about making random choices, just a choice at all. So, in the exact case of picking digits that we just used, the axiom of choice simply says that there is some infinite decimal we can pick, not that it’s a random one. It’s perfectly valid for the axiom of choice to choose, say, all zeros, and end up with the number 0.)

So why is this axiom so controversial?

The first objection is that you can’t actually get your hands on the object(s) the axiom of choice chose.

Axioms usually represent a basic definition, or a base truth, or something that is “obviously” true. For instance, one of the other basic axioms (of set theory) is that no matter which (counting) number you pick, there’s always a bigger one.5 That seems pretty obvious.

But, with the axiom of choice… Well, just like you couldn’t tell me which number you picked by picking each digit randomly, the axiom of choice simply says you can make a choice, not which one to make, or what the choice is.

If you can’t tell me what number you picked, did you pick it?

How is it “obvious” that you can make such a choice?

This is the argument of the constructivists. In their view, everything needs to be explicit. A choice only makes sense if you can tell me what you picked, or, at least, a way to make a unique choice. The axiom of choice fails this standard, and so should be avoided.

The other objection is that the axiom of choice leads to a number of “obviously false” results.

The most famous of these, we’ve already talked about, the Banach-Tarski paradox. In short, it says that you can take a sphere, cut it into a few pieces, move them around, and rearrange them into two spheres of the exact same size as the original! A bit of black magic, indeed.

The problem is that the axiom of choice is also instrumental in proving key, foundational, “obvious” results as well!

For instance, nothing is more obvious than this: if you have two bags of rice, either one has more grains of rice than the other, or they have the same amount.

But without the axiom of choice, you can’t say the same thing about sets!

For finite sets, of course, this is not a problem. A set with 42 things in it is bigger than one with 27 things. But for infinite sets, it’s not always clear how to compare them.

Like we talked about way back in The size of infinity, the way to compare sets is to line up the things inside them with each other. If we had two sets, say A and B, and each thing in A had a corresponding thing in B, then clearly B is at least as big as A.

The problem is that you can come up with complicated sets A and B where it’s not obvious how to line up things in A with things in B. In fact, without the axiom of choice, you can show it’s sometimes impossible to compare the size of the two sets. And it’s not even that you just don’t know which is bigger. It’s worse than that. The sets both have sizes, but you can’t even compare their sizes.

It turns out that the axiom of choice is equivalent to saying that you can always compare sizes of sets. In other words, either you accept the axiom of choice, or else you can’t always compare sizes. You can’t have one without the other.

There are a lot of other theorems that are equivalent to the axiom of choice. There’s a whole section of the Wikipedia page listing some of the equivalent results, some more intuitive, some less.

To quote Jerry Bona, “The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn’s Lemma?” The joke is that all of them are actually equivalent.

So, all of this leads to two very important questions.

First, you can’t “disprove” an axiom, since axioms are base assumptions. But can you prove that the axiom of choice is not consistent with the other axioms?

A consistent set of axioms is a set of assumptions that can’t prove contradictions. For instance, if you could use your axioms to prove that 0=1, that would mean your axioms were not consistent.

If you could show the axiom of choice caused inconsistencies, all the accountants in the world would feel more relieved, since then we could throw out the axiom of choice, along with its impossible consequences, like the Banach-Tarski paradox.

However, Gödel again comes to the rescue. In 1940, Gödel showed that the axiom of choice does not itself cause any inconsistencies.6

Okay, so we can’t throw out the axiom of choice because of inconsistencies, no matter how much the Banach-Tarski paradox assaults our sensibilities.

But maybe we can do the opposite. The second question about the axiom of choice is whether we can prove it true using only the other axioms. In other words, do we need to assume the axiom of choice at all, or do we get it for free?

Here, again, we get an interesting answer. In 1963, Cohen showed that it’s impossible to prove the axiom of choice from the other standard axioms.

So, where does that leave us, intrepid explorers of mathematics?

As a (non-obvious) consequence, Cohen’s proof means we are free to either assume that the axiom of choice is true, or, in fact, that the axiom of choice is false!  Either way is fine for math.

How do mathematicians deal with the controversy?

Originally, mathematicians were resistant to the axiom of choice. One well known story is about Tarski (of Banach-Tarski fame). He used the axiom of choice to prove a result about the sizes of infinite sets.7 He submitted the paper to a journal. In response, two editors rejected his paper.

Their argument? Well, Fréchet wrote that using one well-known truth to prove another well-known truth is not a new result. Meanwhile, Lebesgue wrote that using one false statement to prove another is of no interest.8

Nowadays, however, most mathematicians accept the axiom of choice without too many reservations. It’s simply too useful in proving too many foundational results in many fields. It’s consistent, so there doesn’t seem to be any reason to not use it, despite the occasional paradox it causes.9

<– Previous Post: Double for Nothing, part 2
First post in this series: How Long is Infinity?

Thanks for sticking with me! Those of you who came from the recent video by 3Blue1Brown may not have realized, but I haven’t posted recently. I’d planned on being a professor since high school, but a few months ago, I decided I was going to change careers. Learning as much computer science as I could and searching for a job and moving and so on took a lot of time and mental effort, which led to not many (any?) blog posts.

However, we’ve now settled down in Albuquerque, NM, where I just started a job as a software developer for a small company making scientific software. New posts should now continue to come out about once a month. Yay! More awesome math!

(Also, if you haven’t checked out 3Blue1Brown before, you totally should. He’s pretty awesome too.)

1. We started talking about this result in Double for Nothing: the Banach-Tarski Paradox, though we started talking about the basic ideas back in How Long is Infinity?
2. Mathematical choice, like we’re talking about, is separate from the question of free will in philosophy. Though, on that front, I personally think it’s foolish to believe in anything other than free will. If we don’t have free will, it doesn’t matter what you believe, because either way, your actions are determined. If we do have free will, it’s clearly important to believe you have it, so that you can make better choices, and have the ability to change. So, in either case, believing you have free will is the correct choice.
3. The only competitor is the parallel postulate of Euclid.
4. In How Long is Infinity? we showed that, in fact, 0% of the numbers between 0 and 1 are rational, at least talking in terms of length. In the much earlier The size of infinity and A bigger infinity, we talked about one of the most mind-blowing results in mathematics, that the infinite size of all numbers between 0 and 1 is a larger infinity than the infinite size of all the rational numbers.
5. I’m referring to the axiom of infinity in Zermelo-Fraenkel set theory, the standard set theory I’m basing everything off of. The statement is more technical and complicated, but it’s there to establish there are infinitely many things.
6. This doesn’t contradict Gödel’s incompleteness theorem, which says that you cannot use a set of axioms to prove their own consistency. This result says that assuming the standard axioms of set theory are consistent, then adding the axiom of choice doesn’t add any inconsistencies. It’s still possible that the standard axioms are inconsistent themselves.
7. The theorem said that any infinite set $X$ has the same size (cardinality) as the “two-dimensional” version, the Cartesian product $X \times X$. For example, the line $\mathbb{R}$ has the same cardinality as the plane $\mathbb{R} \times \mathbb{R}$.
8. This story was recounted by Jan Mycielski in Notices of the AMS vol. 53 no. 2 page 209
9. Those who reject the axiom of choice usually do so on philosophical grounds. Again, the axiom of choice simply says you can make a choice, not what that choice is. And if you can’t get your hands on it, did you really make a choice? Constructivists reject anything you cannot explicitly construct. Of course, they also reject the “Law of excluded middle,” which says that every statement is either true, or its negation is true, on which half of logical thought is built.

## Double for Nothing, part 2

I know it’s been a while1 since the last post, but we’re not quite dead yet…

I’ll explain why it’s been so long at the very end of the post, but in the meantime, we’ve got some math to explore!

The Banach-Tarski paradox says that you can take a ball, cut it up into a handful of pieces, then rearrange them in order to get two balls identical to the original.

Impossible, right?

Wrong!2

It certainly seems impossible, though. After all, if all you do is cut up the ball, and move the pieces around (no stretching required!) then the volume of stuff from the ball shouldn’t change.

But you can duplicate a ball! The trick is that rotations can create points, seemingly out of nowhere.

Here’s a simple example: Take a circle, with a single point missing.

How can we fill back in that hole? The obvious thing to do is just infinitesimally stretch the circle to fill in that single-point gap.

But let’s rule out stretching — we know that stretching things mathematically can create points. Can we do it with just a rotation?

If we pick just the right set of points to rotate, we can!

Let’s start by taking the point one radian3 clockwise of the original point. If we rotate it counterclockwise around the circle, it’ll fill the gap we had.

So, of course, that point should be in the set we will rotate. Of course, moving that point leaves another gap, so we need to also rotate the point one radian clockwise of that. And then we need another point to fill in that gap…

…and so on, and so forth.

The trick is that we picked a special angle. Recall that a circle has 360 degrees, or, equivalently, $2\pi$ radians. If we keep picking points one radian clockwise of the original gap, we go around the circle once, then twice, then more, but we will never end up back where we started. That is because $2\pi$ is an irrational number! (Details in this footnote:4)

The original hole is filled by the point that was 1 radian away. The hole left at 1 radian away is filled by the point two radians away. That hole at 2 radians is filled by the point at 3 radians… and so on, so a billion radians later, the hole left by the point a billion radians away is filled in by the point a billion and one radians away.

Sure, by this point, we’ve wrapped around the circle, oooohh, say, 160 million times, but we have never repeated a point! All thanks to $2\pi$ being irrational.
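You can watch this numerically. A sketch (floating point can only check distinctness approximately, so this is evidence rather than proof):

```python
import math

# Where do we land if we step around the circle 1 radian at a time?
angles = sorted(k % (2 * math.pi) for k in range(1000))
closest_pair = min(b - a for a, b in zip(angles, angles[1:]))
print(len(set(angles)), closest_pair)  # 1000 distinct landing spots
```

The landing spots get arbitrarily close together as you take more steps, but they never exactly coincide.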

So, why isn’t there a hole at the end of all this? Well, there is no end. We’re kind of pulling a point out of infinity to fill the gap.5

Of course, creating a single point is not so impressive. So, let’s get back to the Banach-Tarski paradox.

As we talked about in the last post, the key trick is not really about geometry at all. I’m going to review some of what we discussed last time, but if you haven’t read or don’t remember Double for Nothing: the Banach-Tarski Paradox, you probably should before finishing this post.

If we take a ball, we can rotate it in different directions: forward (F), backward (B), right (R), and left (L). And we could do multiple rotations in a row; for example, FRB would be a backward rotation, then a right rotation, then a forward rotation (the letters are read right to left, like function composition).6

We can put all of these “words” representing rotations into a graph, where, for each letter in the word, F means you go up, L left, etc.. The center point, which represents no rotations, we can label N.

Thus, a series of rotations is represented by a word which is represented by a point on this branched graph. Of course, we can do any length of words (i.e., any number of rotations), so we get an infinite graph.

The “words” starting with L represent the rotations ending with a left rotation, and are the ones on the left side of the graph. (Again, words are series of rotations are points on this graph.)

The key observation from last time was that if we take the “words” starting with L, then undo the last rotation by rotating right, we end up with all the words except the ones on the right!

Where do all these extra points come from? Well, like the circle example from earlier, in some sense we’re “pulling them from infinity,” i.e., pulling them out of those infinitely small branches down in the graph.

The key of the Banach-Tarski paradox is figuring out how to get this “creation” of points on the graph to work on a ball instead.

To do that, we need to associate points in the ball with this graph somehow. Fortunately, the basic idea is not too hard. The points on the graph are supposed to represent words which represent series of rotations of a ball. Thus, we’ll try to associate each point on the graph, i.e., a word, with points on the sphere that we find via those rotations.

Grab the ball. The word that represented no rotations, N, we’ll associate with the “north pole” of the ball. The north pole, though, is just a point on the surface, and we want to duplicate the entire ball. So, let N actually represent all of the points below the north pole through the inside of the ball, all the way down to the core. (Though not including the center point at the core itself.) Thus, N represents a little line segment.

For every other series of rotations, after you rotate, the word (i.e., point on the graph) representing those rotations will represent the line from the new north pole to the center of the ball. For example, for the rotation L, you would rotate left, and that new north pole and the points under it are now “L.”

To make this all work, it’s very important that two different words, i.e. series of rotations, don’t represent the same set of points. To guarantee that, we need to pick the angle we rotate carefully. Fortunately, like in the circle rotation example earlier, it’s not too hard. One traditional angle is $\arccos(1/3)$, but there are infinitely many angles that would work.7 If we pick that angle, each word, or series of rotations, will rotate a new point to the north pole.
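We can at least spot-check this angle numerically. The sketch below (the matrix conventions and names are my own) builds the rotations by $\arccos(1/3)$ about the x- and z-axes and verifies that every reduced word up to length 3 (one where no letter sits next to its inverse) gives a different rotation matrix, which is the “no two words collide” property the construction needs:

```python
import math

c, s = 1 / 3, math.sqrt(8) / 3  # cosine and sine of arccos(1/3)

# Rotations about the x-axis (F/B) and the z-axis (L/R), as 3x3 matrices.
gens = {
    "F": [[1, 0, 0], [0, c, -s], [0, s, c]],
    "B": [[1, 0, 0], [0, c, s], [0, -s, c]],
    "L": [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    "R": [[c, s, 0], [-s, c, 0], [0, 0, 1]],
}
inverse = {"F": "B", "B": "F", "L": "R", "R": "L"}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matrix(word):
    M = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # start from the identity
    for letter in word:
        M = matmul(gens[letter], M)
    return M

# All reduced words up to length 3 (never a letter next to its inverse).
words, frontier = [""], [""]
for _ in range(3):
    frontier = [w + g for w in frontier for g in gens
                if not w or g != inverse[w[-1]]]
    words += frontier

mats = [matrix(w) for w in words]
distinct = all(
    max(abs(mats[i][r][k] - mats[j][r][k])
        for r in range(3) for k in range(3)) > 1e-9
    for i in range(len(mats)) for j in range(i + 1, len(mats))
)
print(len(words), distinct)  # 53 words, all giving distinct rotations
```

This is only a finite check, of course; the full statement (the rotations generate a free group) is a theorem.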

Great! We’ve taken the ball and identified its points with the words, which are points on the branched graph. And those points are spread out all over the ball — it turns out you can get points spread out evenly all over the ball with an arbitrary number of rotations.

Except…

Well, we’ve actually missed almost all the points of the ball!

Even though our words represent points that are evenly spread out all over the ball, so that it would look like we’ve covered everything, we’re actually missing “most” of the points in the ball!8

Fortunately, there’s an easy way to fix this.

Simply pick one of the points we missed on our first go, and start again with that point as the new north pole. We can associate the word N (for no rotations) with both this north pole and the original one we picked. Then, we can do all the rotations, like before, and associate their words with the new points as well. After doing this, we’ll have two line segments of points from the surface to the middle of the ball for each “word” like N or FR or BBBR, but we’ll lump them together into one set of points.

Unfortunately, we’re still missing most of the points in the ball. So, we pick yet another new north pole from the leftover points, and do it again. And again, and again. In fact, we have to do it infinitely many times.9

But, in any case, we’ve split up the ball into a bunch of pieces. For instance, N is associated with all the infinitely many “north poles” we picked, along with the line segment underneath them, and a similar set of line segments for every other word, or set of rotations.

Now we can use the trick from the last post.

Let’s call the set $S(L)$ the set of all the points and line segments in the ball represented by words that start with L, i.e., where the last rotation was to the left. We can similarly define $S(R)$, $S(F)$, and $S(B)$. The set $N$ will just represent all the points associated with the north poles.

The set $S(L)$ is all the points found by rotating the sphere, where the last rotation was to the left. So, if we take all those points and rotate them to the right to undo that last rotation, we get all the points in $S(L)$, $S(F)$, $S(B)$, and $N$, exactly like we did for the branched graph! As before, we call the left-last points, rotated right, $R\,S(L)$.

We can do a similar thing: $F\,S(B)$, the points found by rotating backwards last but then having that last rotation undone, is all the points in $S(B)$, $S(R)$, $S(L)$, and $N$ put together!

That, right there, is the heart of the Banach-Tarski paradox.

It’s easy to get the two balls from what we’ve done. To make the first ball, take all the points in $S(L)$ and $S(R)$, rotate $S(L)$ to the right to get $R\,S(L)$, then put $R\,S(L)$ and $S(R)$ together, and you have a ball! The second ball is similar: take all the points in $S(B)$ and $S(F)$, rotate $S(B)$ forward to get $F\,S(B)$, then put those two sets together, and you have the second ball!

So, there you have it. By cutting up the sphere into a few pieces, and then just rotating and moving them around, you can turn one ball into two!

Admittedly, we’ve glossed over a few important details if you want this to work out perfectly, but I think they can be hidden in a footnote.4

This is quite the paradox! You shouldn’t be able to cut a ball into pieces and put them back into two spheres!

From a physical point of view, this process is, of course, impossible. Not only are the sets we’re cutting the ball into hopelessly complicated and delicate, but they assume that matter is infinitely divisible, which is false. (Subatomic particles are, after all, a particular size, and it’s hard to cut a quark into pieces…)

But even from a mathematical view point, this seems like it shouldn’t work. And, so, if we think that way, we can look back at our assumptions, and see which of the axioms we used seems the most questionable, and try to get rid of that assumption.

What’s fascinating about this proof is that the key problematic axiom is so innocuous that, if you didn’t know what you were looking for, you would probably never find it. The step that is the most questionable is the one where we choose points as new north poles.

The thing is, when we make that choice, there’s no reason to pick one point over another. They’re all just as good as any other. Plus, we have to make infinitely many of these choices, which is also a bit… uncomfortable.

Doing this requires the Axiom of Choice, perhaps the most infamous axiom in mathematics.

And all it says is that you can choose things.

In the next post, we’ll take a look at the axiom of choice and why it’s so important… and infamous. (Assuming the Missus can find the time to draw…)

An excellent video on this paradox, and its proof, can be found on Vsauce’s Youtube channel. In the description, he also lists many resources which I found useful in preparing this post.

Oh, life plans. How fleeting they are.

Since the last post, we’ve had Thanksgiving, a funeral with associated trip, finals, the flu, bad colds, Christmas, the start of a new term, 10 interviews, and so forth. It’s been… busy. That would explain some of it. But another big thing is the complete upheaval of my life plans!

See, to become a professor, after you get a PhD, you usually spend 2 or 3 years at one or two universities as a “postdoc,” which is what I am now. These positions are temporary, and are not expected to lead to permanent positions at said universities.

So, I applied for permanent jobs this school year. Lots of them. I had a bunch of interviews, but I didn’t end up getting hired as a professor.

I could probably scramble and get another postdoc at another university and then do another cycle or two of applications for professor jobs, but… well, academia is stressful. Awesome, to be sure, but stressful too. (When I talked to my mentor about his career path, his frequent use of the phrase “panic mode” didn’t exactly encourage me.)

So, after a lot of thought, I’ve decided to leave academia, and become a computer programmer instead. Of course, since I have limited experience, that means I have till my contract at the university ends to learn enough programming to get hired somewhere. It turns out that covering years of computer science education on my own is time-consuming.

For obvious reasons, then, this blog, though it will probably continue, is going to be updated less frequently in the coming year.

<– Previous Post: Double for Nothing: the Banach-Tarski Paradox
First post in this series: How Long is Infinity?
–> Next Post: The most controversial axiom of all time

1. The Missus, here: it was my fault. Apparently one cannot blog without cute pictures from wife.
2. Well, okay, in real life, it’s not like you’re going to be able to duplicate balls of gold with this in a get-rich-quick scheme or anything, but mathematically it works!
3. In case you’ve forgotten, radians are just another unit for measuring angles, like Fahrenheit and Celsius are different units for measuring temperature. By definition, a circle has 360 degrees, or $2\pi$ radians. A triangle’s angles (on a flat surface) add up to 180 degrees, or $\pi$ radians.
4. Well, even here, I won’t go over the details of the proof, but I’ll at least mention some things we glossed over. The most obvious is, perhaps, that we never worried about the very center point of the ball. You can take care of that with a trick like we did by rotating the circle to fill in a point. Also, you may not have noticed, but we only used $S(L)$, $S(R)$, $S(B)$, and $S(F)$ in making the two copies. We never used the original copy of $N$! So, actually, we ended up with two balls, plus some leftover stuff. If you cut up the sphere in a slightly more complicated way (it’s not that bad), you can make sure you don’t end up with this junk. The most subtle problem is that not all points of the ball rotate when we do our rotations. This ends up meaning that we’ve missed some points. But there’s a way to take care of these points as well. More details can be found on the Wikipedia page, or in this short paper.
5. This boils down to a variation of Hilbert’s Hotel, the paradox we talked about way back in our very first post, Infinity plus one.
6. Recall that we don’t allow letter combinations that would undo each other, like LR or BF.
7. It takes a bit more work than the circle example to be one hundred percent sure that points don’t overlap after any rotation, but this angle was chosen since it makes the math not too horrible. A proof that it doesn’t overlap can be found on the second and third pages of this paper.
8. The fundamental problem is that our rotations only let us get countably many points on the surface of the ball, while the surface itself has uncountably many points. As we talked about in How Long is Infinity?, countably many is essentially nothing compared to uncountably many.
9. More specifically, we have to do it uncountably infinite times in order to get them all.