Wednesday, February 24, 2016

When the Moon Hits Your Eye, There's Some Math Using Pi

It was warm and clear this past weekend, so I did some late night observation with my new binoculars. Weather is the bane of astronomers, unless you only work with space telescopes or you do neutrino observation or now even gravitational wave detection. Actually, I'm not so sure about that last one. I imagine a light drizzle could easily be mistaken for colliding black holes.

You're welcome, Celestron.
Anyway, it struck me while I was observing that the moon is very bright. Whenever I found it in my binoculars, I flinched momentarily before I adjusted to the stark change between black sky and white moon. And indeed, at night, the moon is by far the brightest thing in the sky (except for inconveniently placed streetlamps).

But it turns out the moon is pretty dim, too, when considered from another perspective (no, not the dark side). So why does the moon shine in the first place? While it does have a temperature, the vast majority of its thermal radiation is not in the visible range. Instead, of course, the moon borrows its light from the sun, reflecting it back toward us.

Naively, then, you might expect the moon would be roughly the same brightness as the sun. And when you look at a full moon hovering imperiously in the night, washing out all the stars in the sky, it does seem darn bright. However, our eyes (and the rest of our senses) are pretty terrible at discerning objective levels of radiant power. The moon is bright only relative to the sky and the stars. In astronomical terms, the sun is much, much more luminous than the moon.

Measured with fancy equipment, the apparent magnitude of the sun is about -27, while the apparent magnitude of the moon is roughly -13. If you remember from my nerdrage over Star Wars, larger magnitudes are dimmer, the visible stars are around magnitudes 1-6, and the scale is not linear. From this we can tell that the sun is way brighter than the moon, the moon is way brighter than the stars, and astronomers use a needlessly cumbersome system for quantifying brightness.

If you do the math, 10^(14/2.5), a magnitude difference of 14 is about a factor of 400,000 in brightness. Yes, objectively, the sun is 400,000 times brighter than the moon (as seen from Earth). So when the moon shines its paltry reflected sunlight back at you, what happens to the other 99.99975% of the light? How do we go from a sun’s worth of light to one moon unit (a Zappa)?
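
If you want to check that arithmetic yourself, here's a quick Python sketch using the rounded magnitudes quoted above (so the output is approximate, not an official figure):

```python
# Rounded apparent magnitudes from above. Each magnitude step is a factor of
# 100**(1/5) ~ 2.512 in brightness, so a 14-magnitude gap is 10**(14/2.5).
m_sun, m_moon = -27, -13

ratio = 10 ** ((m_moon - m_sun) / 2.5)
print(f"Sun-to-moon brightness ratio: {ratio:,.0f}")      # ~398,000
print(f"Sunlight unaccounted for: {1 - 1 / ratio:.5%}")   # ~99.99975%
```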

It's at this point you may recall that different objects reflect and absorb different amounts of light. That's why color exists, after all. You can also measure an overall amount of reflectivity, which gets called albedo. The Bond albedo of an object is just the percentage of light that is reflected rather than absorbed. Freshly fallen snow has an albedo as high as 0.9, whereas asphalt can be as low as 0.04. The moon's average albedo is 0.12, which means 88% of the sun's light is absorbed. But 88% is not 99.99975%. From albedo considerations alone, the moon is still too bright by a factor of 48,000. How does the moon get rid of the rest of its sunlight?
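
Here's the same kind of back-of-the-envelope check for the albedo step, again using the rounded numbers from above:

```python
# Rounded figures from the text: overall brightness ratio and average lunar albedo.
sun_over_moon = 400_000
albedo = 0.12

# Albedo only buys you a factor of ~8 in dimming, which leaves this much unexplained:
leftover = sun_over_moon * albedo
print(f"Dimming still unaccounted for: factor of {leftover:,.0f}")  # ~48,000
```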

The problem is that we're thinking of the moon as a giant, flat mirror directly reflecting the sun's light toward us. But the moon is not a mirror. You can tell this because it doesn't look like the sun. A mirror exhibits specular reflection, which means incoming light bounces off cleanly at a particular angle. If it comes in 30° one way, it bounces off 30° the other way. And since all the light bounces in the same way, mirrors reproduce an image of what’s reflecting off of them.

Ignore everything about this picture that is ridiculous.
Non-mirrors (everything else) reflect light diffusely, which from the name alone suggests the process is not so orderly. On the moon, the properties of the rough, irregular regolith on the surface determine how light is reflected, but the gist is that it’s very strongly dependent on the phase angle. In fact, the moon has an opposition effect, which does tend to bounce light directly back when light is coming from behind us, i.e. when the moon is full. Even still, the picture above doesn't hold.

I admit I struggled with this problem for a bit before finding a suitable answer. Here's what I did to solve it. How do you account for a factor like 48,000? Well, let's compare some relevant numbers. The moon is 384,400 km away from us on average. Its radius is 1,737 km. The Earth's radius is 6,400 km. The distance from the sun to us is 150,000,000 km. Hmm, I can’t think of anything else that might be important.

The distance from the sun can't matter, because we're dealing with the apparent brightness of the sun, which is how bright it looks to us from here on Earth. Distance already factors into the 400,000 figure. The Earth's radius can't matter, because we're talking about how bright the moon is to our eyes. If the Earth were the size of a pin (and we were still the same distance from the moon), it wouldn't affect the light that hits our eyes. So the only two numbers that can matter are the moon's radius and its distance from us.

Well, what's 384,400/1,737? 221. 221 doesn't look very good, but if we square it, we get about 49,000. That's very close, within a few percent, of our factor of 48,000 (which is a heavily rounded figure). Okay, but why does squaring matter?*
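
Or, as a two-line sanity check with the same rounded numbers:

```python
# Average Earth-moon distance and lunar radius, in km (rounded values from above).
moon_distance_km = 384_400
moon_radius_km = 1_737

ratio = moon_distance_km / moon_radius_km
print(f"Distance / radius: {ratio:.0f}")       # ~221
print(f"Squared: {ratio ** 2:,.0f}")           # ~49,000, close to our 48,000
```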

In the illustration above, we're thinking that the moon intercepts the sun's light and shines this perfect sun laser back at us. If that's the case, then we are hit by a circle of light with the area of the moon's disc. The area of a circle is πr². (I told you π was involved.) If the above relation is valid, then we are really being hit by a circle of light with the radius of the distance between the Earth and the moon. How could that be? Imagine that instead of the moonlight bouncing straight back at us, it spreads out in a cone, with the angle between the edge of the cone and the line connecting the Earth and the moon being 45°.

Jobs I won't get upon completion of my degree include: NASA artist
In that case, the cone's cross-section forms a right triangle, with half the base (the radius) equal to its height, the Earth-Moon distance. And if you turn the cone toward us, you see that the base is a circle. So instead of the light being concentrated into a disc the size of the moon, it's spread out into a disc with a radius of the lunar distance, which dilutes the light by a factor of 48,000 or so, because the moon is much farther away from us than it is big.

Why would the light reflect back that way? It probably doesn't, exactly. The process by which the moon reflects light is complicated and is modeled with something called a bidirectional reflectance distribution function. But the opposition effect means a full moon tends to reflect light directly back, so everything coming back at an angle of 45° or less seems reasonable. But we're ignoring for a moment that the moon is not a point source, so that right circular cone probably looks different at other latitudes. On average, though, it works out to produce the above picture.

Anyway, that's probably enough MS Paint illustration from me for one blog post. Also, this is a reasonable length, so I better stop now before things get out of hand.

*Update: My solution to the problem posed in this post is almost certainly wrong. I believe I was right about the square relation between the moon's radius and distance from us, but wrong about why that relationship is important. That's the tricky thing about proportionality arguments: without constants, you can fool yourself about what you're talking about. Anyway, I think I've figured out the real answer.

So one of the issues that bothered me about my solution is that it relies on the moon being this weird, hard-to-study surface, but gives you an answer with a simple and neat geometric interpretation. That seemed unlikely, but the math worked, so I accepted the answer anyway. But it turns out that the moon's surface is both harder and easier to analyze than I realized. Before I get to that, however, there's another important issue.

When I first considered this problem, I assumed the answer was that the inverse square law causes the light reflected from the moon to diminish so that it is less luminous than the sun. But after some thought, that didn't seem plausible. You see, when the sun's light travels to us, it loses some intensity because of the inverse square law, just like gravity gets weaker with distance.

For the moon's light, however, that light goes the extra distance from the Earth to the moon and back again (for a full moon). But the distance to the sun is 150,000,000 km, and the distance to the moon is 384,400 km, which means the additional distance traveled is only about 0.5% more, and the inverse square law only costs you about 1% of your intensity for that, not the factor of 48,000 we needed. So I figured that couldn't be the answer.
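
To put numbers on that, here's a rough sketch that lumps the whole trip into a single inverse-square factor (itself an oversimplification):

```python
# Rounded distances from the text, in km.
d_sun_km = 150_000_000
d_moon_km = 384_400

# For a full moon, the light travels roughly an extra Earth-moon distance out and back.
extra_km = 2 * d_moon_km
print(f"Extra path length: {extra_km / d_sun_km:.2%}")    # ~0.5%
loss = 1 - (d_sun_km / (d_sun_km + extra_km)) ** 2
print(f"Naive inverse-square loss: {loss:.1%}")           # ~1%
```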

What I was failing to consider, however, was that light reflecting off of the moon changes the applicability of the inverse square law. The inverse square law isn't mysterious. Rather, it's a consequence of geometry in a 3-dimensional world. If an object emits light radially from a point source, then at any given distance from the source, the light will be spread out on a spherical shell around the source. As the distance grows, the light falls off with the square of the distance, because the surface area of a sphere is 4πr².

But any real emitter is not actually a point source. The sun radiates the light we see from its surface, which is (almost perfectly) spherical. All this means, however, is that there is some defined power at the surface, and we can imagine that power increasing to infinity as we dip below that surface to a point. But here's the key: the power radiated per unit area has some value at 1 radius out (the surface), and that power drops to 1/4 its original value at 2 radii, 1/9 its original value at 3 radii, and so on. Note that this exactly mirrors (ha) my original answer. At 221 moon radii (384,400/1,737), the power has been reduced by a factor of 221² ≈ 49,000.
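
In other words, here's a small sketch of the same inverse-square bookkeeping, measured in lunar radii:

```python
# Surface brightness falls off as 1/n**2 when you're n lunar radii from the center.
moon_distance_km = 384_400
moon_radius_km = 1_737

for n in (2, 3, round(moon_distance_km / moon_radius_km)):
    print(f"At {n} lunar radii, brightness is down by a factor of {n ** 2:,}")
# The last line prints ~49,000 -- right next to the albedo-adjusted factor of ~48,000.
```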

This answer being applicable, however, requires that light reflected from the moon is emitted radially (from the half that is facing the sun, anyway), which seemed implausible to me in the beginning given how complicated the moon's regolith is supposed to be. But it turns out that if you assume the moon is an ideal diffuse reflecting surface, then radial emission is what happens.

For a specular reflecting surface, the incident angle of the light exactly determines the angle of reflection. But for an ideal diffuse surface, the incident angle is not important at all, and the light reflects in a random direction. If the light reflects entirely randomly, then on average the angle of reflection will be exactly perpendicular to the surface, because any angle away from perpendicular will be balanced out. So on average, a diffuse reflector looks like a radial emitter and follows the inverse square law.

The complicated surface of the moon, with its opposition effect, means that the "on average" part up there is not strictly speaking true, but it apparently doesn't have a big enough effect to eliminate the approximate inverse square relation that shows up. The reason light radiating outward from the moon seems to drop off more quickly than light radiating from the sun is that a radial emitter with the sun's apparent brightness at one lunar radius is actually a much weaker source than one with the same apparent brightness at one solar radius. If you expanded the lunar emitter to the size of the solar emitter, its power per unit area would be diluted accordingly, leaving a dimmer surface, so of course its light falls off more quickly with distance than the sun's does.

Well, anyway, so much for this post being a reasonable length.

Sunday, February 14, 2016

The Equivalence Post

About twenty years ago--maybe right around the time LIGO was finally getting funding, when the gravitational waves it just detected were still a couple dozen star systems away--my elementary school class did a living wax museum. We researched a historical figure, dressed up as our subject, and, when a "visitor" to the museum pressed a red dot on our hand, recited a first-person speech based on our research. Unrepentant early nerd that I was, I chose Albert Einstein.

I don't really remember anything about the contents of my monologue. I probably gave a brief biographical sketch, but likely left out the part where Einstein bribed his first wife into divorce with Nobel money he'd yet to receive. I probably talked about the theory of relativity and how it merged space and time, but likely didn't include anything about Riemannian geometry and metric tensors.

My knowledge of the scientist and his science was patchy, to be sure, but that didn't stop me from admiring him. Einstein is the model of the lone genius working tirelessly, using nothing more than the power of his mind to change the world. For a long time, I imagined he and I were equivalent. I imagined that I alone knew the secrets of the universe and that my solitude represented nothing more than the gap in intellect between myself and others.

Before the inevitable deconstruction of that paragraph, let's talk a bit about Einstein the genius. While E=mc² is his most famous equation, it's not the equation that made him famous. Physicists will tell you that general relativity was his crowning achievement.

GR grew out of Einstein's attempt to extend his special theory of relativity to gravity. SR and electromagnetism fit together perfectly, but gravity did not behave. According to Newton, gravity acts instantaneously, and that didn't sit well with light speed being the ultimate limit. To reconcile gravity with relativity, Einstein looked at a subtle difference between the electrostatic force and the force of gravity.

When two charged particles are sitting next to each other, the electrostatic force that one feels is proportional to the product of their charges divided by the square of the distance between them--simple enough. When two masses are sitting next to each other, the gravitational force on one is proportional to the product of their masses divided by the square of the distance between them. The forces are nearly identical, just swapping charge for mass.

But when a particle feels a force, it follows Newton's second law and accelerates by an amount inversely proportional to its mass, which is what inertia is all about. This means the mass term from gravity and the mass term from inertia cancel out and bodies under the force of gravity experience the same acceleration regardless of their masses. We know this; it's just the idea that a hammer and a feather (ignoring air resistance) fall at the same rate.
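
Written out as a worked equation (this is just the textbook Newtonian statement of that cancellation, nothing specific to this post):

```latex
% Newton's gravity on a small mass m near a large mass M, set equal to F = ma:
\[
  F = \frac{G M m}{r^{2}} = m a
  \qquad\Longrightarrow\qquad
  a = \frac{G M}{r^{2}},
\]
% so the acceleration doesn't depend on m at all -- hammer and feather alike.
```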

Thank you, NASA.
This quirk of gravity gets called the equivalence principle, because it seems to show that "gravitating" mass and "inertial" mass are equivalent, even though there's no particular reason why they need to be.

As Einstein thought about this peculiarity of gravity, he was struck with what he called "the happiest thought" of his life. He postulated a modification to the equivalence principle, which is that being in a gravitational field is equivalent to being in an accelerated reference frame. What he meant was that gravity is not a real force but an effect we observe, so there's no difference between your car seat pushing up against you when you hit the gas and the ground holding you up.

The link to the other equivalence principle is that, in free fall, any object falling with you moves at the same rate, and the same thing is true in an accelerated reference frame, because the acceleration you feel is a result of the frame (your car, a rocket) and not your mass.

This happiest thought led Einstein to the conclusion that being in free fall in a gravitational field is just as "natural" as being at rest. When you do feel a force (your car seat, the ground), that's just an object getting in the way of your natural path through spacetime. As usual for Einstein, his next step was to imagine what this meant for light.

Assuming his principle is true, weird things happen in gravity. Say you're in a rocket ship at rest in space. If a beam of light comes in one window, it will trace a straight line through the rocket ship and out another window. If you're moving at a constant speed, you observe the exact same thing, because special relativity says you can't tell the difference between different inertial frames.

If you're accelerating, the light will trace out a parabolic curve, because you're moving faster when the light leaves the rocket than when the light enters it. The equivalence principle says you can't tell the difference between gravity and acceleration, so the same thing should happen if you're in a gravitational field. Light passing near the Sun, for example, will curve.
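
For a rough sense of the geometry, here's a back-of-the-envelope version of the rocket picture; w and a are just labels I'm introducing for the cabin's width and the rocket's acceleration:

```latex
% Light crosses a cabin of width w in time t = w/c. In that time the
% accelerating floor picks up a displacement of (1/2) a t^2, so inside
% the rocket the beam appears to sag by a parabolic amount:
\[
  \Delta y = \tfrac{1}{2}\, a \left(\frac{w}{c}\right)^{2},
\]
% absurdly small for any real rocket, but not zero.
```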

Now it's all well and good to say this happens because of the equivalence principle, but that's not a mechanism. If there isn't a force causing the light to curve, what's doing it? Einstein says this is the wrong question to ask and that what looks like a force is just light taking the only path available.

Here's an imperfect analogy: imagine you're driving up a mountain, maneuvering through twisting switchbacks. If you veer one way, you fall off the mountain. If you veer the other way, you crash into the side of it. So you stick to one narrow path. To the GPS satellites monitoring the position of your phone (but not the mountain or the road), it looks as if your phone, you, and the car are being pushed around by some mysterious force, but in reality you are simply following the only path available.

Except you might think, well that works for light zooming around at 300,000 km/s, but what if there's nothing propelling me? Why am I following any path at all? And the answer is that we are all following a path constantly through spacetime. We're moving forward through time. But in the presence of a gravitational field, spacetime gets warped, and your straight path through it moves a little bit out of time and into space. The "speed" you had going through time gets converted into speed in space, which is why clocks slow down close to a black hole.
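
The "clocks slow down" remark can be quantified with the standard Schwarzschild time-dilation formula, quoted here for flavor rather than derived:

```latex
% Ticking rate of a clock hovering at radius r outside a non-rotating mass M,
% relative to a clock far away:
\[
  \frac{d\tau}{dt} = \sqrt{1 - \frac{2 G M}{r c^{2}}},
\]
% which sinks toward zero as r approaches the event horizon at r = 2GM/c^2.
```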

Figuring out the specifics of how mass could warp spacetime took Einstein about a decade, but he finally succeeded in 1915, giving the world general relativity. With it came a number of predictions, including the bending of starlight, the correct shape of Mercury's orbit, and the fact that accelerating masses will send out gravitational waves that stretch and shrink spacetime as they pass by. Finally detecting those waves reaffirmed Einstein's genius one more time a century after he first proposed them. And all of that came from Einstein tinkering around with the fact that all objects fall at the same rate.

I said earlier that I equated myself to Einstein, but the truth is I'm no Einstein. I'm a pretty smart guy, but not a genius, and certainly not one of the greatest scientific minds in history, capable of deducing fundamental and quantitative physical truths about the universe from simple thought experiments. What can I possibly hope to achieve compared to that?

But there is an equivalence between me and Einstein, because in reality he was no Einstein, either. It took him a decade to complete general relativity because, talented though he was at math, he was not a mathematician and had to learn an entirely foreign branch of it to make his theory work. He got help from a mathematician friend of his, Marcel Grossmann, who was familiar with Riemannian geometry. That branch of math was invented in the 19th century by a couple of guys, including Bernhard Riemann.

The idea of looking at space and time as a unified thing was partly inspired by Hermann Minkowski, who applied geometrical concepts to Einstein's special relativity. Before Einstein even got to special relativity, which was critical for getting to GR, he frequently discussed difficult subjects with a group of like-minded friends that maybe ironically called themselves the Olympia Academy. And most of the pieces for SR were put in place by earlier physicists, such as Hendrik Lorentz and George FitzGerald.

Black holes were first theorized about by Karl Schwarzschild, who found one of the simplest solutions to Einstein's field equations while fighting in the trenches during WWI. Roy Kerr figured out how rotating black holes behave. And many others over the ensuing decades contributed to the theory.

As far as gravitational waves are concerned, Einstein himself waffled over whether they even existed. But even so, he originally showed only that they could exist and radiate away energy. Solving general relativity for the shape of gravitational waves emitted by two inspiraling, merging black holes took until the 90s. In fact, it was only accomplished with the help of supercomputers using numerical techniques.

And even ignoring the many contributions from theorists not named Einstein, his prediction about gravitational waves would have meant nothing if we did not have the means to detect them. The feat accomplished by LIGO this past week involved scientists who are experts in interferometry, optics, vacuum chambers, thermodynamics, seismology, statistics, etc. The effort required theorists, as well as experimentalists, engineers, and technicians.

I don't mean to imply that Einstein's work would be for naught without the janitors who cleaned his office, that he couldn't have done it without all the little people supporting him. I mean that Einstein's contribution to the discovery was only one part of a vast web of contributions by a host of extremely talented people, alive and dead, who did things Einstein couldn't have done.

On Thursday, we all learned the magnitude of what they had accomplished. Rumors of the discovery had been swirling around for a while before it was announced. By the time I arrived at school on Thursday to watch the LIGO press conference, I had a pretty good idea of what they were going to say.

Yet that didn't detract from the occasion. Packed into a lounge in the physics department, students, TAs, professors, and I--maybe a hundred altogether--watched the press conference webcast on a giant screen. We all cheered when the discovery was confirmed and cheered again when we heard the primary paper had already been peer reviewed. Half an hour in, I had to leave to go to my theoretical astrophysics course. There, the professor and TA set up a projector and we all continued to watch the press conference. When the webcast ended, the professor took questions about gravitational waves.

Being a part of that, in the minutest and most indirect way, was thrilling. It was a day when Einstein's greatest theory was confirmed yet again, when a new field of astronomy began, and when a thousand scientists got to tell the whole world about the amazing thing they had discovered.

There's a certain--possibly strained--equivalence to my wax museum Einstein moment from 20 years earlier. School was involved, as well as a story about Einstein. But this time I was listening to that story. My passion for science and learning has remained constant, but the attitude has changed. Back then, and for a very long time after that, I took joy in knowing more than others, in being the smartest guy in the room.

Now I know that's not the case. But I also know it doesn't matter. We just don't learn about the universe by sitting alone and thinking brilliant thoughts. That is, at most, one part of the process. So I don’t have to be a mythical genius to contribute. I can be a part of something amazing, of humanity's quest to understand the world around us, just by collaborating with others who are as passionate as I am. I haven't done it yet, obviously, but just as Einstein's magnificent theory has been reaffirmed, so too has my drive to be a scientist.

Sunday, February 7, 2016

Who Cares What Old, Dead White Guys Thought?

The title of this post is inaccurate if you don't consider the ancient Greeks to have been white. But that's probably not a discussion I want to get into right now. Anyway, today we're discussing my ancient philosophy course from last semester, or more precisely, my Socrates, Plato, and Aristotle course.

There are two main points I'd like to articulate: (1) if philosophy has made objective advancements in the last 2,400 years, why should we care what philosophers thought 2,400 years ago, and (2) man, I had a really annoying classmate in my ancient philosophy class. In essence, I'm wondering whether it was worth it to take this class, just as I had similar concerns about the value of paper writing in my philosophy in literature class from last spring.

To think about the first point, there are two paths you can go down. First, you can go the "philosophy is the mother of science" route and wonder where that leaves philosophy nowadays. That is, there used to be essentially no distinction between being a philosopher and a scientist. Science is a relatively new word, and people like Newton were referred to as "natural philosophers." Science was just doing philosophy about nature rather than philosophy about justice or god or what have you.

The usual argument you see here is that philosophy birthed the sciences we're familiar with today, and where it's done so, philosophy is obsolete and the science is all that's left. There are still philosophers of physics today (after all, I took a class on that, too), but they're not doing physics. Philosophers of physics no longer ask whether the world is made of four fundamental elements, or if all matter is composed of atoms, or if the planets travel in perfect circles, because physicists have definitively answered those questions (no, depends, no).

So the domain of philosophy has shrunk. Where philosophy about the natural world is still relevant, it's in asking questions about physical models, rather than coming up with the models themselves. (Metaphysicists might disagree, but a lot of modern philosophers don't hold metaphysics in particularly high regard, as I understand it.) Similar shrinkage has occurred in the other sciences, with psychology being one of the latest disciplines to squeeze philosophy further.

Here I want to look at a particularly egregious example from my ancient philosophy course, Plato's tripartite soul. Plato reasoned that a statement and its contradiction cannot both be true at the same time. This is reasonable and one of the foundations of classical logic. Take a statement like, "The sun is yellow." Either that statement is true, or the statement, "The sun is not yellow" is true. They can't both be true, because one implies a contradiction of the other.

So then let's look to the soul. We've all had the experience of simultaneously wanting and not wanting the same thing. "I want to eat that chocolate cake" and "I don't want to eat that chocolate cake" are thoughts we can have at the same time. In the first instance, it's our carnal desire for the cake, but in the second instance, it's our willpower that's talking. But if the law of non-contradiction holds, it can't possibly be true that we can both want and not want a piece of chocolate cake simultaneously.

...unless we have a divided soul, as alluded to above. Plato identifies three different competing interests in the human psyche that can produce contradictory desires. Roughly, these are the appetitive, passionate, and rational parts of the soul. They are distinct and incompatible, Plato argues, otherwise the law of non-contradiction is contradicted.

And that's all well and good, and proceeds from some reasonable assumptions, but it's baloney as far as modern neuroscience and psychology are concerned. What a hundred years of research into the brain have taught is that the brain is really complicated, possibly the most complicated three pounds in the universe, and it's decidedly not true that you can chop it up into distinct, one-pound chunks.

(I've cleverly switched from talking about the soul to talking about the brain, but a distinction between the two was not necessarily important to Plato, and science says that "the mind is what the brain does.")

There are two main ways in which Plato's tripartite soul fails as a theory. The first is that there are probably many components to the human psyche, far more than three. The second is a subtle problem that has plagued philosophers for thousands of years, which is that it's possible for words and concepts such as "want" to have different meanings depending on the context. So you can want something, and you can want* something. The former may mean "desire enough to actively pursue," whereas the latter might be "like thinking about but have no inclination to pursue." In that case, you can not want something, and also want* it, and there is no contradiction.

This is a tricky problem that crops up all over the place, which is why analytic philosophers spend large chunks of their time trying to tease apart just what we mean when we talk about seemingly plain concepts such as free will or beauty or truth.

But if all we have to go on is what remains of a large, sometimes disjointed collection of Plato's writings, it's easy to find flaws in his logic. His work cannot defend itself. It's also possible that those old, dead white guys were just wrong about stuff. They had a limited amount of data and lacked the thousands of years of philosophical tradition (that they began) to draw upon.

Which brings me to my annoying classmate. During lecture, he frequently raised his hand and asked the instructor questions such as, "But doesn't that produce a contradiction?" and "But wouldn't that mean nothing is beautiful?" and "But didn't Plato condone slavery?" And every single time, the instructor would engage with him and answer his questions in a thoughtful manner.

Terrible, right? Provoking the instructor into discussing philosophy with us. Well, yes. We had two lecture periods and one discussion period per week, and he brought up his objections during the lecture period. His interruptions were so frequent that there was material we were never able to cover in class. And all of this was possible because, yes, duh, Socrates and Plato and Aristotle were wrong about stuff. It was very frustrating, but I suspect I'm coming off as kind of petulant here, so let's go back to Plato for a moment.

While Plato did divide the mind into three different parts, he had particular affection for one of those parts: the rational mind. It was through employing the rational mind in dialectic that truth could be revealed. This is where Plato's allegory of the cave comes in. Plato conceived of a metaphor where the reality we perceive is just shadow puppets lit by torchlight that we are forced to watch in some kinky Clockwork Orange setup.

Philosophers, however, have broken out of the cave and can see real objects illuminated by the pervasive, powerful sun. So there's a distinction between the ever-changing, distorted, and 2-dimensional shadows we think of as reality and the constant, colorful, 3-dimensional objects that actually compose reality. When we see a chair, we are only seeing an indistinct, imperfect shadow of a chair that does not fully encompass the essence of true chairness.

At first blush, this whole idea seems patently ridiculous. We all accept that our eyes can deceive us and that reality is maybe actually electrons and protons, but it seems laughable to suggest that in some eternal, unchanging realm there exists the true forms of the objects we behold here. Where is this realm? Is there a form of the electric fan there, the cell phone, the credit card offer?

Well, it's unclear how diverse Plato intended his realm of forms to be, but he almost certainly thought it was populated by mathematical objects. Many ancient Greeks (including Plato) took math and geometry as the model of a priori knowledge, knowledge we could come to know just by thinking logically and without relying on evidence from our senses. To Plato, this meant accessing Platonic forms.

So there's some ideal triangle out there, as well as a perfectly straight, infinitesimally thin line, and also the true form of the number 5. Again, this sounds plainly absurd. But let's look at a particular number, such as the ratio between the circumference and diameter of a circle: π.

In a little over a month, it will be Pi Day, which means the internet will be stuffed with memes about π pies and whether τ is the true constant and how π is a magical number that contains everything in the universe.

That last one relies on a conjectured property of π, that it is a normal number. A normal number is one whose endless, non-repeating string of digits is spread out perfectly evenly: in the long run, every numeral shows up as often as every other, and so does every pair, triplet, and longer block of digits. Assuming that’s true, then if you peer deep enough into the digits of π, you will eventually find your telephone number, or a bitmap of your face, or your life story written out in ASCII code.
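
If you want to poke at this yourself, here's a toy Python check of the single-digit part of the claim. It leans on the third-party mpmath library for the digits (assuming it's installed), and counting frequencies in a finite prefix is evidence-gathering, not anything like a proof:

```python
# Count how often each numeral appears in the first 5,000 digits of pi.
from collections import Counter
from mpmath import mp

mp.dps = 5010                   # a bit more precision than we need
digits = str(+mp.pi)[2:5002]    # the unary plus evaluates pi at the current
                                # precision; drop "3." and keep 5,000 digits

counts = Counter(digits)
for numeral in "0123456789":
    share = counts[numeral] / len(digits)
    print(numeral, counts[numeral], f"{share:.1%}")   # each hovers around 10%
```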

But you'll also find a lot of nonsense, and there's no way to tell the true from the false, so this is more like Borges' Library of Babel than, say, the Encyclopedia Galactica. It’s true that highly random data has a lot of information in it, but there’s nothing profound about that; that’s numerology, not number theory.

Additionally, it turns out that almost all (real term) the real numbers are normal, but it's not easy to pick out any particular number and say that it's normal. Currently, there is no proof that π is normal, although the evidence suggests that it is.

But what if there is no proof? What if it turns out to be impossible to demonstrate rigorously that π is a normal number? (You can often prove that it's impossible to prove something in math, but maybe a proof is just never found.) In math, a statement is only taken to be true if it can be proven via deductive logic. So if there is no proof that π is normal, is it normal?

Well you're probably thinking, it's either normal or not, duh. Its being normal doesn't depend on whether or not we're smart enough to prove it. The Earth was four and a half billion years old long before we were able to show, scientifically, that it was. But look what's happened here. We've asserted that π has definite properties independent of our conception of it. That is, we're saying π is real, as real as the Earth, and that it has a form beyond our crude and incomplete perceptions.

So perhaps Plato's forms are not as crazy as they sound. Now, I'm not arguing that Plato is correct and that numbers are "real." This is a lively debate in the philosophy of mathematics (a subject I'll have more to say about at the end of this semester), with the other positions being "idealist" and "anti-realist." But Plato originated (or was the best, earliest articulator of) one tradition in this philosophical debate.

Which brings me back to my annoying classmate. If instead of a philosophy course, this had been a course on the history of Ancient Greece, at no point during the lecture would a classmate have interrupted the instructor with, "But teacher, weren't the Athenians wrong to butcher and enslave whole cities?" Of course they were wrong! That is clearly not up for debate. What's interesting, however, is why the Greeks did what they did, and how their actions propagated through history. That is, I want to understand the legacy they left behind, the traditions they began.

And that's how I look at an ancient philosophy course. To me, it's not primarily about finding all the myriad logical inconsistencies in the thoughts of some old, dead white guys, but in understanding how their thinking shaped humanity for millennia to come. In some cases, their ideas are obsolete and need to be discarded, while in others they represent the seeds of debates still flourishing in philosophy now. The greatest difference I see is that philosophers today strive for precision and nuance so as to avoid falling into the same old traps. But we couldn't have gotten here, couldn't have learned that lesson, without first falling in.