Sunday, April 12, 2015

If the Sequence Fits...

Okay, we're doing an old-fashioned blog post today, wherein I recount one of my recently completed labs. The lab portion of this semester's classes comes from my astrophysics course. This might seem a little weird, because we don't all have telescopes at our lab benches.

Hello, Edwin Hubble.
Instead, we're given data that we must analyze via Matlab. Interestingly, this is probably a bit closer to what real astronomers do, because astronomy today is less peering through a telescope in the wee hours of the night and more writing code to make sense of numbers sent to you from an observatory in New Mexico or Chile or space.

Hello, Hubble Space Telescope.
I've decided to blog this particular lab because I think it has the most interesting plots, which might be just the kind of statement required to turn away what few readers I have left. Specifically, we're looking at Hertzsprung-Russell diagrams, which are a very peculiar kind of graph astronomers use to confuse laypeople. Here's what they look like according to wiki:

Thanks, Wikipedia.
So the x-axis represents temperature, and higher temperatures are to the left. On the y-axis we have luminosity, which increases as you go up. What makes these diagrams strange is that it's not immediately clear what they tell you. Are you looking at different classes of stars? The same star at different times in its life? Stars at different distances (and thus ages) spread out all over the place? The answer is yes.

If you simply point your telescope at the sky, find a bunch of stars, and plot them on an H-R diagram, the only thing you will know with any certainty is that they're not all the same star. To get useful information from this diagram, you have to be specific about what you're looking at.

For this lab, we were looking at open star clusters, which are groups of stars that all formed from the same giant molecular cloud (real term). If that's true, then you can assume that all of the stars in the cluster are roughly the same age and roughly the same distance away from you. If you plot a cluster on an H-R diagram, a particular feature suddenly pops out: that big diagonal line called the main sequence.

From astrophysical theories, we know that stars on the main sequence are those that are burning hydrogen in their cores. This is what our star is doing; it's what most stars that we look at are doing. Eventually, as a star gets older, it burns through all of the available hydrogen in its core and moves off of the main sequence (top right-ish) and becomes a giant of some sort, and then much later stops fusing at all and becomes a stellar remnant like a white dwarf (bottom left-ish).

What the existence of something like the main sequence means is that if a star is burning hydrogen in its core, and it's at some particular temperature T, then it will also be at some particular luminosity L. One demands the other. There is a pretty concrete relationship--for a main sequence star--between its mass, temperature, luminosity, and lifetime. Bigger stars burn brighter and hotter, go through their fuel more quickly, and thus leave the main sequence sooner.

But as I said earlier, if you just point your telescope at a bunch of stars, it's hard to know what you're looking at. In fact, the only information you get from a telescope about a star is how bright it is, and brightness is a result of a star's intrinsic luminosity as well as its distance from you. The farther away a star is, the dimmer it is. Because of that, you don't always know if you are looking at a bright star far away or a dim star close to you. So how are we able to figure out a star's luminosity and temperature?

By restricting how we look at the star. Another difference between the popular image of astronomers and the reality is that the telescopes astronomers use today don't just indiscriminately collect all the light that hits them. In fact, some telescopes don't collect visible light at all. Some, like the Arecibo Observatory in Puerto Rico or the Very Large Array in New Mexico (of Contact fame), for example, collect radio waves.

From APOD.
These telescopes look very different from visible light telescopes because light at different wavelengths has different properties that determine how that light moves. This necessitates different equipment. You know this just from looking at a prism. We all know a prism splits white light into a rainbow, but the reason it does this is because different wavelengths of light (different colors) bend at different angles depending on the medium they're moving through.

If this has an effect just between different colors of visible light, imagine the effect between visible light and radio waves and x-rays, for example. But at the visible light level, this discrepancy between how light behaves at different wavelengths means that you can collect more accurate information about an object if you look at it through filters that only pass specific ranges of wavelengths. This way you can calibrate your machinery just for those wavelengths and not worry about anything else.

There are a lot of filters astronomers use to look at stars. For this lab, we looked at stars through B and V filters, which eye-rollingly stand for blue and visual filters. It's enough to know that the B filter looks at bluer (shorter wavelength) light and the V filter looks at redder (longer wavelength) light. If a star is brighter in the B filter than the V filter, this corresponds to a hotter star. That's because stars roughly follow Wien's law, which says that a blackbody's peak wavelength--the wavelength at which it emits the most light--is inversely proportional to its temperature. So the more light at shorter wavelengths, the higher the temperature.
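
In symbols, with b the Wien displacement constant:

$$\lambda_{\mathrm{peak}} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m \cdot K}$$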

This observation lets us construct a particular H-R diagram called a Color-Magnitude diagram. For boring and annoying reasons (blame Hipparchus), astronomers measure the brightness of objects with the magnitude system, where smaller values represent brighter objects. For our CMD, the y-axis is the magnitude of light coming through the V filter (so higher on the graph is brighter, which means lower magnitudes). The x-axis, which is supposed to be temperature, is instead the quantity B-V.

Recall, if there's more blue light than red light, the star is hotter. More blue light means a lower B magnitude than V magnitude, which means hot stars will have a low B-V. Since temperature is plotted from hot to cold on the H-R diagram, this means we go from low B-V to high B-V on the x-axis.
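
If you want to put rough numbers on that relationship, there are published empirical fits from B-V color to temperature. Here's a minimal Matlab sketch using one of them, Ballesteros's blackbody-based formula; the example colors are made up, and this is an illustration rather than our lab's actual calibration:

    % Rough temperature (Kelvin) from B-V color (Ballesteros 2012).
    BV = [-0.2; 0.0; 0.65; 1.4];   % hypothetical example colors
    T = 4600 * (1 ./ (0.92*BV + 1.7) + 1 ./ (0.92*BV + 0.62));
    disp([BV T])                   % smaller B-V (bluer) pairs with higher T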

So now we are plotting the B and V filter magnitudes of stars in the cluster M41, which we're assuming are all roughly the same age and distance from us. Here's the plot:

[Plot: color-magnitude diagram of M41]
Hey, that looks kind of similar to wiki's H-R diagram! There's a clearly visible main sequence starting in the top left and moving down and to the right, and then there's a weird branch in the middle. Those are giants of some variety or another that have turned off of the main sequence. We can predict that this is a relatively young star cluster because it doesn't seem to have much in the way of stellar remnants (stars below the main sequence). What else can this CMD tell us?
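
In case you're wondering what the Matlab behind a plot like that looks like, here's a minimal sketch. It assumes vectors B and V holding the two filter magnitudes for each star; the variable names are mine, not the lab's:

    % Color-magnitude diagram: color on x, apparent V magnitude on y.
    scatter(B - V, V, 8, 'filled')
    set(gca, 'YDir', 'reverse')    % smaller magnitude = brighter, so flip y
    xlabel('B - V')
    ylabel('V magnitude')
    title('CMD of M41')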

For the purposes of the lab, we engaged in a process known as main sequence fitting that lets us figure out the age of and distance to a cluster.

As I mentioned earlier, brighter, hotter stars burn faster than dimmer, cooler stars; they leave the main sequence more quickly. So if all of the stars in a cluster form at roughly the same time, this means young clusters will have a pretty even spread of hot and cool stars, but old clusters will mostly have cool stars, because the hot stars will have stopped burning long ago. On an H-R diagram, this means that the main sequence of a cluster will slowly shrink over time, beginning with the stars in the top left. So where the main sequence ends, called the turn-off point, corresponds to the youngest age a cluster could be. If it were any younger, then you would see hotter, shorter-lived stars farther up the main sequence.

This can be taken a step further. Through stellar evolution models (produced by computer simulations), you can plot the absolute magnitudes of various types of stars at a particular age. These models are called isochrones, because they show you a line of stars at a constant age. If you can match the features of your isochrone (such as the turn-off point) to the features of your real cluster, you can date the cluster. In our lab, we had isochrones ranging from 100 million years old to 11 billion years old.

So let's date M41. First, let's compare it to the 11 billion year old isochrone (in red).

[Plot: M41's CMD with the 11-billion-year isochrone overplotted in red]
As you can see, this clearly doesn't fit. It's way farther to the right and way higher up than M41. But let's think about something for a moment. Being way farther to the right means it only has cold stars, which are old stars. We predicted above, because of the lack of stellar remnants, that M41 was probably young, so this makes sense.

But why is the isochrone so much brighter than M41? Here we can be fooled. We are seeing the cluster as bright as our telescopes see it, but the isochrone is a computer model which plots stars as bright as they would be if they were 10 parsecs (about 32.6 light-years) away. Something seen at 10 pc is said to be seen at "absolute magnitude" for uninteresting historical reasons. If we were to adjust the magnitude of the isochrone, moving it up and down the y-axis, then we would also be adjusting the distance at which we saw it--the farther down the y-axis, the higher the magnitude, the dimmer the isochrone, the farther away it is.

We won't bother with that here, because this isochrone is obviously too old for our cluster. With some fiddling, we can find an isochrone that does fit. Specifically, the 300 million year isochrone.

[Plot: M41's CMD with the 300-million-year isochrone overplotted]
This looks to have the right shape but is way too bright. So we know that our cluster is farther away than 10 pc. If we adjust the magnitude of our isochrone, we can get a better fit.

[Plot: M41's CMD with the magnitude-shifted 300-million-year isochrone]
This isn't perfect, but the very nice alignment with the main sequence is encouraging. To get this match, we adjusted the magnitude of the isochrone by 9.2, which doesn't mean anything to anybody not steeped in dreadfully tedious astrometrics.

People steeped in dreadfully tedious astrometrics.
But here's the gist. Magnitude is a logarithmic scale, which in this case means that increasing the magnitude of an object by 5 decreases the brightness by a factor of 100. Because light gets dimmer with the square of your distance from it, an object 100 times dimmer is 10 times farther away. Doing the math, this means a 9.2 magnitude difference works out to the cluster being 69 times farther away than the isochrone, or 690 parsecs from us.
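
For the curious, the arithmetic above is just the standard distance modulus relation, with m the apparent magnitude and M the absolute magnitude:

$$m - M = 5\log_{10}\left(\frac{d}{10\ \mathrm{pc}}\right) \quad\Longrightarrow\quad d = 10\ \mathrm{pc} \times 10^{9.2/5} \approx 10\ \mathrm{pc} \times 69.2 \approx 690\ \mathrm{pc}$$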

Looking up M41 on wiki (reliable?), it gives a distance of 710 parsecs and an age of 190 to 240 million years. Not bad.

We then did the same thing for cluster M67. With many more stellar remnants (bottom-left), it looks like M67 is probably older.

[Plot: color-magnitude diagram of M67]
After another round of main sequence fitting, this is our closest match.

[Plot: M67's CMD with the best-fit isochrone overplotted]
An isochrone 3.5 billion years old with a distance modulus of 9.7, corresponding to 870 parsecs. Wiki says M67 is 3.2-5 billion years old and 800-900 parsecs away. Again, not bad. In fact, a better fit.

So that's main sequence fitting, one rung in the cosmic distance ladder (real term) astronomers use to show us how insignificant we are (by demonstrating the vast scale of the universe).

Saturday, March 21, 2015

Euler Unmasked

We're going from straight philosophy in my last post to straight math in this one. But if you're an ancient Greek thinker type person, math and philosophy are the same thing, anyway.

So about a year and a half ago, I made a post that touched briefly on the relationship between trig functions and exponential functions as a way of justifying my tendency to make things more complex than they need to be. I mentioned there that I didn't have a firm enough mathematical grasp to explain how these two mathy bits are related. Well, the topic of Euler's identity came up a little while ago in my writing group, so I decided to do some research and figure out just how it is that trig functions and exponential functions come together.

For those of you that don't click links, Euler's identity says:

$$e^{i\pi} + 1 = 0$$

This is a pretty remarkable and frankly incredible equation, but it's true. It manages to link probably the three most famous mathematical constants in a very simple way. The identity arises from Euler's formula, which says:

$$e^{ix} = \cos(x) + i\sin(x)$$

If you replace x with π, then i sin(π) = 0 and cos(π) = -1, so with a little rearranging you can get Euler's identity. But this raises the question of why it should be true that exponential functions and trig functions are connected by the imaginary unit.

First, a quick primer for those who need it. In the common parlance, something that is "exponentially" better is "really super" better. This kind of talk tends to aggravate the mathematically aware, however. Really, exponential functions are ones where adding a constant increment to the input multiplies the output by a constant factor.

So if you hear something like, "Kyrgyzstan's GDP has doubled every year for the last ten years," then that's exponential growth. The factor is 2, and the increment is yearly. But this also applies to, say, the interest rate on your savings account, which as we all know is not exactly "really super" better than anything except possibly 0. There, your balance is getting multiplied by something like 1.0025 every year, which is every bit as exponential as Kyrgyzstan's doubling GDP (totally made up).

The point is, however, that exponential functions (with a factor greater than 1) demonstrate monotonic growth: if you increase the x value, the y value increases, too.

Trig functions, on the other hand, are the realm of waves, which go up and down and up and down. They are all about rhythmic or periodic behavior. But as their name suggests, the trigonometric functions are actually based on the angles formed by triangles. Trig functions are really expressions of the Pythagorean formula, A^2 + B^2 = C^2. The relationship between this formula and periodic motion is that for some constant value of C, increasing A will decrease B, and vice versa.

So it's hard to see how exponential functions and trig functions could be related. As I hinted up above, the answer is through i.

i, the imaginary unit, is what the square root of negative one is defined to be. Imaginary numbers kind of get a bad rap, partly because of their name. They seem like something mathematicians just made up that couldn't possibly be real. The funny thing is people had the same opinion about negative numbers for a very long time. After all, how can you possibly have -3 apples? On this whole controversy, the great mathematician Carl Friedrich Gauss had this to say:
That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill-adapted notation. If, for instance, +1, -1, √-1 had been called direct, inverse, and lateral units, instead of positive, negative, and imaginary (or even impossible), such an obscurity would have been out of the question.
While his preferred notation might seem somewhat opaque, it does lend itself very well to a geometric interpretation of numbers. If you look at a Cartesian plot, you can think of Gauss's direct, inverse, and lateral numbers this way. 

[Figure: a plane with the direct (+1), inverse (-1), and lateral (√-1) unit directions marked]
The direct unit (+1) moves you one to the right on the graph. The inverse unit (-1) moves you one to the left. And the lateral unit (√-1) moves you up one. Rather than being on the number line we're used to, imaginary numbers can be thought of as being at right angles to it.

This idea lets you plot numbers that are a combination of "real" and "imaginary." So if you have the complex number 3 + 2i, that's just 3 units to the right and 2 units up.

[Plot: the complex number 3 + 2i plotted on the plane]
As you see, plotting numbers this way means you can draw right triangles that are related to those numbers. This is the first way that we can connect imaginary numbers to the trig functions. Getting from imaginary numbers to exponential functions will take a little more work, though.

If i is the square root of -1, we can play around with exponentiation to find an interesting pattern. i^2 = (√-1)^2, which by definition equals -1. i^3 = (√-1)^3, or (√-1)*(√-1)^2, or i*(-1), which just comes out to -i. i^4 = i^2 * i^2, or -1 * -1, which equals 1. Multiply that by i, and you of course have i again. So through exponentiation, we have discovered something of a pattern.

i^1 = i
i^2 = -1
i^3 = -i
i^4 = 1
i^5 = i

The exponents of i loop back in on themselves. You might even say they exhibit periodic behavior, like the trig functions.
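
(If you want to watch the cycle happen, Matlab will happily oblige; 1i is its notation for the imaginary unit:)

    % Powers of i repeat with period 4 (up to tiny floating-point error).
    for n = 1:5
        disp(1i^n)   % i, -1, -i, 1, i
    end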

Our next step is probably the toughest bit. Bear with me. So, if you recall from my foray into Fourier, many functions can be expressed as an infinite series of sines and cosines that eventually converge on a desired function. These infinite series turn out to be very useful to mathematicians, because not all patterns can be expressed as "elementary" functions, but only as infinite series of some other type of function. One type of infinite series is the power series, which looks like this:

$$f(x) = \sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \cdots$$

To get different functions, just plug in different values for the coefficients a_n. The way you figure out which coefficients correspond to the function you want is basically by assuming your function can fit into some power series and then just playing around for a while until you find a pattern that fits. Let me demonstrate.

One of the defining features of the exponential function, e^x, is that it is its own derivative. This means that its rate of change is equal to its value. So the derivative of e^x is also e^x, and so on.

One of the first tools you learn in calculus is that the derivative of a power function like x^4 is 4x^3. You multiply by the exponent, and then lower the exponent by one. If the exponent is already 0, then your derivative is 0. So if you take the derivative of our above model power series, you get:

$$f'(x) = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + 5a_5 x^4 + \cdots$$

And if you take the derivative of that, you get:

$$f''(x) = 2a_2 + 6a_3 x + 12a_4 x^2 + 20a_5 x^3 + \cdots$$

And if you take the derivative of that, you get:

$$f'''(x) = 6a_3 + 24a_4 x + 60a_5 x^2 + \cdots$$

And one more time, because there's a pattern I want you to see:

$$f''''(x) = 24a_4 + 120a_5 x + \cdots$$

Now remember, all of these series are equal to the function e^x, because e^x is its own derivative. The missing ingredients are the values of a_n. If we evaluate e^x at x = 0, we have e^0, and anything to the 0th power is equal to 1. In the above series, when x is 0, everything except the leading term is also 0. So we have:

1 = a_0 = a_1 = 2a_2 = 6a_3 = 24a_4

and so on. So with a little bit of algebra, you can figure out the value of any a_n. It's just 1 divided by the factor preceding the coefficient. But there's a pattern here. 24 = 4*3*2*1. 6 = 3*2*1. 2 = 2*1. Each coefficient is 1 over the product of its index and every positive integer below it--in other words, a_n = 1/n!. That product is known as a factorial in mathematics and looks like this:

5! = 5*4*3*2*1 = 120

With that information in hand, we know what the power series of the exponential function is:


$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$$


I've gone through this process once so that you don't think I'm pulling this stuff out of a hat, but you can do the same thing to find the power series of a lot of different functions, including the trig functions. For example, the power series of sin(x) is:


$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$


And the power series of cos(x) is:


$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$$


Weirdly, the sine and cosine power series look kind of similar to the exponential function's, but with terms missing and some negative signs thrown in. This curious fact turns out to be very important for connecting exponential and trig functions. Let's remember that the key to that connection is i.

Let's see what happens if we try to find the power series of e^(ix) rather than e^x. To do that, we just replace all instances of x with ix in our series above. That gets us:


$$e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \cdots$$


Hey, that means we're finding powers of i. But we already did that up above. That follows a pattern, so we can just fill in from that pattern and get:


$$e^{ix} = 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \cdots$$


Now, just for the heck of it, let's separate our series into terms without i and terms with i. So we have:


$$e^{ix} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right)$$


Look familiar? That's the power series for cosine plus i times the power series for sine. In other words...

$$e^{ix} = \cos(x) + i\sin(x)$$

Just as Euler told us.
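
If all that series shuffling feels too slick, you can check it numerically. Here's a quick Matlab sanity check, summing the first 21 terms of the e^(ix) series and comparing against cos(x) + i sin(x); the test value of x is arbitrary:

    x = 1.234;
    n = 0:20;
    series_sum = sum((1i*x).^n ./ factorial(n));
    direct = cos(x) + 1i*sin(x);
    disp(abs(series_sum - direct))   % ~1e-16, i.e., equal to machine precision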

All of this may seem like some kind of tedious mathematical trick. After all, how do we know that the power series representation of a function behaves identically to the function itself in all instances? The truth is, it doesn't, and that's one of the things you have to be careful of when finding series expansions. It does happen to work in this case, though.

But there are ways in which this proof can help motivate understanding. One way to think of the idea is that the introduction of i into the exponential function breaks the function down into four interacting parts: one increasing in the direction of 1, another increasing in the direction of -1, and two others increasing in the direction of i and -i. Different values of x contribute more to one direction than another, and the whole thing repeats with a period of 2πi.

To see if this picture holds true, let's take another look at the powers of i. We saw that powers of i cycle from i to -1 to -i to 1 and then back to i again. But we were only looking at integer powers of i. What happens if we replace the integer with an unknown variable x? That is, how do we evaluate i^x?

A neat tool that can sometimes work in mathematics is to perform some operation on an expression and then also perform the inverse of that operation. Doing so doesn't change the expression, but it does let us look at it in a different light. So how about we take the natural log of i^x and then exponentiate the expression. That gets us:

$$i^x = e^{\ln(i^x)}$$

The laws of logarithms mean we can move that x to outside the log, giving us:

$$i^x = e^{x\ln(i)}$$

We know how to evaluate e^x, but it's not immediately clear how to evaluate ln(i). Here it's useful to remember what ln means. The natural log of some number is the power to which you must raise e in order to get that number. So if you have, say, ln(e^2), then our answer is 2, because e to the power of 2 obviously equals e^2. So let's look at it this way: e to what power equals i?

Now we bring in Euler's formula again.

e^(ix) = i when cos(x) = 0 and i sin(x) = i

This is true for x = π/2, because cos(π/2) = 0 and sin(π/2) = 1.

So then ln(i) = iπ/2, which means that i^x = e^(iπx/2) = cos(πx/2) + i sin(πx/2). With that conversion, we can evaluate i to any power at all, not just integer powers. But to reaffirm that this isn't some trick, let's go ahead and see what evaluating it at integer powers means.


$$i^1 = e^{i\pi/2} = \cos(\pi/2) + i\sin(\pi/2) = i$$
$$i^2 = e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1$$
$$i^3 = e^{3i\pi/2} = \cos(3\pi/2) + i\sin(3\pi/2) = -i$$
$$i^4 = e^{2i\pi} = \cos(2\pi) + i\sin(2\pi) = 1$$


This is the exact same pattern we saw above, but this time through the lens of Euler's formula rather than the logic of manipulating √-1. For non-integer values of x, you get complex numbers that, when treated as vectors on the complex plane, are all a distance of 1 from the origin, creating a circle of radius 1. Through purely algebraic means, this connects back up with the geometrical interpretation of imaginary numbers suggested by Gauss.
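
That unit-circle claim is easy to verify in Matlab using the conversion i^x = e^(iπx/2); this snippet is just an illustration of mine, not anything from the lab:

    x = 0:0.1:4;                   % includes plenty of non-integer exponents
    z = exp(1i*pi*x/2);            % i^x for each exponent
    disp(max(abs(abs(z) - 1)))     % ~0: every point has magnitude 1
    plot(real(z), imag(z), 'o')    % the points trace out the unit circle
    axis equal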

Okay, I'm done now. I hope this sheds some light on the interconnectedness of math, which can be demonstrated by taking the rules you're familiar with and applying them to unfamiliar situations. When people speak of the beauty of math, this is it. In the real world, we often find depth and meaning through metaphors that connect disparate ideas. That's what art and literature are all about. Math does the same thing, but with numbers, letters, and funny symbols.

(On the other hand, I may have written this post just to play around with LaTeX.)

Friday, March 13, 2015

On Dumbledorean Realism

I wrote a paper this week for my literature in philosophy course discussing the dream argument. Because it's been a little while since my last post, I think I'll reproduce the paper here (with a few changes) just for the heck of it. I procrastinated, though, which means I wasn't quite able to make my point as well as I had intended.

The gist of my argument is that there is no way to define a concept of a "real world" that resembles the world we inhabit (and are comfortable calling the real world) while simultaneously excluding the possibility of "unreal worlds." This leaves us with two possible conclusions: (1) if we do actually inhabit an "unreal" world, then unreal worlds are what reality actually is; or (2) we inhabit an unreal world and real worlds are nothing at all like the type of world we live in.

When talking about the world we seem to live in, I lean toward option 1 because I think it allows us to do some work ontologically. That is to say, I think we can feel justified in calling real many things that might not seem to be real depending on your point of view (subatomic particles, ideas, time, etc.). When talking about my truly fundamental beliefs, however, I subscribe to a system that you might say is a combination of options 1 and 2. But that's a whole 'nother bag of beans (worms? shrimp? cats?--a quick googling doesn't settle this). Anyway, without further ado, here's my damn essay. Oh, also, spoiler alert for the final Harry Potter. But come on, I haven't even read the book and I know what happens.

Near the end of the final book in J. K. Rowling’s Harry Potter series, Harry Potter and the Deathly Hallows, Harry has a seemingly impossible conversation with his mentor Albus Dumbledore. The seeming impossibility of this conversation is predicated on both characters apparently being dead at the time. As the conversation draws to a close and Harry realizes that he might not actually be dead, he asks Dumbledore, “Is this real? Or has this been happening inside my head?” The ever clever Dumbledore answers, “Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?”
This brief exchange alludes to a problem that philosophers have wrestled with at least since Descartes and to a plot device employed in many works of fiction, from Borges’ short story The Circular Ruins on through to contemporary films such as The Matrix and Inception. The problem is this: what is the difference between the real world and one only inside our head, or one that is illusory or fictitious? To get to the heart of the matter, the question is often posed thusly: how do you know that you are not dreaming or being dreamt? If we could answer this question succinctly, then we would have a clear conception of what the real world is and whether or not we are in it.
I think it might be useful, however, to tackle this question from the opposite direction. So the question might instead be posed: how do you know that you are dreaming? That is to say, if we assume that you are dreaming, what could happen in the dream world that would allow you to correctly conclude that you are, in fact, dreaming? There is an easy but unsatisfactory answer that immediately comes to mind—you could wake up. Unfortunately, all this tells you is that you were dreaming; it gives you no information about what’s happening to you in the moment.
In fact, waking up doesn’t even tell you that you’re not dreaming, because it is not entirely uncommon to have a “dream within a dream” à la Inception. That phrase may be something of a misnomer, though, for what it describes seems no different than moving from one dream to another, an experience with which many of us are also familiar. It is more accurate to say, then, that dreaming can be followed by the apparent experience of waking up, regardless of whether or not we actually do wake up.
Rather than focusing on waking up, it might be useful to examine elements of dreams that strike us as particularly dream-like. But if we’re dispensing with waking up, we can generalize dreaming to include other types of unreal experiences, such as being simulated, fictional, dreamt, or imagined. The common thread that binds these experiences is an apparent disconnect between our subjective awareness and what the real world truly is. It may seem something of a leap to lump in these other concepts, however, because all of us have had the subjective experience of dreaming but few of us would claim to have ever been a fictional character. In comparing these disparate types of unreality, then, we must consider not what it feels like to be that way but what elements are common to our conception of unreal worlds.
I posit that there are four features we might say are characteristic of various forms of unreality. These are abrupt changes, rule violations, missing information, and absurd scenarios. To get an idea of what I mean by these terms, a few examples might be necessary.
We’ve already seen examples of abrupt changes just a few paragraphs up. If you move from one dream to another, then the steady flow of reality has been altered, continuity broken. You may have been dreaming of playing in the World Series and then suddenly shifted to a dream of your wedding day. More generally, abrupt changes abound in our unreal creations. In chapter 6 the main character may decide to take a trip across the country, and in chapter 7 the main character may arrive without the intervening journey having been written by the author.
Rule violations would seem to be the most obvious feature of unreality. Natural laws apparently govern what we are comfortable calling the real world, so an unreal world should not feel bound to obey said laws. Stories taking place in a fantasy or science fiction setting are often rife with events that could not happen according to the laws as we know them. Dreams very often involve impossible happenings, such as reunions with long-dead relations or the ability to fly by flapping your arms. The only limit to what may happen in an unreal world is our imagination, and I can imagine a being possessing a far greater imagination than I have.
Our next unreal attribute is a little harder to pin down. Missing information is the fact that unreal worlds are often insufficiently detailed. An author may write a mundane, temporally continuous story where nothing out of the ordinary happens, but it is very unlikely that the author will describe, unless motivated to do so by story concerns, how that character’s internal organs function, or what’s happening on the other side of the world. This might not seem troubling; after all, I am not constantly aware of everything happening inside my body. But if a fictional character can have a subjective experience produced by the work of fiction that character inhabits, does that character have internal organs not written about? Worse still, if a fictional character is in a room described as merely “plain” or “having four walls,” how rich are the perceptions of that character regarding the room? This is missing information.
Finally, unreal worlds are very often absurd. What constitutes absurdity can certainly be a matter of opinion, especially because I am distinguishing this from scenarios that explicitly contravene physical laws. So for our purposes, absurd scenarios are ones that are prohibited by no natural laws but that we are confident would never happen in reality due to their implausibility. I may dream that I am trapped in an elevator playing Monopoly with all of my ex-girlfriends; this is a deeply unlikely scenario, but no law ever conceived of by Newton says it cannot happen. Absurdist fiction follows similar lines. Look to any TV sitcom such as Seinfeld for examples of situations that may not be physically impossible, but certainly aren’t likely.
With the features of unreality defined, are we now equipped to correctly conclude, if we’re dreaming, that we are? Unfortunately, we are not. If these four elements are common to unreality, then I can identify three possible scenarios we associate with the real world that could explain these elements.
The first is this: in what we are comfortable calling the real world, our scope is limited. Humans are finite, non-omniscient beings. We gather up our experiences of the world through our senses and derive much more, but not everything, from our capacity to reason and imagine. I mentioned earlier that the impossibility of unreal worlds can be thought of as a product of our seemingly unlimited imagination. And it may be true that our imagination is infinite. But even if it is, infinity is not everything. For example, it can be shown that there is an infinite quantity of rational numbers between 0 and 1 (1/2, 1/3, 1/4 … 1/10,327,452, etc.), and yet none of those numbers is the number 2 (or any other number greater than 1, of which there are an infinite number). So even granting an unlimited imagination, a human’s experience of the world is not all of the world.
Thus we are very often apt to encounter events we have failed to anticipate, events which may seem to violate the laws of the universe or be absurd. Consider the first Native Americans to witness European colonists sailing in giant wooden ships, riding horses, and firing guns. No experience had by a Native American up to that point could have prepared them for such an encounter, and yet it happened and was real. Or consider what it might have been like if an asteroid comparable to the one that killed the dinosaurs had struck the Earth during the course of human history but before the advent of telescopes. The world would have changed abruptly, and the change brought about would have been absurd and seemingly in violation of the natural laws taken for granted. The real world is certainly not a place that can suddenly be engulfed in flames, tidal waves, and blackened skies, we would have thought. But we would have been wrong.
From this we can see that our expectation of what is absurd or impossible is a consequence of the limited scope through which we view the world. It is highly dependent on what we have experienced or imagined so far.
The second scenario in which the defining qualities of the unreal world become insufficient is one in which our senses deceive us. All of us are aware that we can be fooled by optical illusions or that we can hallucinate. We think of such instances as being exceptional, but increasingly research in neuroscience points to our being fooled as the norm. This fact can account for abrupt changes and missing information, to say nothing of hallucinations in which absurd or impossible events occur. A real example of an abrupt change in the world is that which occurs during a bout of dreamless sleep. It is night outside, and then suddenly it is light and eight hours have passed. We excuse the continuity break only because it happens every day. A further illustration is highway hypnosis, in which we can be in one place at one time and then another place at another time with no conscious awareness of what occurred in between.
Missing information manifests in our shoddy attention to the world around us. Cognitive scientists have great fun demonstrating our inattentional blindness by having us watch videos in which we can miss wardrobe changes, people swapping, or gorillas. All of this demonstrates that we can completely fail to be aware of the real world out there and yet have no sense that we do not inhabit a richly detailed world.
This conception, however, is predicated on there being a real world which we can somehow know despite what our senses tell us. Much of this view arises out of modern science, which has allowed us to build up a representation of the world that is free of illusions and hallucinations but also only marginally connected to what we observe empirically. So while we may see color and shape and contrast, what we know from physics tells us that light is just a wavelength of electromagnetic radiation governed by Maxwell’s equations.
But this modern notion is ultimately borne out of experiments performed and reason applied to the observed results of those experiments. In other words, observation has taught us that observation is flawed. But our observations of the real world and our observations about our observations are flawed in the same way: we do not connect directly to the world but build up an image that is filtered through our senses and constructed by our brain. More abstractly, there is a real world, and there is our experience of that world; they are not the same thing. Here it would be wise to remember Morpheus from The Matrix, who tells Neo, “If you're talking about what you can feel, what you can smell, what you can taste and see, then 'real' is simply electrical signals interpreted by your brain.”
Finally, all manner of unreal occurrences can be accounted for if we live in a world governed by supernatural entities. This is the famous evil demon present in Descartes’ Meditations. But it is also a world governed by any kind of god whatsoever. If we live in a world in which miracles can occur, then we live in a world in which the laws of physics can be flouted, abrupt changes can occur, and absurd events can transpire. Rather than evidence of being dreaming or fictitious, miracles would be evidence in favor of a particular supernatural entity.
Moreover, if something exists that is supernatural, the implication is that two kinds of world exist: the natural and the supernatural. Superficially, miracles connote a world that very much seems to resemble an unreal world. If we are dreaming, dreamt, fictitious, imagined, or simulated, then there is some person or entity which is responsible for and has created the unreal world of which we are a part. We could call such an entity a god.
Some might object here by arguing that this is not what fictional universes are generally like. If an author writes a fantasy novel, there may be gods in that novel, but the author is not usually one of them. And yet it is not inconceivable that such a story could be written. It would be no trouble at all for me to write a story about characters in a world created by the god Ori Vandewalle, who sets forth such and such laws and demands such and such prayers. In a slightly less vain direction, science fiction author Greg Egan has written a trilogy of books, beginning with The Clockwork Rocket, that takes place in an alternate universe with laws of physics different from our own. If we are positing the reality of fictional characters, he has created a new universe subordinate to and different from our own.
So then we have failed to identify criteria sufficient for determining that we are dreaming. But this failure is not a result of dreaming being too slippery a phenomenon to get a handle on; rather, the conclusion is that the type of awareness that comes from existing in an unreal world is indiscernible from the type of awareness that comes from existing in a real world. That is to say, there is no difference between real and unreal. An “unreal world” is one in which a creator in the “real world” imposes an incomplete, incongruent, potentially impossible image on the inhabitants of the unreal world, an image which may not be empirically similar to the real world. Our real world, on the other hand, is one in which we construct an image of the world from the information that falls into us, and the image we form may be incomplete, incongruent, potentially impossible, and ultimately controlled by a supernatural entity.
We cannot know if we are awake because there is no difference between being awake and dreaming. Or rather, if we are forever dreaming, or being dreamt, or fictional or simulated or imagined, then that’s what it is to be real. We might call this Dumbledorean Realism. Yes, it may all be in our heads, but that doesn’t mean it isn’t real. To say otherwise, to say that being a fictional character is not what it is to be real, is to say that a true real world is one in which unreal elements cannot impose themselves—a world that could not have been made by a creator, where subjective experiences map directly onto the world perfectly, and where all inhabitants are omniscient and could only fail to anticipate that which could not happen anyway.

Sunday, March 1, 2015

Fun with Fourier

Here's the moment you've all been waiting for, folks, when I get off my philosophical soapbox and return to regaling you with exciting tales of studying math and physics. Oh yeah!

Because it's been a while since I've done one of these explain-what-I-just-learned-about posts, I'm gonna cover a lot of (too much) ground here. This explainer of mine is going to run through Fourier analysis (learned in my math methods course), quantum degeneracy pressure (learned in my thermo class from last semester), and the fate of stars (learned in Astro 121). Whew. So let's get started.

If you've ever seen an orchestra in concert, you know that before the orchestra begins playing, the conductor has the musicians tune their instruments. One person will play a note, and the rest will adjust their instruments to match that note. Listening to this process, a thought may have occurred to you: if all those instruments are playing the same note, why do they each sound different?

This is a complicated question, but the relatively simple answer is that a musical note, along with being described by a frequency (pitch) and an amplitude (loudness), can also be described by its quality or timbre. But what timbre represents can get us into some meaty and far-reaching math.

Say an instrument of some sort plays a Concert A. That means it produces a sound wave at 440 Hz, which just means some process that repeats 440 times per second. And a sound wave is just a repeated change in air pressure. With no other distracting information, we could graph such a phenomenon like this:

Fun with Excel.
But there are a couple of problems with this graph, some physical and some mathematical. Let's talk about the physical problems first. Sound is a wave that travels through a medium: air. Air is known for being something of a pushover; you walk right through it all day long as if it weren't even there. But if you've ever encountered a stiff breeze, you know that air is, in fact, there.

Even if the wind isn't blowing, however, air molecules are still going to resist your attempts to push them along. You will have to accelerate them, and you will have to keep pushing the air as each molecule bumps into the next one, transfers its momentum, and loses some energy along the way. The end result is that while your musical instrument may produce some momentary impulse exactly 440 times per second (unlikely), the air's density and viscosity are going to smear out those pressure changes into something more wave-like:

Thanks, Wikipedia.
Let's get into sound's wave properties a little more. Waves operate under the principle of superposition, which says that you can find the amplitude of any wave phenomenon (loudness for sound, brightness for light, etc.) at any point in space by adding up the amplitudes of all the relevant waves at that point in space. This is why the acoustics of a concert hall matter. If the crest of one wave meets the trough of another wave, then your waves cancel out and you're left with a dead spot. Alternatively, if two crests meet, they combine to be louder than either wave individually. This will become important in a bit, so keep it in mind.

The mathematical objection to the above graph goes like this. If I look at a limited portion of the graph, how do I know what the frequency of the wave is?

Not so useful.
The answer is that I don't know. In fact, the smaller a segment of time I look at, the less I can know about the definite frequency of the wave, which means the more possible frequencies the wave could have.

That right there is an interesting way of phrasing things: the more possible frequencies the wave could have. Why, that almost makes it sound as if the wave could have multiple frequencies. Does that even make sense, though? It does, for the reason we talked about above: the principle of superposition. When two waves meet in one place, they combine into one wave. This happens even if the waves have different frequencies.

The discontinuous impulse above, then, could just be many waves on top of each other, with many different frequencies combining in such a way as to cancel out almost everywhere except at precise points. Does this rescue our perfect Concert A? Not quite.

The next question that springs to mind is, where are all these different frequencies coming from? And the answer is that a musical instrument does not produce a note at a single frequency of 440 Hz but many tones at frequencies (harmonics) related to the fundamental of 440 Hz. There will be tones at 2 × 440 Hz, 3 × 440 Hz, 4 × 440 Hz, and so on, all at different amplitudes depending on the properties of the instrument. The combination of these many harmonics into a single sound is the main component of the timbre, or quality, of a note.

All these different sound waves add together, shifting a wave away from a perfect sinusoid and toward something with a sharp peak. But to get that sharp peak, you need a lot of waves at a lot of different frequencies and very high amplitudes. A musical instrument is only going to provide strong amplitudes at specific overtones of the fundamental, so you're very unlikely to get the original graph up above.
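
To make that concrete, here's a little Matlab sketch that builds a 440 Hz note by superposing a few harmonics. The amplitudes are invented; a real instrument supplies its own mix:

    fs = 44100;                    % sample rate (Hz)
    t = 0:1/fs:0.5;                % half a second of signal
    amps = [1.0 0.5 0.3 0.2];      % hypothetical harmonic amplitudes
    note = zeros(size(t));
    for n = 1:numel(amps)
        % the n-th harmonic sits at n*440 Hz
        note = note + amps(n) * sin(2*pi*440*n*t);
    end
    plot(t(1:500), note(1:500))    % visibly not a pure sinusoid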

Mathematically, the process of decomposing a single wave into its constituent waves is known as Fourier analysis. In fact, you can represent any periodic signal--or even any "well-behaved" function at all (and some not so well-behaved ones)--as a series of sinusoids of varying frequency and amplitude. You can even perform what's known as a Fourier transform which produces a power spectrum, a graph of the strength of each frequency present in a signal.
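
Continuing the sketch above, the power spectrum is just the squared magnitude of the FFT. For the synthesized note, it shows sharp spikes at 440, 880, 1320, and 1760 Hz and next to nothing anywhere else:

    L = numel(note);
    P = abs(fft(note)).^2;         % power in each frequency bin
    f = (0:L-1) * fs / L;          % frequency axis (Hz)
    half = 1:floor(L/2);           % keep the non-redundant half
    plot(f(half), P(half))
    xlabel('Frequency (Hz)')
    ylabel('Power')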

The perfect sine wave, which has one well-defined frequency, will look like a spike when you take its Fourier transform, the power spectrum. On the other hand, the sharp impulse, which is made up of many different frequencies, will have a Fourier transform that is spread out. It is impossible to have a signal that is a spike both in time and in frequency. There's a minimum level of uncertainty across the two representations.
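
You can watch that tradeoff happen numerically, too. Chop the same note down to a tiny time window and its once-sharp spikes smear out:

    % Long window -> sharp spikes; short window -> smeared spectrum.
    for len = [22050 220]          % 0.5 s versus 5 ms of signal
        seg = note(1:len);
        Pseg = abs(fft(seg)).^2;
        fseg = (0:len-1) * fs / len;
        figure
        plot(fseg(1:floor(len/2)), Pseg(1:floor(len/2)))
        title(sprintf('%d samples', len))
    end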

Uncertainty, you say? Yes, like Heisenberg's principle. Heisenberg's uncertainty principle can be looked at as arising from the wave nature of all matter. A wave cannot have an absolutely precise location in space while also having an absolutely precise wavelength (which is related to frequency). This comes directly out of the observation made up above: the smaller a slice of time you look at, the less information there is about a wave's frequency, which means the more possible frequencies a wave can have.

A century's worth of experiment has revealed that matter is, in fact, composed of waves. Just as sound waves can interfere with each other to produce acoustic dead spots, electrons can interfere with each other, too. While there are very small and precise experiments such as the double slit that bear this out, there is a rather stunning example that exists on a cosmic scale, too.

So, another interesting fact about electrons is that they obey the Pauli exclusion principle, which says that no two electrons can occupy the same state. Why this is true and what exactly it means is complicated and beyond my current knowledge level, but fundamentally it means that as you compress matter to a denser and denser state, each electron present has fewer and fewer allowed states. This means the uncertainty in the position of each electron goes down, which means the uncertainty in its frequency goes way up. An electron's frequency is tied to its momentum, so the more you compress an electron, the faster it will move.

For particularly dense matter, like the kind you might find in a white dwarf star, this momentum creates pressure which prevents the star from collapsing. However, there is a limit to this pressure. An electron cannot travel faster than the speed of light, which means that as a star gets denser and denser, the increase in electron degeneracy pressure slows down.

In a normal star, the denser it gets, the hotter it gets, and the hotter it gets, the more the star pushes back against gravity, which lets it expand and cool off again. But degeneracy pressure doesn't come from temperature; it comes from the quantum nature of matter. So as the white dwarf gets denser, it just gets hotter, eventually leading to a runaway fusion process that annihilates the star in a supernova--a spectacular explosion that can outshine a galaxy and leaves no remnant behind at all.

The limit imposed by the speed of light leads to a maximum possible mass for a white dwarf, about 1.4 solar masses, known as the Chandrasekhar limit. A white dwarf cannot exist with a mass any greater than that, and sure enough, no white dwarfs with a greater mass have ever been found. But what's more, because (almost) all white dwarf supernovae happen at 1.4 solar masses, they all look pretty much identical. In fact, the characteristic explosion of a white dwarf supernova is so reliable that it gives astronomers a standard candle by which to measure distances across the universe. And this reliability is a direct consequence of the wavelike nature of matter.

So there you go: from music to cosmology, by way of Fourier analysis. By the way, if you want to combine music and cosmology, check out this guy's site. Without getting into hairy mathematics, he talks about the power spectrum (Fourier transform) of the cosmic microwave background, and how in a very real sense this can be thought of as the sound of the early universe. It's fun stuff.

Wednesday, January 28, 2015

The Dark Ages Versus the Age of Discontent

Because I am somewhat of a "non-traditional student," my class schedule this semester would not immediately lead one to believe that I am an astronomy major. My classes are:

Astr 121 - Introductory Astrophysics II - Stars and Beyond

Phys 373 - Mathematical Methods for Physics II

Phil 233 - Philosophy in Literature

Phil 245 - Political and Social Philosophy I

Hist 111 - The Medieval World

You'll note the surprising dearth of astronomy courses. There are reasons for this, but detailing said reasons would make for a damn boring blog post, so I'm going to talk about something else (hopefully less boring) instead. (Worry not--the next two semesters will be as dense with astronomy courses as neutron stars are with, uh, neutrons.)

Instead this post is about an interesting juxtaposition of beliefs I encountered in my fellow students. Both my medieval history and political philosophy instructors began class the first day by directly challenging the beliefs held by their students about a relevant subject (a surefire way not to convince the students of anything).

You can probably guess the common misconception in medieval history: the middle ages were a stagnant "dark age" where European savages meekly held onto life, all the while having any hint of progress quashed by the oppressive, aggressive ignorance of the Church.

So, that's false, of course. And I'm sure I'll learn a much more nuanced notion of what the medieval world was like during the next 13 weeks. But the idea that the middle ages were "dark" is a pretty commonly held belief, or at the very least the idea that people believe the middle ages were "dark" is a pretty commonly held belief.

My political philosophy instructor came at us from a different angle, however. He began the first lecture by presenting us with the idea that, compared to the societies in which the famous philosophers we're going to read about lived, we basically live in a utopia. Violence worldwide is lower than it's ever been at any time in history. GDP is leaps and bounds greater than it ever was in history. Yadda yadda.

This notion received a much cooler reception than the notion that the medieval period was not a dark age. I'll get to the difference between these two reactions in a moment, but the interesting point to me is that the default position to both ideas is one of disbelief. People do not believe the middle ages weren't hopelessly terrible; people do not believe now is (relatively) awesome.

At first blush, these two points of view would seem to contradict. How can we simultaneously believe that the middle ages were terrible but that now is not terrible in comparison? We might believe that both periods were equally terrible, but that's not the general view held by my fellow students. To make the argument for the "dark ages," many pointed to the religious oppression that used to exist, but does no longer; to the authoritarian regimes that used to rule, but do no longer; to the diseases that used to be so deadly, but are no longer. So they do not believe that each period is equally terrible.

Another possibility is that my fellow students have a nuanced position: that things used to suck really badly, but now suck only somewhat badly. But again, I don't believe this matches the professed opinions of my classmates. They were aggressively opposed to the notion that things don't suck now. They offered relatively little opposition to my history professor's arguments but jumped on everything my philosophy instructor said. Clearly, my fellow classmates feel very strongly that things aren't much better now. And that, I suspect, is the difference.

Daniel Kahneman and other psychologists have argued that when we are asked a difficult question, our brains take a shortcut by providing an answer to an easier question. We mentally change the question we are being asked to something that has a readily available answer.

So if the question we are asked is, "How good is civilization now compared to the way it used to be?", that's a relatively difficult question to answer. 7, maybe? A much easier question to answer, and one that is vaguely similar, is, "How do we feel about civilization now?" And we all have readily available opinions on the current state of things.

One of the reasons why the second question is easier to answer is because it doesn't ask us to evaluate the past. We haven't been to the ancient past; we don't know what it was really like. Unless we ourselves are historians, we're unlikely to have strong opinions about the past. And without strong opinions, we don't have easy access to "data" on what the past was like.

The other reason why the second question is easier to answer is because, of course, we have "data" about it. We don't necessarily have good statistics about what society today is like (although we might, and college students taking government classes and reading their preferred websites are likely to think they do), but we do have feelings about the present. I don't want to get particularly political here, but we're all inundated with news every day telling us how terrible things are now, about racist cops, or the rape culture on college campuses, or the decaying moral fabric that holds America together, etc.

I have no desire to deny there are bad things now, that racism and sexism still exist, that our privacies are being eroded, that morally ambiguous wars are being waged, that much of the world still lives in abject poverty, or anything like that. Modern problems are real and worth dealing with, no doubt. What I'm getting at, however, is how those problems make us feel. They make us feel terrible, and we confuse that terrible feeling with what actually is.

Few of us feel terrible about the atrocities committed one hundred or one thousand years ago, however more terrible they may have been than atrocities committed now. You can argue, of course, that there's no reason to feel terrible about the past, because there's nothing we can do about it. We can change the world now, so our emotions do us some good in motivating that change. (The counter to this is something like the Holocaust Museum, which makes us feel absolutely awful on purpose so that we ensure nothing like it ever happens again.)

That's a valid argument, but it misses some nuance. Let's say that the world today is only half as bad as it was a hundred years ago, by some measure of Objective World Awesomeness (OWA). Do we think, then, that the feelings people had about the world a hundred years ago were twice as powerful as the feelings we have today? I sincerely doubt that. We feel to the maximum extent that we are capable about whatever we experience that we feel is deserving of the most emotion. Our feelings are characteristically not objective, essentially by definition.

The roundabout point I'm making here is that it is no surprise that we can believe the world today sucks while simultaneously believing that the world of the middle ages sucked, even if we don't believe they sucked equally or that today sucks only slightly less by comparison. The space for this seeming contradiction in our head comes from the fact that we evaluate world sucktitude by distinctly different measures--the present with emotions, the past with factoids. Our brains dispense with this cognitive dissonance by categorizing the past and present differently.

This isn't an unfounded hypothesis, and it's not untestable. To be sure, I suspect that the vast majority of students who come out of my medieval history class will do so saying, "Actually, it wasn't a dark age at all, because blah blah blah." But I suspect that while my fellow classmates may come away from the philosophy course knowing a good deal more about Locke, Hobbes, and Marx, few will leave it saying, "Actually, now doesn't suck quite so bad, because blah blah blah."

And I think this is a problem. I think we as humans too often substitute our feelings about a subject for objective evaluations of a subject. I say this from experience. To make this blog uncomfortably personal again, this is one of the big lessons I have learned in therapy: that the way I feel about something is not necessarily indicative of the way something actually is.

For a very long time, I believed I was incapable of change. This belief came from my having experienced superficially similar feelings for the last 10 or 15 years: loneliness, despair, self-hate, etc. And if my feelings were the same, that must mean I was the same, right? Well, no. I believed I could use my feelings about myself as an accurate measure of myself, but that belief was wrong (and it kept me from combating my depression for a long time).

I suspect that most people fall prey to the same kinds of erroneous beliefs. (Most people don't go through a good chunk of their life depressed, though, and I suspect the difference there is that most people's erroneous, feeling-based beliefs aren't negative and inwardly focused.) And a good deal of psychological research backs me up on this. The beliefs we hold most strongly are not the ones backed up by the most evidence, but those associated with the strongest feelings.

What's the solution? Well, we could just make sure we brainwash people to believe the right things, but I don't think that tackles the central issue. I think a short-term solution is teaching people to be more critical of their own beliefs from a very early age, teaching people not to accept blindly what they feel to be true, perhaps even teaching people to actively distrust that which they feel most strongly about. The long-term solution is to modify human nature so that we no longer make this substitution error, but I have a feeling that's crazy.

Wednesday, January 14, 2015

I think I think, therefore I might be.

StatCounter says I still get the occasional visitor. Sometimes, that visitor isn’t a robot! Anywho, that was quite a lengthy hiatus I went on there—the kind of hiatus where you’re not sure if the person is just taking a break or gone forever. But here I am again, so I guess it was just a break. The last year has been kind of rough, and because of that, blogging fell by the wayside. I’d like to think things are picking up again, and I’d like to think blogging might be one of those things that get picked up. So, to all my devoted sentient readers, here’s a post!

I should probably warn you beforehand that this post is going to involve some religion, a lot of philosophical stuff, some personal stories, and basically no physics. And it will probably be long. So, you know, continue at your own peril.

While I am not a big fan of labels, it would not be disingenuous to say that I fall roughly into the skeptical/science-y/non-religious camp. As a result, I have on occasion engaged in debates with those who are more or less diametrically opposed to me where the supernatural is concerned. An argument I often hear (and a common argument in the evil baby-eating fundie vs. evil baby-eating atheist brawl) is that scientists are guilty of hubris for daring to believe they can unravel the mysteries of god/the supernatural/the universe.

It is the height of arrogance, some say, for us to believe we can know how life or the universe began. I say life and universe here because those are current unknowns in science. There’s a pretty good theory as to how life evolved, and a pretty good theory as to what the universe looked like ~14 billion years ago, but we cannot yet say with any certainty exactly how life got started in the first place or what (if anything) was happening more than 14 billion years ago.

But as I said, these are current unknowns. In the past, it might have been the height of arrogance to presume to know how the great diversity of life came to be, or how the planets moved about the heavens, or why the Earth sometimes shook and lightning split the sky. This moving goalpost is known as the god of the gaps. Much of what was once thought to be in the domain of the divine has yielded to scientific explanation, so that now supernatural causes can only be posited in current gaps in scientific understanding (unless you don’t go in for teleological arguments at all).

Now, I’m not going to spend much time directly refuting this kind of argument. Instead, I’d like to offer an alternative viewpoint as to what such an attitude entails. Neil deGrasse Tyson gives a lecture about what he sees as the problem of intelligent design, and he spends part of this lecture giving examples of otherwise great scientists (such as Newton) who, when confronted with a problem they could not solve, called upon the god of the gaps as a solution. What happens more frequently, however, is not that we fail to find a solution to a problem, but that we fail to imagine a solution and invoke the divine instead.

I claim that this attitude is a far more damning instance of hubris than the scientist who believes he can solve a difficult problem. In essence, this attitude says that if I cannot solve a problem, then no one can, that the problem is impossible to solve. If you ever find yourself lacking clear examples of arrogance, there you go.

Now, don’t get me wrong, there are definitely arrogant scientists out there, and I have no desire to defend such arrogance (as you will see shortly). But I do believe the attitude of science (in some Platonic sense unplagued by the troubles of the real world) is not that science can unravel all problems and explain all mysteries, but that it’s worth it to try to do so.

And in the 400 years since we institutionalized and made rigorous this can-do attitude, we seem to have made some incredible progress. We have gone from galloping horses (~45 kph) being the fastest mode of transportation to space probes hurtling out of the solar system (~60,000 kph). We’ve gone from infant and childhood mortality being so prevalent that average life expectancy was 30-40 years, to now, when you can reasonably expect, even at birth, to live to 70 years. Yadda yadda; science is great; you’re reading this on your magic, world-connected box.

Here’s where I stop bashing religion and transition to a personal anecdote because science says convincing you of something by appealing to your emotions is more effective than appealing to your reason. Also, I’m trying to make a more general point.

In my preface above, I mentioned that the last year had been rough. Now, as some of you (and the Google robots) know, I have been battling bouts of depression for something like 15 years. For much of that time, I resisted treatment. I refused to talk about it, I conveniently forgot to refill my antidepressants, and I believed my therapists were incapable of helping me.

Why did I engage in all of these self-destructive behaviors? Because, despite having some pretty severe self-esteem issues, I was thoroughly convinced of my own genius. And because I had not managed to cure my depression with my own big brain, I came to believe that it was, in fact, impossible to cure my depression.

Sound familiar? This is basically the same hubris present in the god of the gaps argument. I don’t believe this is coincidental. My stubborn refusal to believe that anything could help me and the belief that as-yet-unsolved problems lie outside science entirely both stem from a common conviction: that human reason is a pure and perfect pinnacle of intelligence. It might not be entirely obvious that this is so, so let’s explore the notion a bit.

It’s hard to find solid data on this issue, but I think it’s fair to say that most people believe in some notion of free will. There is the dualist perspective employed by many religions, which says that we have a body and a soul, that the body is bound by physical laws, but that the soul is free to make choices. There are also notions, probably more common now than they used to be, that the universe is deterministic but for the human mind. We might not necessarily have a soul, but we have some essence isolated from external factors, such that we can always choose otherwise, even in limited circumstances. I will concede that most people probably don’t sit around contemplating the issue of free will (once they’ve graduated from their pot-smoking college days), but even so, they hold to the idea that people are responsible for their actions and that we can judge them based on those actions. To believe this (except in a purely pragmatic sense intended to keep society running) ultimately means you believe there is some person-centered force at work beyond the clockwork laws of the universe.

And that’s the key notion. There is the universe, and then there’s you. It’s hard to escape this perspective. After all, we peer out into the universe through our eyes. Everything that we perceive falls into us. And without the aid of mind-altering substances, we firmly believe in a sense of self that is distinct from the world around us. And what constitutes this sense of self, what makes it feel real, are the thoughts that go running through our heads. There is a universe of stuff out there, and there is a universe of thoughts in here.

Thinking, then, is a special and uniquely human act. Perhaps some other animals engage in it as well, we think, but they don’t do it like we do it. Historically, the capacity to reason has been thought of as one of the defining characteristics of the human animal. We believe we are capable of cleanly deducing the truth given the facts, or making the right decision given all the evidence. This is why naive economic models mostly assume rational agents, and why we generally trust that juries can work.

And this is the connection to the hubris I described above. While we are certainly not blind to the idea that emotions can influence our thinking, we believe that if we are able to control our emotions, the human brain—isolated as it is from the rest of the universe—will arrive at the correct answer given the correct data. If we apply reason, we will be correct. Reason is a binary force that is either on or off. Thus, if we use our reason but we cannot find an answer, the only possible explanation is that there is no answer.

Unfortunately, scientific research over the last half century or so has shown that humans are actually spectacularly bad at rational thinking. We can do it, yes, but only just barely. We may even be unique in our capacity to do it at all (probably not), but it is not a trait honed to perfection by evolution. For one, evolution tends not to hone things to perfection. And two, evolution hasn’t had much time to hone our reason at all.

So we can think, but our thinking is plagued by a whole host of cognitive biases that distort it away from what would be purely rational. Two things are important to note here, though. First, these cognitive biases are not necessarily emotional influences getting in the way of our perfect reasoning; it’s better to think of them as illusions of thought. Second, illusions in general don’t represent some failure of evolution to make a module (sight, sound, reason) perfect, but rather evolution developing a heuristic that works most of the time toward the end of ensuring survival and reproduction. Cognitive biases are not necessarily bad; they’re just ways of thinking geared toward an end other than perfect rationality.

If you look at the capacity to reason as an evolved module like any other, it becomes clear that there is no reason to expect it to function “perfectly.” The rest of our modules are far from perfect, after all, because they don’t have to be. Our sight, for example, does not reproduce in our mind’s eye some direct analog of the world out there. We see only a tiny fraction of the electromagnetic spectrum, our perception of the colors of objects is altered by nearby objects, each of our eyes has a blind spot that our brains simply fill in, etc.

Our sight is still enormously useful, both in keeping us alive and in giving us some picture of the real world, but ultimately, there are feats our eyes cannot accomplish. No matter how hard we look at an object, we will never see it in radio waves. Some illusions will always fool us. Just the same, there is no reason to believe that the evolved module of reason is perfectly capable of the task of reason. There are limits to what we can accomplish with our own thoughts, for the simple reason that thoughts exist on a biological substrate and not in some dualistic netherworld.

Possibly the most glaring fault in our vision, however, is our belief that it accurately and completely reflects the real world. And this is a common theme in human consciousness. Despite the patchy and inconsistent data our senses actually relay to the brain, despite how inaccurately our memories correspond to history, despite how biased our thinking can be, our brain is designed to convey a sense of consistency and definiteness in the world it creates for us, and we trust it.

This trust is dangerous. It means we can fool ourselves into believing problems are intractable. It means we can fool ourselves into believing we can think our way out of any problem. It means that if something works for one person, we'll believe it should work for every person. It means we can condemn people to death on the “strength” of eyewitness testimony. It means we can feel comfortable declaring people evil because we’re sure we’re capable of choosing to be good.

Some say scientists are arrogant. And some scientists are, of course. But the story of science is not about unparalleled geniuses using the hammer of their perfect intellect to crush the insignificant nails of ignorance (this is a terrible metaphor, but I laughed while writing it, so you’re stuck with it).

The story of science as I see it is of believing that it’s worth it to try to figure things out. From that stance alone we admit our own ignorance. The world might not be only what it appears to be, so let’s try to figure out what it actually is. Our brains might be fallible, so let’s try to account for those failures when we seek answers. We might be ill-equipped to solve some mystery on our own, so let's share our findings and see what others discover, too.

Science done right is the deconstruction of hubris.