Wednesday, January 28, 2015

The Dark Ages Versus the Age of Discontent

Because I am somewhat of a "non-traditional student," my class schedule this semester would not immediately lead one to believe that I am an astronomy major. My classes are:

Astr 121 - Introductory Astrophysics II - Stars and Beyond

Phys 373 - Mathematical Methods for Physics II

Phil 233 - Philosophy in Literature

Phil 245 - Political and Social Philosophy I

Hist 111 - The Medieval World

You'll note the surprising dearth of astronomy courses. There are reasons for this, but detailing said reasons would make for a damn boring blog post, so I'm going to talk about something else (hopefully less boring) instead. (Worry not--the next two semesters will be as dense with astronomy courses as neutron stars are with, uh, neutrons.)

Instead this post is about an interesting juxtaposition of beliefs I encountered in my fellow students. Both my medieval history and political philosophy instructors began class the first day by directly challenging the beliefs held by their students about a relevant subject (a surefire way not to convince the students of anything).

You can probably guess the common misconception in medieval history: the middle ages were a stagnant "dark age" where European savages meekly held onto life, all the while having any hint of progress quashed by the oppressive, aggressive ignorance of the Church.

So, that's false, of course. And I'm sure I'll learn a much more nuanced notion of what the medieval world was like during the next 13 weeks. But the idea that the middle ages were "dark" is a pretty commonly held belief, or at the very least the idea that people believe the middle ages were "dark" is a pretty commonly held belief.

My political philosophy instructor came at us from a different angle, however. He began the first lecture by presenting us with the idea that, compared to the societies in which the famous philosophers we're going to read about lived, we basically live in a utopia. Violence worldwide is lower than it's ever been at any time in history. GDP is leaps and bounds greater than it ever was in history. Yadda yadda.

This notion received a much cooler reception than the notion that the medieval period was not a dark age. I'll get to the difference between these two reactions in a moment, but the interesting point to me is that the default position toward both ideas is one of disbelief. People do not believe that the middle ages weren't hopelessly terrible, and people do not believe that now is (relatively) awesome.

At first blush, these two points of view would seem to contradict. How can we simultaneously believe that the middle ages were terrible and that now is no better by comparison? We might believe that both periods were equally terrible, but that's not the general view held by my fellow students. To make the argument for the "dark ages," many pointed to the religious oppression that used to exist, but does no longer; to the authoritarian regimes that used to rule, but do no longer; to the diseases that used to be so deadly, but are no longer. So they do not believe that each period is equally terrible.

Another possibility is that my fellow students have a nuanced position: that things used to suck really badly, but now suck only somewhat badly. But again, I don't believe this matches the professed opinions of my classmates. They were aggressively opposed to the notion that things don't suck now. They offered relatively little opposition to my history professor's arguments but jumped on everything my philosophy instructor said. Clearly, my fellow classmates feel very strongly that things aren't much better now. And that, I suspect, is the difference.

Daniel Kahneman and other psychologists have argued that when we are asked a difficult question, our brains take a shortcut by providing an answer to an easier question. We mentally change the question we are being asked to something that has a readily available answer.

So if the question we are asked is, "How good is civilization now compared to the way it used to be?", that's a relatively difficult question to answer. 7, maybe? A much easier question to answer, and one that is vaguely similar, is, "How do we feel about civilization now?" And we all have readily available opinions on the current state of things.

One reason the second question is easier to answer is that it doesn't ask us to evaluate the past. We haven't been to the ancient past; we don't know what it was really like. Unless we ourselves are historians, we're unlikely to have strong opinions about the past. And without strong opinions, we don't have easy access to "data" on what the past was like.

The other reason the second question is easier to answer is that, of course, we have "data" about it. We don't necessarily have good statistics about what society today is like (although we might, and college students taking government classes and reading their preferred websites are likely to think they do), but we do have feelings about the present. I don't want to get particularly political here, but we're all inundated with news every day telling us how terrible things are now, about racist cops, or the rape culture on college campuses, or the decaying moral fabric that holds America together, etc.

I have no desire to deny there are bad things now, that racism and sexism still exist, that our privacies are being eroded, that morally ambiguous wars are being waged, that much of the world still lives in abject poverty, or anything like that. Modern problems are real and worth dealing with, no doubt. What I'm getting at, however, is how those problems make us feel. They make us feel terrible, and we confuse that terrible feeling with what actually is.

Few of us feel terrible about the atrocities committed one hundred or one thousand years ago, however more terrible they may have been than atrocities committed now. You can argue, of course, that there's no reason to feel terrible about the past, because there's nothing we can do about it. We can change the world now, so our emotions do us some good in motivating that change. (The counter to this is something like the Holocaust Museum, which makes us feel absolutely awful on purpose so that we ensure nothing like it ever happens again.)

That's a valid argument, but it misses some nuance. Let's say that the world today is only half as bad as it was a hundred years ago, by some measure of Objective World Awesomeness (OWA). Do we think, then, that the feelings people had about the world a hundred years ago were twice as powerful as the feelings we have today? I sincerely doubt that. We feel as strongly as we are capable of feeling about whatever we judge most deserving of emotion. Our feelings are characteristically not objective, essentially by definition.

The roundabout point I'm making here is that it is no surprise that we can believe the world today sucks while simultaneously believing that the world of the middle ages sucked, even if we don't believe they sucked equally or that today sucks only slightly less by comparison. The space for this seeming contradiction in our head comes from the fact that we evaluate world sucktitude by distinctly different measures--the present with emotions, the past with factoids. Our brains dispense with this cognitive dissonance by categorizing the past and present differently.

This isn't an unfounded hypothesis, and it's not untestable. To be sure, I suspect that the vast majority of students who come out of my medieval history class will do so saying, "Actually, it wasn't a dark age at all, because blah blah blah." But I suspect that while my fellow classmates may come away from the philosophy course knowing a good deal more about Locke, Hobbes, and Marx, few will leave it saying, "Actually, now doesn't suck quite so bad, because blah blah blah."

And I think this is a problem. I think we as humans too often substitute our feelings about a subject for objective evaluations of a subject. I say this from experience. To make this blog uncomfortably personal again, this is one of the big lessons I have learned in therapy: that the way I feel about something is not necessarily indicative of the way something actually is.

For a very long time, I believed I was incapable of change. This belief came from me having experienced superficially similar feelings for the last 10 or 15 years: loneliness, despair, self-hate, etc. And if my feelings were the same, that must mean I was the same, right? Well, no. I believed I could use my feelings about myself as an accurate measure of myself, but that belief was wrong (and kept me from combating my depression for a long time).

I suspect that most people fall prey to the same kinds of erroneous beliefs. (Most people don't go through a good chunk of their life depressed, though, and I suspect the difference there is that most people's erroneous, feeling-based beliefs aren't negative and inwardly focused.) And a good deal of psychological research backs me up on this. The beliefs we hold most strongly are not the ones backed up by the most evidence, but those associated with the strongest feelings.

What's the solution? Well, we could just make sure we brainwash people to believe the right things, but I don't think that tackles the central issue. I think a short-term solution is teaching people to be more critical of their own beliefs from a very early age, teaching people not to accept blindly what they feel to be true, perhaps even teaching people to actively distrust that which they feel most strongly about. The long-term solution is to modify human nature so that we no longer make this substitution error, but I have a feeling that's crazy.

Wednesday, January 14, 2015

I think I think, therefore I might be.

StatCounter says I still get the occasional visitor. Sometimes, that visitor isn’t a robot! Anywho, that was quite a lengthy hiatus I went on there—the kind of hiatus where you’re not sure if the person is just taking a break or the person is gone forever. But here I am again, so I guess it was just a break. The last year has been kind of rough, and because of that blogging kind of fell by the wayside. I’d like to think things are picking up again, and I’d like to think blogging might be one of those things which gets picked up. So, to all my devoted sentient readers, here’s a post!

I should probably warn you beforehand that this post is going to involve some religion, a lot of philosophical stuff, some personal stories, and basically no physics. And it will probably be long. So, you know, continue at your own peril.

While I am not a big fan of labels, it would not be disingenuous to say that I fall roughly into the skeptical/science-y/non-religious camp. As a result, I have on occasion engaged in debates with those who are more or less diametrically opposed to me where it concerns the supernatural. An argument I often hear (and a common argument in the evil baby-eating fundie vs. evil baby-eating atheist brawl) is that scientists are guilty of hubris for daring to believe they can unravel the mysteries of god/the supernatural/the universe.

It is the height of arrogance, some say, to believe we can know how life or the universe began. I say life and universe here, because those are current unknowns in science. There's a pretty good theory as to how life evolved, and a pretty good theory as to what the universe looked like ~14 billion years ago, but we cannot yet say with any certainty exactly how life got started in the first place or what (if anything) was happening more than 14 billion years ago.

But as I said, these are current unknowns. In the past, it might have been the height of arrogance to presume to know how the great diversity of life came to be, or how the planets moved about the heavens, or why the Earth sometimes shook and lightning split the sky. This moving goalpost is known as the god of the gaps. Much of what was once thought to be in the domain of the divine has yielded to scientific explanation, so that now supernatural causes can only be posited in current gaps in scientific understanding (unless you don’t go in for teleological arguments at all).

Now, I’m not going to spend much time directly refuting this kind of argument. Instead, I’d like to offer an alternative viewpoint as to what such an attitude entails. Neil deGrasse Tyson gives a lecture about what he sees as the problem of intelligent design, and he spends part of this lecture giving examples of otherwise great scientists (such as Newton) who, when confronted with a problem they could not solve, called upon the god of the gaps as a solution. What happens more frequently, however, is not that we fail to find a solution to a problem, but that we fail to imagine a solution and invoke the divine instead.

I claim that this attitude is a far more damning instance of hubris than the scientist who believes he can solve a difficult problem. In essence, this attitude says that if I cannot solve a problem, then no one can, that the problem is impossible to solve. If you ever find yourself lacking clear examples of arrogance, there you go.

Now, don’t get me wrong, there are definitely arrogant scientists out there, and I have no desire to defend such arrogance (as you will see shortly). But I do believe the attitude of science (in some Platonic sense unplagued by the troubles of the real world) is not that science can unravel all problems and explain all mysteries, but that it’s worth it to try to do so.

And in the 400 years since we have institutionalized and made rigorous this can-do attitude, we seem to have made some incredible progress. We have gone from galloping horses (~45 kph) being the fastest mode of transportation to space probes hurtling out of the solar system (~60,000 kph). We've gone from infant and childhood mortality being so prevalent that average life expectancy was 30-40 years, to now, where you can reasonably expect, even at birth, to live to 70 years. Yadda yadda; science is great; you're reading this on your magic, world-connected box.

Here’s where I stop bashing religion and transition to a personal anecdote because science says convincing you of something by appealing to your emotions is more effective than appealing to your reason. Also, I’m trying to make a more general point.

In my preface above, I mentioned that the last year had been rough. Now, as some of you (and the Google robots) know, I have been battling bouts of depression for something like 15 years. For much of that time, I resisted treatment. I refused to talk about it, I conveniently forgot to refill my antidepressants, and I believed my therapists were incapable of helping me.

Why did I engage in all of these self-destructive behaviors? Because, despite having some pretty severe self-esteem issues, I was thoroughly convinced of my own genius. And because I had not managed to cure my depression with my own big brain, I came to believe that it was, in fact, impossible to cure my depression.

Sound familiar? This is basically the same hubris present in the god of the gaps argument. I don't believe this is coincidental. My stubborn refusal to believe that anything could help me and the belief that as-yet-unsolved problems lie beyond science altogether stem from a common source: the belief that human reason is a pure and perfect pinnacle of intelligence. It might not be entirely obvious that this is so, so let's explore the notion a bit.

It’s hard to find solid data on this issue, but I think it’s fair to say that most people believe in some notion of free will. There is the dualist perspective employed by many religions, which says that we have a body and a soul, that the body is bound by physical laws, but that the soul is free to make choices. There are also notions, probably more common now than they used to be, that the universe is deterministic but for the human mind. We might not necessarily have a soul, but we have some essence isolated from external factors, such that we can always choose otherwise even in limited circumstances. I will concede that most people probably don’t sit around contemplating the issue of free will (once they’ve graduated from their pot-smoking college days), but even so, they hold to the idea that people are responsible for their actions and that we can judge them based on said actions. To believe thusly (except in a purely pragmatic sense intended to keep society running) ultimately means you believe there is some person-centered force at work beyond the clockwork laws of the universe.

And that’s the key notion. There is the universe, and then there’s you. It’s hard to escape this perspective. After all, we peer out into the universe through our eyes. Everything that we perceive falls into us. And without the aid of mind-altering substances, we firmly believe in a sense of self that is distinct from the world around us. And what constitutes this sense of self, what makes it feel real, are the thoughts that go running through our heads. There is a universe of stuff out there, and there is a universe of thoughts in here.

Thinking, then, is a special and uniquely human act. Perhaps some other animals engage in it as well, we think, but they don’t do it like we do it. Historically, the capacity to reason has been thought of as one of the defining characteristics of the human animal. We believe we are capable of cleanly deducing the truth given the facts, or making the right decision given all the evidence. This is why naive economic models mostly assume rational agents, and why we generally trust that juries can work.

And this is the connection to the hubris I described above. While we are certainly not blind to the idea that emotions can influence our thinking, we believe that if we are able to control our emotions, the human brain—isolated as it is from the rest of the universe—will arrive at the correct answer given the correct data. If we apply reason, we will be correct. Reason is a binary force that is either on or off. Thus, if we use our reason but we cannot find an answer, the only possible explanation is that there is no answer.

Unfortunately, scientific research over the last half century or so has shown that humans are actually spectacularly bad at rational thinking. We can do it, yes, but only just barely. We may even be unique in our capacity to do it at all (probably not), but it is not a trait honed to perfection by evolution. For one, evolution tends not to hone things to perfection. And two, evolution hasn’t had much time to hone our reason at all.

So we can think, but our thinking is plagued by a whole host of cognitive biases that distort our thinking away from what would be purely rational. There are two things which are important to note here, though. One is that these cognitive biases are not necessarily emotional influences getting in the way of our perfect reasoning. Instead, it’s better to think of them as illusions of thought. And that’s the second point. Illusions in general don’t represent some failure of evolution to make a module (sight, sound, reason) perfect, but evolution developing a heuristic that works most of the time toward the end of ensuring survival and reproduction. Cognitive biases are not necessarily bad; they’re just ways of thinking geared toward an end other than perfect rationality.

If you look at the capacity to reason as an evolved module like any other, it becomes clear that there is no reason to expect it to function "perfectly." The rest of our modules are far from perfect, after all, because they don't have to be. Our sight, for example, does not reproduce in our mind's eye some direct analog of the world out there. We see only a tiny fraction of the electromagnetic spectrum, our perception of the colors of objects is altered by nearby objects, we each have a blind spot in our visual field (where the optic nerve exits the eye) that our brains simply fill in, etc.

Our sight is still enormously useful, both in keeping us alive and in giving us some picture of the real world, but ultimately, there are feats our eyes cannot accomplish. No matter how hard we look at an object, we will never see it in radio waves. Some illusions will always fool us. Just the same, there is no reason to believe that the evolved module of reason is perfectly capable of the task of reason. There are limits to what we can accomplish with our own thoughts, for the simple reason that thoughts exist on a biological substrate and not in some dualistic netherworld.

Possibly the most glaring fault in our vision, however, is our belief that it accurately and completely reflects the real world. And this is a common theme in human consciousness. Despite the patchy and inconsistent data our senses actually relay to the brain, despite how inaccurately our memories correspond to history, despite how biased our thinking can be, our brain is designed to convey a sense of consistency and definiteness in the world it creates for us, and we trust it.

This trust is dangerous. It means we can fool ourselves into believing problems are intractable. It means we can fool ourselves into believing we can think our way out of any problem. It means that if something works for one person, we'll believe it should work for every person. It means we can condemn people to death on the “strength” of eyewitness testimony. It means we can feel comfortable declaring people evil because we’re sure we’re capable of choosing to be good.

Some say scientists are arrogant. And some scientists are, of course. But the story of science is not about unparalleled geniuses using the hammer of their perfect intellect to crush the insignificant nails of ignorance (this is a terrible metaphor, but I laughed while writing it, so you’re stuck with it).

The story of science as I see it is of believing that it’s worth it to try to figure things out. From that stance alone we admit our own ignorance. The world might not be only what it appears to be, so let’s try to figure out what it actually is. Our brains might be fallible, so let’s try to account for those failures when we seek answers. We might be ill-equipped to solve some mystery on our own, so let's share our findings and see what others discover, too.

Science done right is the deconstruction of hubris.

Friday, November 22, 2013

“We won’t go into the details.”

I’m pretty sure I’ve talked a lot more about my math class than my physics class this semester. With the semester winding to a close, I don’t have much time life to even the score. But here’s an attempt. The reason for the relative silence on the subject of physics is, however, math-related.

As I mentioned in an earlier post, the third semester of intro physics is usually referred to as modern physics. At my community college, it’s “Waves, Optics, and Modern Physics.” The course covers a lot of disparate material. While the first half of the semester was pretty much all optics, the second half has been the modern physics component.

What does “modern physics” mean? Well, looking at the syllabus, it means a 7-week span in which we talked about relativity, quantum mechanics, atomic physics, and nuclear physics. All of these are entire fields unto themselves, but we spent no more than a week or two on each topic.

I predicted during the summer that I wouldn’t mind the abbreviated nature of the course, but that prediction turned out to be wrong. Here’s why.

The first two semesters of physics at my community college were, while not perfect by any stretch of the imagination, revelatory in comparison to the third semester. I enjoyed them a great deal because physical insight arose from mathematical foundations. With calculus, much of introductory physics becomes clear.

You can sit down and derive the equations of kinematics that govern how objects move in space. You can write integrals that tell you how charges behave next to particular surfaces. Rather than being told to plug and chug through a series of equations, you’re asked to use your knowledge of calculus to come up with ways to solve problems.

This is in stark contrast to what I remember of high school physics. There, we were given formulas plucked from textbooks and told to use them in a variety of word problems. Kinetic energy was ½mv², because science. There was no physical insight to be gained, because there was no deeper understanding of the math behind the physics.

And so it is in modern physics as well. The mantra of my physics textbook has become, “We won’t go into the details.” Where before the textbook might say, “We leave the details as an exercise for the reader,” now there is no expectation that we could possibly comprehend the details. The math is “fairly complex,” we are told, but here are some formulas we can use in carefully circumscribed problems.

It happened during the optics unit, too. Light, when acting as a wave, reflects and refracts and diffracts. Why? Well, if you use a principle with no physical basis, you can derive some of the behaviors that light exhibits. But why would you use such a principle? Because you can derive some of the behaviors that light exhibits, of course.

But it’s much worse in modern physics. The foundation of quantum mechanics is the Schrödinger equation, which is a partial differential equation that treats particles as waves. Solutions to this equation are functions called Ψ (psi). What is Ψ? Well, it’s a function that, with some inputs, produces a complex number. Complex numbers have no physical meaning, however. For example, what would it mean to be the square root of negative one meters away from someone? Exactly.

So to get something useful out of Ψ, you have to take its absolute square (multiply it by its complex conjugate). Doing so gives you the probability of finding a particle in some particular place or state. Why? Because you can't be the square root of negative one meters away from someone, that's why. The textbook draws a parallel between Ψ and the photon picture of diffraction, in which the square of something also represents a probability, but gives us no mathematical reason to believe this. Our professor didn't even try and was in fact quite flippant about the hand-waving nature of the whole operation.
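If you want to see that step in action, here's a tiny sketch of my own (using the standard infinite-square-well ground state, not anything from our textbook): the absolute square of Ψ is a probability density, so it integrates to 1 over the box.

import numpy as np

L = 1.0                                        # well width, arbitrary units
x = np.linspace(0, L, 100001)
psi = np.sqrt(2 / L) * np.sin(np.pi * x / L)   # ground-state wavefunction
prob_density = np.abs(psi)**2                  # |psi|^2 is the measurable thing

# The density integrates to 1: the particle is definitely somewhere in the box.
dx = x[1] - x[0]
print(prob_density.sum() * dx)                 # ~1.0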

If you stick a particle (like an electron) inside of a box (like an atom), quantum mechanics and the Schrödinger equation tell you that the electron can only exist at specific energy levels. How do we find those energy levels? (This is the essence of atomic physics and chemistry, by the way.) Well, it involves “solving a transcendental equation by numerical approximation.” Great, let’s get started! “We won’t go into the details,” the textbook continues. Oh, I see.
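For the curious, here's roughly what those skipped details look like. This is just my own sketch of the standard finite-square-well calculation, not the textbook's treatment: in dimensionless form, the even-parity bound states satisfy u·tan(u) = √(u0² − u²), where u0 encodes the depth and width of the well, and each root u corresponds to an allowed energy level.

import numpy as np

def even_state_residual(u, u0):
    # Even-parity bound states satisfy u*tan(u) = sqrt(u0^2 - u^2)
    return u * np.tan(u) - np.sqrt(u0**2 - u**2)

def find_energy_roots(u0, n_grid=200000):
    # Scan a dense grid for sign changes, then refine each root by bisection.
    u = np.linspace(1e-6, u0 - 1e-6, n_grid)
    vals = even_state_residual(u, u0)
    roots = []
    for i in range(n_grid - 1):
        fa, fb = vals[i], vals[i + 1]
        # Skip the fake sign flips at the singularities of tan()
        if abs(fa) > 50 or abs(fb) > 50 or fa * fb >= 0:
            continue
        a, b = u[i], u[i + 1]
        for _ in range(60):                    # plain bisection
            m = 0.5 * (a + b)
            if fa * even_state_residual(m, u0) <= 0:
                b = m
            else:
                a, fa = m, even_state_residual(m, u0)
        roots.append(0.5 * (a + b))
    return roots

# u0 = 8 is an arbitrary example; the energy of each state scales as u^2.
print([round(u, 4) for u in find_energy_roots(8.0)])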

Later, the textbook talks about quantum tunneling, the strange phenomenon by which particles on one side of a barrier can suddenly appear on the other side. How does this work? Well, it turns out the math is “fairly involved.” Oh, I see.

This kind of treatment goes on for much of the text.

Modern physics treats us as if we are high school students again. Explanations are either entirely absent or sketchy at best. Math is handed down from on high in the form of equations to be used when needed. Insight is nowhere to be found.

Unfortunately, there might not be a great solution to this frustrating conundrum. While the basics of kinematics and electromagnetism can be understood with a couple semesters of calculus, modern physics seems to require a stronger mathematical foundation. But you can’t very well tell students to get back to the physics after a couple more years of math. That’s a surefire way to lose your students’ interest.

So we’re left with a primer course, where our appetites are whetted to the extent that our rudimentary tools allow. My interest in physics has not been stimulated, however. I’m no less interested than I was before, but what’s really on my mind is the math. More than the physics, I want to know the math behind it. No, I’m not saying I want to be a mathematician now. I’m just saying that I can’t be a physicist without being a little bit a mathematician.

Thursday, November 14, 2013

Complexification


This post may seem a little out there, but that might be the point.

Last week in differential equations we learned about a process our textbook called complexification. (You can go ahead and google that, but near as I can tell what you’ll find is only vaguely related to what my textbook is talking about.) Complexification is a way to take a differential equation that looks like it’s about sines and cosines and instead make it about complex exponentials. What does that mean?

Well, I think most people know a little bit about sine and cosine functions. At the very least, I think most people know what a sine wave looks like.

Shout out to Wikipedia.
Such a wave is produced by a function that looks something like f(x) = sin(x). Sine and cosine come from relationships between triangles and circles, but they can be used to model periodic, fluctuating motion. For example, the way in which alternating current goes back and forth between positive and negative is sinusoidal.

On the other hand, exponential functions don't seem at all related. Exponential functions look something like f(x) = e^x, and their graphs have shapes such as this:

Thanks again, Wikipedia.

Exponential functions are used to model systems such as population growth or the spread of a disease. These are systems where growth starts out small, but as the quantity being measured grows larger, so too does the rate of growth.

Now, at first blush there doesn’t appear to be a lot of common ground between sine functions and exponential functions. But it turns out there is, if you throw in complex numbers. What’s a complex number? It’s a number that includes i, the imaginary unit, which is defined to be the square root of -1. You may have heard of this before, or you may have only heard that you can’t take the square root of a negative number. Well, you can: you just call it i.

So what’s the connection? The connection is Euler’s formula, which looks like this:

e^(ix) = cos(x) + i·sin(x).

Explaining why this formula is true turns out to be very complicated and a bit beyond what I can do. So just trust me on this one. (Or look it up yourself and try to figure it out.) Regardless, by complexifying, you have found a connection between exponentials and sinusoids.
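You don't have to take my word for it entirely, though; you can at least check it numerically. Here's a quick spot check of my own (any real x works):

import numpy as np
x = 0.7
print(np.exp(1j * x))              # these two lines print...
print(np.cos(x) + 1j * np.sin(x))  # ...the same complex number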

How does that help with differential equations? The answer is that complexifying your differential equation can often make it simpler to solve.

Take the following differential equation:

d²y/dt² + ky = cos(t).

This could be a model of an undamped harmonic oscillator with a sinusoidal forcing function. It’s not really important what that means, except to say you would guess (guessing happens a lot in differential equations) that the solution to this equation involves sinusoidal functions. The problem is, you don’t know if it will involve sine, cosine, or some combination of the two. You can figure it out, but it takes a lot of messy algebra.

A simpler way to do it is by complexifying. You can guess instead that the solution will involve complex exponentials, and you can justify this guess through Euler's formula. After all, there is a plain old cosine just sitting around in Euler's formula, implying that the solution to your equation could involve a term such as e^(it).
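To make that concrete, here's a little SymPy sketch of the trick (my own check, not from the textbook): guess a complex exponential for the complexified equation z'' + kz = e^(it), solve for the coefficient, and take the real part to recover a particular solution of the original equation.

import sympy as sp

t, k = sp.symbols('t k', positive=True)
A = sp.symbols('A')

# Complexified guess: z = A * e^(i t)
z = A * sp.exp(sp.I * t)
residual = sp.diff(z, t, 2) + k * z - sp.exp(sp.I * t)
A_val = sp.solve(sp.Eq(residual, 0), A)[0]          # A = 1/(k - 1)

# The real part of z is a particular solution of y'' + k*y = cos(t)
y_p = sp.simplify(sp.re((A_val * sp.exp(sp.I * t)).rewrite(sp.cos)))
print(y_p)                                          # cos(t)/(k - 1), valid for k != 1

# Verify that it satisfies the original equation
print(sp.simplify(sp.diff(y_p, t, 2) + k * y_p - sp.cos(t)))   # 0

Guessing a·cos(t) + b·sin(t) gets you the same answer; the complex exponential just does the bookkeeping for you.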

This idea of complexification got me thinking about the topic of explaining things to people. You see, I think I tend to do a bit of complexifying myself a lot of the time. Now, I don’t mean I throw complex numbers into the mix when I don’t technically have to; rather, I think I complexify by adding more than is necessary to my explanations of things. I do this instead of simplifying.

Why would I do this? After all, simplifying your explanation is going to make it easier for people to understand. Complexifying, by comparison, should make things harder to understand. But complexifying can also show connections that weren't immediately obvious beforehand. I mean, we just saw that complexifying shows a connection between exponential functions and sinusoidal functions. Another example is Euler's identity, which you get by plugging x = π into Euler's formula and rearranging. It looks like this:

e^(iπ) + 1 = 0

This is considered by some to be one of the most astounding equations in all of mathematics. It elegantly connects five of the most important numbers we've discovered. Stare at it for a while and take it in. Can that identity really be true? Can those numbers really be connected like that? Yup.

That, I think, is the benefit of complexifying: letting us see what is not immediately obvious.

It turns out last week was also Carl Sagan’s birthday. This generated some hubbub, with some praising the man and others wishing we would just stop talking about him already. Carl Sagan was admittedly before my time, but he has had an impact on me nonetheless. No, he didn’t inspire me to study science or pick up the telescope or anything like that. But I am rather fond of his pale blue dot speech, to the extent that there’s even a minor plot point about it in one of my half-finished novels.

Now, I read some rather interesting criticism of Sagan and his pale blue dot stuff on a blog I frequent. A commenter was of the opinion that Sagan always made science seem grandiose and inaccessible. That’s an interesting take, but I happen to disagree. Instead, I think we might be able to conclude that Sagan engaged in a bit of complexifying. No, he certainly didn’t make his material more difficult to understand than it had to be; he was a very gifted communicator. What he did do, however, and this is especially apparent with the pale blue dot, is make his material seem very big, very out there. You might say he added more than was necessary.

In doing so, he showed connections that were not immediately obvious. The whole point of his pale blue dot speech is that we are very small fish in a very big pond, and that this connects us to each other. The distances and differences between people are, relatively speaking, absolutely minuscule. From the outer reaches of the solar system, all of humanity is just a pixel.

But there are more connections to be made. Not only are all of us connected to each other; we're also connected to the universe itself. Because, you see, from the outer reaches of the solar system, we're just a pixel next to other pixels, and those other pixels are planets, stars, and interstellar gases. We're all stardust, as has been said.

This idea that seeing the world as a tiny speck is transformative has been called by some (or maybe just Frank White) the overview effect. Many astronauts have reported experiencing euphoria and awe as a result of this effect. But going to space is expensive, especially compared to listening to Carl Sagan.

So yeah, maybe Sagan was a bit grandiose in the way he doled out his science. But I don’t think that’s a bad thing. I just think it shows the connection between Sagan and my differential equations class.

Wednesday, November 6, 2013

For My Next Trick...

I will calculate the distance from the Earth to the Sun using nothing but the Earth’s temperature, the Sun’s temperature, the radius of the Sun, and the number 2. How will I perform such an amazing feat of mathematical manipulation? Magic (physics), of course. And as a magician (physics student), I am forbidden from revealing the secrets of my craft (except on tests and this blog).

During last night’s physics lecture, the professor discussed black-body radiation in the context of quantum mechanics. In physics, a black body is an idealized object that absorbs all electromagnetic radiation that hits it. Furthermore, if a black body exists at a constant temperature, then the radiation it emits is dependent on that temperature alone and no other characteristics.

According to classical physics, at smaller and smaller wavelengths of light, more and more radiation should be emitted from a black body. But it turns out this isn't the case, and that at smaller wavelengths, the electromagnetic intensity drops off sharply. This discrepancy, called the ultraviolet catastrophe (because UV light has a short wavelength), remained a mystery for some time, until Planck came along and fixed things by introducing his eponymous constant.


Thanks, Wikipedia.

The fix was to say that light is only emitted in discrete, quantized chunks with energy proportional to frequency. Explaining why this works is a little tricky, but the gist is that high-frequency photons each cost a big chunk of energy, and at a given temperature few of the oscillating charges have that much energy to give up, which means fewer high-energy photons get released, which means a lower intensity than predicted by classical electromagnetism. Planck didn't know most of those details, but his correction worked anyway and kind of began the quantum revolution.
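If you want to see the catastrophe and the fix side by side, here's a rough sketch of my own (not the professor's): compare the classical Rayleigh-Jeans prediction to Planck's law at a few wavelengths, using the Sun's surface temperature.

import numpy as np
from scipy.constants import h, c, k   # Planck's constant, speed of light, Boltzmann constant

T = 5778.0                                         # K, roughly the Sun's surface
wavelengths = np.array([100e-9, 500e-9, 10e-6])    # UV, visible, infrared

rayleigh_jeans = 2 * c * k * T / wavelengths**4    # classical prediction
planck = (2 * h * c**2 / wavelengths**5) / np.expm1(h * c / (wavelengths * k * T))

for lam, rj, pl in zip(wavelengths, rayleigh_jeans, planck):
    print(f"{lam * 1e9:9.0f} nm   classical / Planck = {rj / pl:.3g}")
# In the infrared the two roughly agree; at 100 nm the classical formula
# overshoots by a factor of billions. That overshoot is the ultraviolet catastrophe.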

But all of that is beside the point. If black bodies are idealized, then you may be wondering how predictions about black bodies came to be so different from the observational data. How do you observe an idealized object? It turns out that the Sun is a near-perfect real-world analog of a black body, and by studying its electromagnetic radiation scientists were able to study black-body radiation.

Anywho, my professor drew some diagrams of the Sun up on the board during this discussion and then proposed to us the following question: Can you use the equations for black-body radiation to predict the distance from the Earth to the Sun? As it turns out, the answer is yes.

You see, a consequence of Planck’s law is the Stefan-Boltzmann law, which says that the intensity of light emitted by a black body is proportional to the 4th power of the object’s temperature. That is, if you know the temperature of a black body, you know how energetic it is. How does that help us?

Well, the Sun emits a relatively static amount of light across its surface. A small fraction of that light eventually hits the Earth. What fraction of light hits the Earth is related to how far away the Earth is from the Sun. The farther away the Sun is, the less light reaches the Earth. This is pretty obvious. It's why Mercury is so hot and Pluto so cold. (But it's not why summer is hot or winter cold.) So if we know the temperature of the Sun and the temperature of the Earth, we should be able to figure out how far one is from the other.

To do so, we have to construct a ratio. That is, we have to figure out what fraction of the Sun’s energy reaches the Earth. The Sun emits a sphere of energy that expands radially outward at the speed of light. By the time this sphere reaches the Earth, it’s very big. Now, a circle with the diameter of the Earth intercepts this energy, and the rest passes us by. So the fraction of energy we get is the area of the Earth’s disc divided by the surface area of the Sun’s sphere of radiation at the point that it hits the Earth. Here’s a picture:

I made this!

So our ratio is this: P_e/P_s = A_e/A_s, where P is the power (energy per second) emitted by each body, A_e is the area of the Earth's disc, and A_s is the surface area of the sphere the Sun's radiation has expanded into by the time it reaches the Earth. One piece we're missing from this is the Earth's power. But we can get that just by approximating the Earth as a black body, too. This is less true than it is for the Sun, but it will serve our purposes nonetheless.

Okay, all we need now is the Stefan-Boltzmann law, which is I = σT⁴, where σ is a constant of proportionality that doesn't actually matter here. What matters is that I, intensity, is power/area, and we're looking for power. That means intensity times area equals power. So our ratio looks like this:

(σT_e⁴ · 4πr_e²) / (σT_s⁴ · 4πr_s²) = πr_e² / (4πd²)

This is messy, but if you look closely, you’ll notice that a lot of those terms cancel out. When they do, we’re left with:

T_e⁴ / (T_s⁴ · r_s²) = 1 / (4d²)

Finally, d is our target variable. Solving for it, we get:

d = r_s · T_s² / (2T_e²)

Those variables are the radius of the Sun, the temperatures of the Sun and the Earth, and the number 2 (not a variable). Some googling tells me that the Sun's surface temperature is 5778 K, the Earth's surface temperature is 288 K, and the Sun's radius is 696,342 km. If we plug those numbers into the above equation, out spits the answer: 1.40×10¹¹ meters. As some of you may remember, the actual mean distance from the Earth to the Sun is 1.496×10¹¹ meters, giving us an error of just 6.32%.
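If you'd like to check my arithmetic, here's the whole calculation as a few lines of Python (same rounded values I googled above):

T_sun = 5778.0        # K, surface temperature of the Sun
T_earth = 288.0       # K, average surface temperature of the Earth
r_sun = 6.96342e8     # m, radius of the Sun

d = r_sun * T_sun**2 / (2 * T_earth**2)          # d = r_s * T_s^2 / (2 * T_e^2)

actual = 1.496e11                                # m, actual mean Earth-Sun distance
print(f"estimated distance: {d:.3e} m")          # ~1.40e11 m
print(f"error:              {abs(d - actual) / actual:.2%}")   # ~6.3%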

I’d say that’s pretty damn close. Why an error of 6%? Well, we approximated the Earth as a black body, but it’s actually warmer than it would be if it were a black body. So the average surface temperature we used is too high, thus making our answer too low. (There are other sources of error, too, but that’s probably the biggest one.)

There is one caveat to all this, however, which is that the calculation depends on the radius of the Sun. If you read the link above (which I recommend), you know, however, that we calculate the radius of the Sun based on the distance from the Earth to the Sun. But you can imagine knowing the radius of the Sun (far less precisely) based solely on its observational characteristics. And in that case, we can still make the calculation.

Anywho, there’s your magic trick (physics problem) for the day. Enjoy.

Friday, November 1, 2013

National Hard Things Take Practice Month

Okay, it doesn’t have quite the same ring to it as National Novel Writing Month, but I’m saving my good words for my, well, novel writing. As some of you may know, November is NaNoWriMo, a worldwide event during which a bunch of people get together to (individually) write 50,000 words in 30 days. I’ve done it the last several years and I’m doing it this year, too. It’s hard, it’s fun, and it’s valuable.

As some of you may also know, Laura Miller, a writer for Salon, published a piece decrying NaNoWriMo. (Turns out she published that piece 3 years ago, but it's making the rounds now because NaNo is upon us. Bah, I'm still posting.) This made a lot of wrimos pretty upset, and I’ve seen some rather vitriolic criticism in response. Miller’s main point seems to be that there’s already enough crap out there and we don’t need to saturate the world with more of it. Moreover, she thinks we could all do a little more reading and a little less writing.

Well, as a NaNoWriMo participant and self-important blogger, I think I’m going to respond to Miller’s criticism. Of course, maybe that’s exactly what she wants. By writing this now, I’m not writing my NaNo novel. Dastardly plan, Laura Miller.


Now, I understand the angry response to Miller’s piece. I really do. It has a very “get off my lawn” feel to it that seems to miss the point that, for a lot of people, NaNo is just plain fun. But her two points aren’t terrible points, and I think they’re worth responding to in a civil, constructive way. So here goes.

As is obvious to anyone who’s read this blog, I quite like science. That’s what the blog is about, after all. In fact, I’ve been interested in science ever since I was a child. I read books about science, I had toy science kits, and I loved science fiction as a genre.

Yet this blog about science is not even a year old, and I'm writing this post as a freshly minted 28-year-old. Why is that? Because up until about 2 years ago, I didn't do anything with my interest in science. I took plenty of science and math classes in high school, but I mostly dithered around in them and didn't, you guessed it, practice.

It wasn’t until 2 years ago that I sat down and decided it was time to reteach myself calculus. And how did I teach myself calculus? By giving myself homework. By doing that homework. By checking my answers and redoing problems until I got them right. And now I can do calculus. Now I can do linear algebra, differential equations, and physics. I’m no expert in these subjects, but I understand them to a degree because I’ve done them. I’ve practiced, just like you practice a sport.

The analogy here should be clear. You have to practice your sport, you have to practice your math, you have to practice your writing. Where some may disagree with this analogy is the idea that writing 50,000 words worth of drivel counts as practice. The answer is that it’s practicing one skill of writing, but not all writing skills. This follows from the analogy, too. Sometimes you practice free throws; other times you practice taking integrals. Each is a specific skill within a broad field, and each takes practice.

And as any writer knows, sometimes the most difficult part of writing is staring at a blank white page and trying to find some way to put some black on it. We all have ideas. We all have stories and characters in our heads. But exorcising those thoughts onto paper is a skill wholly unto itself, apart from the skills of grammar, narrative, and prose.

So it needs practice, and NaNoWriMo is that practice. If you’re a dedicated writer, however, then it follows that NaNo should not be your only practice. You have to practice the other skills, too. You have to write during the rest of the year, and you have to pay attention to grammar, narrative, and prose. But taking one month to practice one skill hardly seems a waste.

I’ve less to say about Miller’s second point, that we should read more and write less. This is a matter of opinion, I suppose. But I do have one comment about it. America is often criticized as being a nation of consumers who voraciously eat up every product put before them. We are asked only to choose between different brand names and to give no more thought to our decisions than which product to purchase.

Writing is a break from that. Rather than being a lazy, passive consumer of other people’s ideas, writing forces you to formulate and express your own ideas. Writing can be a tool of discovery, a way to expand the thinking space we all inhabit. Rather than selecting an imperfect match from a limited set of options, writing lets you make a choice that is precisely what you want it to be. You get to declare where you stand, or that you’re not taking a stand at all. You get to have a voice beyond simply punching a hole in a ballot.

You shouldn’t write instead of read, but you should write (or find some other way to creatively express your identity).

Wednesday, October 23, 2013

Tip o' the Lance

My weekend activities have provided me with ample blogging fodder of late. This past weekend I went to a local Renaissance Festival and, among other things, watched some real life jousting. That is, actual people got on actual horses and actually rammed lances into each other, sometimes with spectacular results.


I didn't take this picture. It's from the Renn Fest website. I just think my blog needs more visuals.
At one point a lance tip broke off on someone’s armor and went flying about 50 feet into the air. A friend wondered aloud what kind of force it would take to achieve that result, and here I am to do the math. This involves some physics from last year as well as much more complicated physics that I can’t do. You see, if a horse glided along the ground without intrinsic motive power, and were spherical, and of uniform density… but alas, horses are not cows.

Anywho, as to the flying lance tip, the physics is pretty easy. Now, I can’t say what force was acting on the lance. The difficulty is that, from a physics standpoint, the impact between the lance and the armor imparted momentum into the lance tip. Newton’s second law (in differential form) tells us that force is equal to the change in momentum over time. Thus, in order to calculate the force of the impact, I have to know how long the impact took. I could say it was a split second or an instant, but I’m looking for a little more precision than that.

Instead, however, I can tell you how much energy the lance tip had. It takes a certain amount of kinetic energy to fly 50 feet into the air. We're gonna say the lance tip weighs 1 kg (probably an overestimate) and that it climbed 15 meters before falling down. In that case, our formula is E = mgh, where g is 9.8 m/s² of gravitational acceleration, and we're at about 150 joules of energy. This is roughly the muzzle energy of a .22 rifle bullet. It also means the lance tip had an initial speed of about 17 m/s. I'm ignoring here, because I don't have enough data, that the lance tip spun through the air—adding rotational energy to the mix—and that there was a sharp crack from the lance breaking—adding energy from sound.
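In numbers (the mass and height are my guesses, same as above):

import math

m = 1.0      # kg, guessed mass of the lance tip (probably an overestimate)
h = 15.0     # m, guessed peak height (about 50 feet)
g = 9.8      # m/s^2, gravitational acceleration

energy = m * g * h                    # E = mgh
launch_speed = math.sqrt(2 * g * h)   # from (1/2)mv^2 = mgh

print(f"energy at launch: {energy:.0f} J")          # ~150 J
print(f"launch speed:     {launch_speed:.1f} m/s")  # ~17 m/s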

But this doesn’t conclude our analysis. For starters, where did the 150 joules of energy come from? And is that all the energy of the impact? Let’s answer the second question first. Another pretty spectacular result of the jousting we witnessed was that one rider was unhorsed. We can model being unhorsed as moving at a certain speed and then having your speed brought to 0. Some googling tells me that a good estimate for the galloping speed of a horse is 10 m/s.

So the question is, how much work does it take to unhorse a knight? With armor, a knight probably weighs 100 kg. Traveling at 10 m/s, our kinetic energy formula tells us this knight possesses 5000 joules of energy, which means the impact must deliver 5000 joules of energy to stop the knight. This means there’s certainly enough energy to send a lance tip flying, and it also means that not all of the energy goes into the lance tip.

We can apply the same kinetic energy formula to our two horses, which each weigh about 1000 kg, and see that there's something like 100 kJ of energy between the two. Not all of that goes into the impact, however, because both horses keep going. This is where the horses not being idealized points hurts the analysis. Were that so, we might be able to tell how much energy is "absorbed" by the armor and lance.

There is one final piece of data we can look at. I estimate the list was 50 meters long. The knights met at the middle and, if they timed things properly, reached their maximum speeds at that point. Let's also say that horses are mathematically simple and accelerate at a constant rate. One of the 4 basic kinematic equations tells us that v_f² = v_i² + 2ad. So this is 100 = 0 + 2·a·25, and solving for a gets us an acceleration of 2 m/s². Newton's second law, F = ma, means each horse was applying 2000 newtons of force to accelerate at that rate. 2000 N across 25 meters is 50,000 joules of work. It takes 5 seconds to accelerate to 10 m/s at 2 m/s², so 50,000 joules / 5 seconds = 10,000 watts of power. What's 10,000 watts? Well, let's convert that to a more recognizable unit of measure. 10 kW comes out to about 13 horsepower, which is about 13 times as much power as a horse is supposed to have. Methinks James Watt has some explaining to do.
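Here's the whole tilt tallied up in one place (same guessed numbers as above: a 100 kg knight, 1000 kg horses, a 10 m/s gallop, and 25 meters to get up to speed):

m_knight = 100.0     # kg, knight plus armor (a guess)
m_horse = 1000.0     # kg per horse (a guess)
v = 10.0             # m/s, galloping speed
run_up = 25.0        # m, half the length of the list

knight_ke = 0.5 * m_knight * v**2          # kinetic energy of the knight
horses_ke = 2 * (0.5 * m_horse * v**2)     # both horses combined

a = v**2 / (2 * run_up)                    # from v_f^2 = v_i^2 + 2ad, starting from rest
force = m_horse * a                        # F = ma, per horse
work = force * run_up                      # work done getting one horse up to speed
power = work / (v / a)                     # average power = work / time to reach v

print(f"energy to unhorse the knight:  {knight_ke:.0f} J")    # 5000 J
print(f"kinetic energy of both horses: {horses_ke:.0f} J")    # 100000 J
print(f"acceleration: {a:.1f} m/s^2, force per horse: {force:.0f} N")
print(f"average power per horse: {power:.0f} W = {power / 745.7:.1f} horsepower")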

One other thought occurred to me during this analysis. Some googling tells me there are roughly 60 million horses in the world. If a horse can pump out 10 kW of power, then we have roughly 600 GW of power available from horses alone. Wikipedia says our average power consumption is 15 TW, which means the world's horses running on treadmills could provide 4% of the energy requirements of the modern world. This isn't strictly speaking true, because there will be losses due to entropy (and you can't run a horse nonstop), but it's in the right ballpark. Moral of the story? Don't let anyone tell you that energy is scarce. The problem isn't that there isn't enough energy in the world; it's that we don't have the industry and infrastructure necessary to use all the energy at our disposal.