Wednesday, April 24, 2013

Subtle is the Blurb…

This post might be a little disjointed. I’m weaving together several different thoughts I’ve been having lately, but I think it’ll come together.

So I’ve been reading an Einstein biography, the classic ‘Subtle is the Lord…’ by Abraham Pais. While a little dated, it’s considered one of the more definitive accounts of Einstein’s scientific development.

Anywho, as befits such a renowned book, there are blurbs from other Nobel laureates on the back flap. When I picked the book up in the library, one in particular popped out at me:

I found it fascinating to read about the development of Einstein's ideas, particularly those connected with relativity. The various steps are clearly presented and the influence of other physicists and their reactions are described and coordinated, to provide a very readable narrative. – P.A.M. Dirac

I couldn’t help but laugh after reading this blurb, despite being in a library. The praise here is not exactly effusive. In fact, the whole thing is rather stilted. But more than that, Dirac mainly compliments the book’s clear steps and coordinated narrative, rather than any insight it provides into Einstein the man.

The reason I found the blurb so amusing is that Dirac is often mentioned by those who like to diagnose dead people with psychiatric disorders. Specifically, there are many who believe Dirac was autistic to some degree or another. Anecdotes to that effect abound.

I don’t really go in for that sort of post-mortem diagnosis, but surely Dirac could have sounded a little less like an emotionless robot in that blurb. (That said, I have an autistic friend who is one of the finest writers I know. It’s not the quality of the text I’m harping on but the focus on logic.)

Where this is leading is the notion that geniuses (of which Dirac most certainly was one) and scientists generally are viewed as being out-of-touch geeks who pay more attention to beakers than breasts. We only have to look at the stereotypes presented on The Big Bang Theory to see how common such a view is. But of course, there are equally extreme counterexamples. Feynman certainly got around, and Schrödinger was famously holed up in the Alps with a mistress when he worked out the wave formulation of QM.

The truth is, as it always is, somewhere in the middle. Scientists are just people, as Chad Orzel frequently points out. The most significant difference between scientists and non-scientists is that scientists do science for a living. That’s really all there is to it. There are certainly correlations that come with choosing science for a career, but no hard and fast rules. The closest I’ve found to a universal trait amongst scientists is that they (or at least the Scottish ones) realize that humans are prone to error, and that the only way to account for that is rigorous application of the scientific method. But even those scientists who think this way screw up, too, because scientists are humans.

Which brings us back to Einstein. The basic outline you see in every account of Einstein’s life is that in 1905 he began two revolutions: one quantum and one relativistic. Eleven years later he completed relativity by introducing the general theory of relativity. The rest of his life, however, was spent on quantum theory, a theory with which he was never satisfied. The quote everyone brings up here is, “I, at any rate, am convinced that He does not throw dice.”

Einstein was dissatisfied with the intrinsically probabilistic nature of quantum mechanics and argued throughout his life that quantum theory had to be incomplete. His most cogent critique of the theory came in the form of the EPR paper. In it, Einstein, Podolsky, and Rosen showed that two particles could become entangled such that knowing the state of one particle instantly told you the state of the second particle, no matter how far away it was, without having to measure it. The two solutions to this apparent paradox were faster-than-light communication or hidden variables. Einstein and others believed the latter, that there was information about a system that we just didn't know how to measure yet, and that this meant quantum theory was incomplete.

But Einstein was wrong. Decades later, John Stewart Bell showed that reality had to behave in a particular way if there were local hidden variables, and in a different way if quantum systems were non-local. Numerous experiments were carried out, and they confirmed that local hidden variable theories were incompatible with reality. Quantum mechanics, to the extent that it described the probabilistic behavior of quantum systems, was a complete theory.
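
To get a rough feel for what Bell's bound says, here's a toy sketch I cobbled together from the standard CHSH setup (with all the caveats of the next paragraph): it compares the quantum prediction for a particular combination of correlations, which reaches 2√2, against one simple local hidden variable model, which can never beat 2.

```python
import math
import random

# Toy CHSH comparison: quantum correlations vs. one simple local
# hidden variable model. For the singlet state, E(a, b) = -cos(a - b);
# any local hidden variable model satisfies |S| <= 2 (the CHSH bound).

def chsh(E):
    # Standard CHSH detector angles that maximize the quantum value.
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

def E_quantum(a, b):
    return -math.cos(a - b)

def E_local(a, b, trials=100_000):
    # Each pair carries one hidden angle lam; each detector outputs +/-1
    # depending on whether its setting is within 90 degrees of lam.
    total = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(a - lam) > 0 else -1
        B = -1 if math.cos(b - lam) > 0 else 1  # anticorrelated source
        total += A * B
    return total / trials

print(f"quantum |S| = {abs(chsh(E_quantum)):.3f}")  # 2*sqrt(2) ~ 2.828
print(f"local   |S| = {abs(chsh(E_local)):.3f}")    # ~2 up to sampling noise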

(I should add that I have only a very surface level understanding of this stuff. I know that quantum entanglement has been demonstrated in labs. I also know that there are theoretical loopholes to Bell's theorem that might vindicate Einstein but that most scientists think will not. I don't really know the math and science behind this debate. Ask me again in a few years.)

Einstein was dead by the time all this happened, so we cannot know how he would have reacted to being proven wrong, but for as long as he lived he stubbornly refused to accept the reality of quantum mechanics. Despite it being an extraordinarily precise and well-tested theory, Einstein objected to it on philosophical grounds.

How could Einstein, a brilliant and revolutionary thinker who proposed that light waves were particles, that time could be stretched and space curved, fail to imagine that reality might be a bit random? Because he was human. And humans make mistakes.

Yet despite Einstein’s brilliance, despite his celebrity and his authority, most scientists eventually rejected Einstein’s notions about quantum mechanics and came to accept the non-local ramifications of the theory. They didn’t do so because they were smarter than Einstein or better scientists; they did so because no one individual is responsible for this thing we call science. Science is a process, a methodical and ruthless tool for separating fact from fiction, and it has proven enormously successful.

Individual scientists can screw up, entire generations can dogmatically subscribe to an incorrect theory, and charlatans can purposefully promote bad science, but science carries on. Eventually, the scientific method works itself out. This process we have discovered, of rigorously testing theory against observations, is perhaps the closest we humans have come to transcending our biological limits.

For reasons I don’t really want to get into right now, I can think of nothing more important for us to do. But the essence of it is that I believe becoming something greater than ourselves, greater than the sum of our parts, is the only way we can truly understand the universe. And that’s why I’ve decided to become a scientist.

Tuesday, April 23, 2013

I have seen the ∂²E(x,t)/∂x² = µ₀ε₀ ∂²E(x,t)/∂t²!

(There's some vector notation that I wasn't able to figure out how to get into a title. And that's an exclamation point, not a factorial.)
So, I'm reading ahead in my physics textbook, and I've reached the culmination of all this electricity and magnetism stuff. That equation up there, a nice little second-order partial differential equation, is arguably the pinnacle of 19th century physics and Maxwell's greatest contribution. (It's my understanding, however, that Maxwell didn't actually use that notation, and that there are plenty of other possibly more useful ways to write the equation. Nevertheless, that's my textbook's presentation.)
So what does that equation say? Well, a more or less literal translation says that the way an electric field varies across space is related to the way it varies over time by the constants µ₀ and ε₀. This relationship arises from Maxwell's equations. The ones that are most relevant here are Faraday's Law and Ampere's Law.
Faraday's Law tells us that a changing magnetic field induces an electric field. I mentioned that in this post. The most frequent application of this law is in, well, almost every method of power generation we have. Some process (burning coal, burning gas, burning uranium) causes water to boil, and that water spins a turbine connected to a magnet, and that magnet's magnetic field moves through space, which sets up an electric field (and a corresponding current) in some conveniently placed wires. The faster the magnetic field changes, the stronger the current. But as soon as the magnetic field stops changing, the current dissipates. It is only while the field is changing that an electric field is generated.
Ampere's Law, in this context, tells us something similar. It says that a changing electric field, multiplied by µ₀ and ε₀, produces a magnetic field.
I suspect you can see the symmetry here. A changing electric field produces a magnetic field, and a changing magnetic field produces an electric field. If the rate at which an electric field changes is increasing, then the magnetic field it produces will also be increasing. And a magnetic field that is increasing in strength will produce an electric field that increases in strength in a direction opposite to the first electric field, which will tend to diminish the strength of the first electric field. This symmetry creates a sort of back and forth seesaw effect between electric and magnetic fields.
The way this connects back to the equation of the title is that the rate at which a rate is changing is known as a second derivative. The most common example is acceleration. Velocity is the rate at which position changes. Acceleration is the rate at which velocity changes, or the rate at which the rate at which position changes changes. And as you saw, an "accelerating" electric field produces a changing magnetic field, and vice versa. The ² in ∂²E(x,t)/∂t² means we're talking about second derivatives.
Okay, what's the point of all that? The point is that, traditionally, an electric field is set up by charged particles. An electron sitting by itself creates an electric field that extends radially beyond it. And magnetic fields are usually caused by moving charges. An electron flying off by itself at a constant speed will have a magnetic field encircling it. But Maxwell's equations say that you can get an electric field just by shaking a magnetic field around, and you can get a magnetic field just by shaking an electric field around. You don't need any charges at all (beyond an initial one to set up whichever field comes first). Electric and magnetic fields sustain each other, giving rise to electromagnetism.
On a basic level, I knew this beforehand. I didn't know the details or the math, however. But what really made the concept click for me was a discussion in an earlier chapter about LC circuits--that is, circuits composed of inductors and capacitors. I already discussed what a capacitor does in my last post, but an inductor is something new. An inductor is a circuit element that takes advantage of Faraday's Law in order to modulate the current in a circuit.
So, a current is just a bunch of moving charges, which means that all currents create magnetic fields. But when the current changes, as it does in AC circuits, you set up a changing magnetic field, which in turn creates an electric field that opposes the current change. The faster the current changes, the stronger the resultant magnetic and electric fields are. In essence, energy is being taken out of the current and put into the magnetic field. An inductor is an element designed to maximize the energy pumped into the magnetic field. This sounds a lot like a capacitor, where energy is being taken out of a current and stored in an electric field.
What happens when you put these two together? Well, as we saw, when you discharge a capacitor, it expels its electric energy very quickly at first and then slows down. But an inductor opposes current change, so the sharp increase from zero current is curbed by the creation of a strong magnetic field in the inductor. The effect is to take the energy stored in the electric field and place it into the magnetic field.
Once the capacitor is fully discharged, the current should stop, but again, inductors oppose current change. So instead, the inductor releases the energy stored in its magnetic field to keep the current flowing. The capacitor, down to zero charge, now begins to charge negatively. That is, it builds up electrons on the opposite plate. The energy from the magnetic field is transferred back to the electric field. With no resistance, this oscillation continues indefinitely, trading energy between the electric and magnetic fields of the circuit. The rate at which this happens depends on the properties of the circuit, but the general shape of the interaction is going to be a sine wave.
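Here's a toy numerical sketch of that seesaw (my own, not from the textbook): it steps the ideal LC equations forward in time and prints the energy in each field every quarter period, with made-up component values.
```python
import math

# A toy simulation of an ideal LC circuit. The charge Q on the capacitor
# and the current I obey
#   dQ/dt = I,   L * dI/dt = -Q / C   (no resistance, so no losses).
# Component values are made up for illustration.
L = 1.0e-3    # inductance, henries (1 mH)
C = 1.0e-6    # capacitance, farads (1 uF)
Q = 1.0e-6    # initial charge, coulombs (capacitor starts full)
I = 0.0       # initial current, amperes

dt = 1.0e-8                              # time step, seconds
period = 2 * math.pi * math.sqrt(L * C)  # expected period, ~0.2 ms

t = 0.0
for _ in range(8):
    for _ in range(int(period / 4 / dt)):  # advance one quarter period
        I += -(Q / C) / L * dt             # semi-implicit Euler step
        Q += I * dt
        t += dt
    E_cap = Q * Q / (2 * C)   # energy in the electric field
    E_ind = L * I * I / 2     # energy in the magnetic field
    print(f"t = {t*1e3:.3f} ms   E_cap = {E_cap:.2e} J   E_ind = {E_ind:.2e} J")
```
Run it and you'll see the energy slosh completely from the capacitor to the inductor and back every half period, exactly the sine-wave trade described above.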
Yes, that's right, when you move energy between electric and magnetic fields, you get a wave. One might even be tempted to call it an electromagnetic wave. In fact, the equation in the title takes the form of the general "wave equation" that can also apply to the wave-like motion of a spring or a sound wave or a whole host of other physical phenomena. The general equation looks like this: ∂²u/∂x² = (1/v²) ∂²u/∂t², where u is some wave-like phenomenon, and v is the speed at which that wave propagates.
If that's the case, then µ₀ε₀ takes the place of 1/v² in an electromagnetic wave. Carrying out the algebra means that the speed of an electromagnetic wave should be 1 divided by the square root of µ₀ε₀. µ₀ = 4π×10⁻⁷ and ε₀ = 8.85×10⁻¹². Multiply these and your product is 1.11×10⁻¹⁷. Take the square root of that and you get 3.33×10⁻⁹. Find the reciprocal of that and you arrive at 2.99×10⁸. This, of course, is the speed of light in vacuum. Light is a self-propagating wave of electromagnetic energy.
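And here's that chain of arithmetic as a two-line sanity check (constants rounded the way my textbook gives them):
```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
eps0 = 8.85e-12            # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"1/sqrt(mu0*eps0) = {c:.3e} m/s")  # ~2.998e8 m/s: the speed of light
```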
I have seen the light.

(There's a bit of cheating here. Those constants are now defined in relation to the speed of light, so of course the algebra works. But way back in the 19th century, ε₀ was a proportionality constant that described the ability of the vacuum to act as a capacitor, and µ₀ was a proportionality constant related to the magnetic force between two lengths of current-carrying wire a meter apart. And it was thought of as a rather interesting coincidence that putting those two constants together got you the speed of light. Maxwell set 'em all straight.)

Friday, April 19, 2013

Capacitors fully charged, Captain.

(I'm going to try to post about once a week from now on. We'll see.)
On Wednesday we did a fun lab that involved charging capacitors in an RC circuit. I usually only hear about charging the capacitor banks in Star Trek and other such shows, so this was a neat experience.
The point of the lab was to observe the exponential decay of a discharging capacitor. A capacitor is a circuit element that takes the energy of a voltage source, such as a battery, and stores it for later use. The energy comes from moving charges, which are clumped together on capacitor plates. But like charges repel, so something needs to keep the charges in place. That something is another nearby capacitor plate where opposite charges are also being clumped.
If the space between the two plates is non-conducting, then the charges will feel an attraction to the other side but won't be able to do anything about it, and this attraction balances the repulsion on their side. A capacitor has reached its, um, capacity when the voltage source is unable to overcome the repulsive force of the charges already on the plate. Each time you add an electron to the capacitor, it gets harder to add another one, because there's even more negative charge repelling the next electron. So every capacitor is characterized by the amount of charge it holds per volt, a quantity that is measured in farads.
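In symbols, the charge is Q = CV and (jumping ahead to the next paragraph) the stored energy is U = CV²/2. A throwaway example with made-up values:
```python
# Charge and stored energy for a capacitor: Q = C*V, U = C*V^2/2.
# Component values are made up for illustration.
C = 1e-6            # 1 microfarad
V = 9.0             # charged by a 9 V battery
Q = C * V           # 9e-6 C: nine microcoulombs on each plate
U = 0.5 * C * V**2  # 4.05e-5 J of energy in the electric field
print(f"Q = {Q*1e6:.1f} uC, U = {U*1e6:.1f} uJ")
```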
Once the capacitor is fully charged, the energy is said to be stored in the electric field between the two capacitor plates. An analogous situation is a crane suspending a large hunk of steel in the air. The crane's engine supplies the energy necessary to lift the mass up, and the energy is then stored in the earth's gravitational field as potential energy. If the crane lets go of the steel, that potential energy is converted into kinetic energy and the steel slams into the ground. Similarly, if the capacitor is discharged, the electric potential energy transforms into kinetic energy, shooting the electrons out and establishing a current.
(In a further extension of this analogy, if a crane tries to lift something too heavy for too long, the cable will snap, damaging the whole setup. If too much voltage is applied to a capacitor, then the insulating material between the plates momentarily transforms into a conductor and charge shoots across, ruining the capacitor. This is lightning. When charge builds up during a storm, the air acts as an insulator between the clouds and the ground. Once the voltage gets too high, the air ionizes and creates a path for the electrons.)
What differentiates this from gravity is that the force pushing the charges off the plate is proportional to the number of charges on the plate in that instant. For the same reason that it gets harder and harder to add more charges onto a plate, the charges initially leave the plate very quickly. But then, as they leave, there are fewer repulsive charges in place, and the remaining ones leave more slowly.
When the rate at which some amount of stuff changes is proportional to the amount of stuff present, you have an exponential process. Radioactive decay is exponential because it depends on the amount of an isotope present in a sample. Population growth is ideally (that is, under perfect conditions, not as in I want it to be that way) exponential because the more people there are, the more children will be born.
The most recognizable feature of an exponential process is that the amount of time it takes for the process to double (or halve, or reach any specified multiple) is constant. This is where we get the idea of half-life. If some radioactive material has a half-life of a thousand years, and you've got 50 grams of it, then after a thousand years you will have 25 grams, but it takes another thousand years to get down to 12.5 grams.
Which leads us to this pretty graph:


As you can see, our capacitor discharged itself in a very cooperative exponential fashion. But it didn't cooperate fully. Every exponential decay has an associated "time constant" which is the amount of time that has to pass before the sample has reached 1/e of its original size. In an RC circuit, the time constant turns out to be the product of the circuit's resistance and capacitance. We had a 22 megaohm resistor and a 1 microfarad capacitor, which gave us a time constant of 22 seconds.
If you analyze the data we collected, however, our half-life turns out to be about 24 seconds. Now, the half-life is related to the time constant by t½ = τ·ln 2, which means the half-life should always be shorter than the time constant, so right away it should be obvious that something is wrong. It absolutely has to take longer for the voltage to decay to 1/e (.368) of its starting value than to 1/2 of its starting value. So if we accept that our calculated half-life is correct (which I'm willing to do, because it's such a pretty graph), then the time constant must actually be around 35 seconds, higher than predicted by about 13 seconds. For the time constant to be higher, there must be more total capacitance or resistance in the circuit.
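Here's that back-of-the-envelope calculation spelled out, using our nominal component values and measured half-life:
```python
import math

# Inferring the missing resistance in the RC discharge lab.
R = 22e6        # the resistor, ohms (22 megaohms)
C = 1e-6        # the capacitor, farads (1 microfarad)
t_half = 24.0   # measured half-life of the voltage, seconds

tau_expected = R * C                  # what RC predicts: 22 s
tau_measured = t_half / math.log(2)   # from t_half = tau * ln(2): ~34.6 s
R_total = tau_measured / C            # total resistance implied by the data
print(f"expected tau:       {tau_expected:.1f} s")
print(f"measured tau:       {tau_measured:.1f} s")
print(f"missing resistance: {(R_total - R) / 1e6:.1f} megaohms")  # ~12.6
```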
But we have a fairly simple circuit that looks like this:


(We are fortunate that our circuit diagrams are not graded on artistic merit.)
When we close the switch, the battery charges the capacitor but is hampered by the resistor. The DMM is a digital multimeter, which measures the voltage across the circuit. When we open the switch, the battery is no longer a part of the circuit, the electromotive force keeping the capacitor charged goes away, and the capacitor discharges through the circuit. We measured this with the DMM.
Now, the wires have resistance, but not on the order of 12 or 13 megaohms. And it would be nearly impossible to add any capacitance to the system because capacitors in series have a lower net capacitance than that of each individual capacitor. Capacitors in parallel can add up, but we don't have anything in parallel when the switch is open. The only explanation is that the DMM has some internal resistance. As it turns out, the DMM is supposed to have an internal resistance of between 10 and 20 megaohms. So our 12 missing megaohms fit perfectly into that range. w00t.
As a final note, there was another section of the lab dealing with fast decay on the order of milliseconds. Because we can't measure that with our eyes, we used the oscilloscope to plot voltage versus time. As I explained before, the scope's display is a grid of squares. The width of each square corresponds to a length of time, the height to a voltage, and how much time and voltage each square represents is calibrated with knobs. Anywho, we were measuring half-life, and we saw that the voltage decayed to half its original value across 1.5 squares. Our time knob was set to .5 milliseconds.
One of my partners will only take raw data, so he wrote down 1.5 * .5 milliseconds and left it at that, because he didn't want to make any math errors during the lab. My other partner whipped out his calculator and began typing 1.5 * 5×10⁻⁴ into it. I looked at them both rather strangely and said "Guys, you just divide by two. It's .75 milliseconds." They looked at me like I was crazy and kept writing/calculating. Sigh.

Friday, April 12, 2013

Stand Back! I'm Doing Fake Science.

(Sorry, Randall Munroe.)

Okay. So it turns out that simultaneously working full time and going to class nearly full time is hard. Who knew? Then there's that short story I was working on. And theoretically there are friends who need reminders of my existence as well. Anyway, that doesn't leave a lot of time for blogging. But here I am, doing a post about motherfucking magnets.

How do they work? I don't know. Something about spin and charge. My guess is there's no good explanation for where magnetism comes from (by which I mean the magnetic field, not why magnets attract) until I get to quantum mechanics.

Anywho, we did a lab on Wednesday to demonstrate some of the principles of magnetism. There was a neat section of the lab where we played with solenoids, magnets, and batteries to see how a changing magnetic field induces a current and all that jazz. But the main part of the lab was playing with an e/m apparatus, which looks something like this:


Yes, it was really that awesome. So, the two parallel coils of wire are known as Helmholtz coils, and they produce a nearly uniform magnetic field between them. The bulb in the middle houses an electron gun and is filled with helium. The contraption to the right (we had a different one) is just a power supply that can vary its voltage and amperage.

This setup is apparently a pretty classic lab and is supposed to mirror a series of experiments done by J. J. Thomson when he discovered the electron and measured the ratio of its charge to its mass. The electrons in this experiment are visible as that blue-ish ring in the bulb. The blue itself is just the helium being excited by fast-moving electrons. And the electrons are moving in a circle because, well, because magnetism is a weird force.

If you'll allow me to demonstrate with a crappy MS Paint illustration. The magnetic field only works on charges that are in motion relative to the field. And, unlike more traditional forces that just push or pull, the magnetic field creates a force that is perpendicular to both the velocity of the charge and the direction of the magnetic field (this is known as the cross product in vector notation).


If we imagine that this is a box, and you have a charged particle (blue) moving west, and a magnetic field (purple) directed north, then the resultant force on the charge (if it's negative) will be up. But our electron is moving in a circle. The reason why is that as soon as the electron changes direction due to the magnetic field, the force acting on it also points in a slightly new direction so that it's still perpendicular to the electron's motion. The end result is that the electron experiences a force that is everywhere at right angles to its motion, and this gives rise to the equations for circular motion.
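
If you'd rather trust a computer than my illustration, here's the same setup run through the Lorentz force law, F = qv × B (the speed and field strength are made up; only the directions matter here):

```python
import numpy as np

# Checking the MS Paint picture against the Lorentz force, F = q v x B.
# Coordinates: x = east, y = north, z = up.
q = -1.602e-19                    # electron charge, coulombs (negative!)
v = np.array([-1.0e6, 0.0, 0.0])  # velocity: due west (made-up speed)
B = np.array([0.0, 2.0e-3, 0.0])  # magnetic field: due north (made-up size)

F = q * np.cross(v, B)
print(F)  # [0, 0, +3.2e-16] N: v x B points down, and q < 0 flips it up
```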

So, in the experiment above, the magnetic field is directed in a horizontal line between the two coils, and the electron gun is aimed down. The cross product of that is a fuzzy blue circle.

How big the circle is depends on four quantities: the strength of the magnetic field, the voltage applied to the electrons, and the mass and charge of the electrons. So, if you know those first two, you can measure the ratio of charge to mass. You can't separate the two variables, however, because there's just the one equation. Robert Millikan eventually found a way to measure the charge of the electron alone.

But today, scientists know both values to incredible precision, and our job wasn't to figure out that ratio. Rather, we were trying to measure the strength of the magnetic field according to the radius of the electron circle and its speed. The relevant equation is r = mv/qB, which tells us that the faster the electron is moving (which depends on the voltage of the electron gun), the stronger the magnetic field (B) has to be to keep it at a constant radius.
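
A sketch of that calculation, using the standard relation eV = mv²/2 for the gun (the voltage and radius below are placeholders, not our actual lab numbers):

```python
import math

# From accelerating voltage to field strength: e*V = m*v^2/2 gives the
# electron speed, and r = m*v/(q*B) then gives the field B.
e = 1.602e-19   # electron charge, C
m = 9.109e-31   # electron mass, kg
V = 150.0       # accelerating voltage, volts (illustrative)
r = 0.05        # radius of the electron circle, meters (illustrative)

v = math.sqrt(2 * e * V / m)  # electron speed, ~7.3e6 m/s
B = m * v / (e * r)           # the field that bends it into that circle
print(f"v = {v:.2e} m/s, B = {B*1e3:.2f} mT")
```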

We can also predict the value of the magnetic field based on the geometry of the Helmholtz Coil and the current it receives. Which leads to this graph:




We varied the strength of the current between 1.6 and 2.9 amps, which resulted in a magnetic field of between 1.30 and 2.34 milliteslas. But, as you can see, those values are all slightly higher than the predicted values for a given amperage. The average discrepancy is small, about 3.5%, which amounts to .06 mT. What could be causing this systematic error? It could be an inaccurate reading of the current, or an imprecise model of the coils' magnetic field, or any number of other things.

Or it could be the Earth's magnetic field, which a little googling tells me is between .03 and .06 mT. Science: It works, bitches.
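
(For the curious: the predicted values presumably come from the standard on-axis field of a Helmholtz pair, B = (4/5)^(3/2)·µ₀NI/R. The lab sheet has the real turn count and radius; the values below are typical of classroom apparatuses and purely assumed.)

```python
import math

# Sketch of the predicted field of a Helmholtz pair:
#   B = (4/5)^(3/2) * mu0 * N * I / R
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
N = 130                   # turns per coil (assumed, not from the lab)
R = 0.15                  # coil radius, meters (assumed, not from the lab)

for I in (1.6, 2.0, 2.5, 2.9):  # currents spanning the range we used, amps
    B = (4 / 5) ** 1.5 * mu0 * N * I / R
    print(f"I = {I:.1f} A  ->  B = {B*1e3:.2f} mT")
```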