The Mammoth Reads series is to be a (hopefully) regular to semi-regular shortlist of (hopefully) interesting things I've read recently. (Hopefully) you'll click a link or two.
Most of these lists will not have long, ridiculous, impossible-to-read titles like this one, but I figured I would kick this series off in irritating fashion.
Prof. Andrew Gelman counters a few claims from a Weekly Standard editorial by David Rubinstein, an emeritus professor at the University of Illinois at Chicago, in which Rubinstein claims that professors are paid too much for their "cushy" jobs. Rubinstein is of the opinion that the current system—namely the tenure track—encourages laziness. Gelman makes some interesting observations about the function of good salaries and benefits in luring top-notch professors, and seems not to buy Rubinstein's impression that these are necessarily bad things. Gelman also suggests that Rubinstein simply might be a bit lazier than most college profs, that he might be erroneously using his own lack of zest for the classroom as a metric by which to measure his peers. (In all fairness to Rubinstein, he does seem to lob some valid criticisms of professorship in his original piece, which Gelman also links to.)
We all know by now that the quantum world doesn't make any kind of intuitive sense. A photon goes along its merry way existing in a state of wave-particle duality, and the minute someone tries to measure it the wave state collapses. (There's a joke about my Saturday nights in there somewhere, but I'll let someone else find it.) Well, the BBC has a nice human-readable explanation of a study that adds a new(ish) twist to the double-slit experiment. Traditionally in this experiment, photons are monitored individually as they pass through the slits, a form of "strong observation" that inevitably weakens the interference pattern and causes the photons to act more like particles. The new twist is a successful use of "weak observation" that preserves the interference pattern, allowing the observer to infer photons' paths by averaging the activity of a large number of them rather than attempting to monitor each individual photon.
Anyway, the article does a much better job than I do of sketching out the basics. I'm sure a scientist, or even a scientifically literate layman, would flog me for the rubbishy explanation in the preceding paragraph.
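That said, the fringes-versus-no-fringes distinction is easy to see numerically. Here's a toy Python/NumPy sketch of my own (not from the study; the wavelength, slit spacing, and screen distance are made-up illustrative values) comparing amplitudes that add coherently with probabilities that add after a which-path measurement:

```python
import numpy as np

# Toy model: two slits as point sources, a screen a distance L away.
wavelength = 500e-9           # 500 nm light (illustrative value)
k = 2 * np.pi / wavelength    # wavenumber
d = 20e-6                     # slit separation
L = 1.0                       # slit-to-screen distance
x = np.linspace(-0.02, 0.02, 2001)  # positions on the screen

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)  # path length from slit 2

# No which-path info: amplitudes add, THEN square -> interference fringes.
coherent = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2)) ** 2

# Strong (which-path) observation: probabilities add -> no fringes.
incoherent = np.abs(np.exp(1j * k * r1)) ** 2 + np.abs(np.exp(1j * k * r2)) ** 2

# Fringe visibility (max - min) / (max + min): near 1 with fringes, 0 without.
def visibility(intensity):
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

print(round(visibility(coherent), 2), round(visibility(incoherent), 2))  # 1.0 0.0
```

The point of the toy: collapse isn't modeled at all here; the only difference between the two lines is whether you square before or after summing, which is exactly the bookkeeping difference that which-path information makes.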
Are we living in the Anthropocene Epoch? Geologists think so, and based on their reasoning that we humans have left some permanent chemical and radioactive traces in our layer of the Earth, it's difficult to argue with them.
From the article:
Anthropocene, a term conceived in 2002 by Nobel laureate Paul Crutzen, means "the Age of Man", recognising our species' ascent to a geophysical force on a par with Earth-shattering asteroids and planet-cloaking volcanoes. Geologists predict that our geological footprint will be visible, for example, in radioactive material from the atomic bomb tests, plastic pollution, increased carbon dioxide levels and human-induced mass extinction.
Now that's a legacy to be proud of: planet killers.
(Disclosure/Tangent: I agree—based on my own uncanny and unchallengeable horse sense—with Bill Gates' assessment that small-scale green tech will not be enough to curb climate change; we need a paradigm shift in energy production. Being "green" is nice, but oftentimes it's easy to fall into the culture of buzzwords. Vinnie handles a few green pitfalls over at "Rifraff and Bugaboos.")
Why are researchers (especially medical researchers) often unable to replicate, over time, experiments that initially yielded positive results?
Researcher bias and publication bias are the obvious culprits that come to mind. Dr. Steven Novella, author of NeuroLogica Blog, takes us through the Decline Effect, as well as a few claims from a Nature News article that conflate quantum-mechanical observer effects with large-scale phenomena. Novella is pretty reasonable about it, though, and acknowledges that the article does correctly identify the Decline Effect as (likely) a research artifact.
Remember, science is messy. It is not dogma and is always subject to revision.
Quantum computing scares and excites me. If it ever becomes viable at scale, much of our current encryption will be rendered obsolete: as I understand it, Shor's algorithm would break the public-key systems (RSA and its relatives) that secure most of our communications, though symmetric ciphers should survive with longer keys. Whereas a classical bit exists as a 0 or a 1, a qubit can exist in a superposition of both states at once, and a register of n qubits carries amplitudes for all 2^n bit strings simultaneously, which is what lets certain quantum algorithms run at mind-numbing rates. We're in an either/or world on the brink of becoming a both/and one. Because of this, quantum computing may become one of the most useful and powerful tools humans have invented. We may not understand the solutions it yields at first, but the potential for discovery of all kinds will swell suddenly.
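To make the either/or versus both/and point concrete, here's a minimal NumPy sketch (my own toy illustration, nothing more) of a qubit in superposition and of how a register's amplitude count doubles with each added qubit:

```python
import numpy as np

# A qubit is a 2-entry complex vector: |0> = [1, 0], |1> = [0, 1].
zero = np.array([1.0, 0.0])

# The Hadamard gate puts it into an equal superposition of 0 and 1.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
plus = H @ zero

# Measurement probabilities are the squared magnitudes of the amplitudes:
# a 50/50 coin flip between reading 0 and reading 1.
probs = np.abs(plus) ** 2
print(probs.round(2))  # [0.5 0.5]

# A register of n qubits carries 2**n amplitudes at once; here n = 3,
# built with the Kronecker (tensor) product.
register = np.kron(np.kron(plus, plus), plus)
print(register.size)  # 8 = 2**3
```

Each extra qubit doubles the length of the state vector, which is also why simulating quantum computers classically gets expensive so fast.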
The problem, however, is that quantum computers are very unstable and can only exist on a small scale. Current quantum computers rely on entanglement in order to work their magic, and the entangled state is an exceedingly fragile one: Even minimal interference from outside energy sources can break the system.
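For a picture of what's being protected, here's a small NumPy sketch (again, just a toy of mine) that builds the textbook entangled Bell pair and shows how a single stray phase kick on one qubit, the kind of thing environmental noise delivers constantly, already changes the joint state:

```python
import numpy as np

# The textbook recipe for an entangled Bell pair: H on qubit 0, then CNOT.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1.0, 0, 0, 0],   # flips qubit 1 when qubit 0 is 1
                 [0, 1.0, 0, 0],
                 [0, 0, 0, 1.0],
                 [0, 0, 1.0, 0]])

state = np.array([1.0, 0, 0, 0])          # two qubits, both |0>
state = CNOT @ (np.kron(H, I2) @ state)   # -> (|00> + |11>) / sqrt(2)
print(state.round(3))  # amplitudes 1/sqrt(2) on |00> and |11>

# Fragility: a phase "kick" (a Z error) on just one qubit flips the
# relative sign and turns it into a different, orthogonal Bell state.
Z = np.diag([1.0, -1.0])
kicked = np.kron(Z, I2) @ state           # -> (|00> - |11>) / sqrt(2)
print(np.abs(np.dot(state, kicked)))      # overlap 0: fully distinguishable
```

That zero overlap is the whole problem in miniature: an interaction too small to notice classically is enough to knock an entangled register into a state that gives wrong answers.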
But what if a quantum computer didn't need to rely on entanglement in order to work? What if it actually relied on (or at least accepted) a certain amount of chaos while operating?
From the article:
In a typical optical experiment, the pure qubits might consist of horizontally polarized photons representing 1 and vertically polarized photons representing 0. Physicists can entangle a stream of such pure qubits by passing them through a processing gate such as a crystal that alters the polarization of the light, then read off the state of the qubits as they exit. In the real world, unfortunately, qubits rarely stay pure. They are far more likely to become messy, or 'mixed' — the equivalent of unpolarized photons. The conventional wisdom is that mixed qubits are useless for computation because they cannot be entangled, and any measurement of a mixed qubit will yield a random result, providing little or no useful information.
But Knill and Laflamme pondered what would happen if a mixed qubit was sent through an entangling gate with a pure qubit. The two could not become entangled but, the physicists argued, their interaction might be enough to carry out a quantum computation, with the result read from the pure qubit. If it worked, experimenters could get away with using just one tightly controlled qubit, and letting the others be battered by environmental noise and disorder. [...]
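The pure-versus-mixed distinction in the quote can be made concrete with density matrices. A toy NumPy sketch of my own (a sketch of the objects involved, not of Knill and Laflamme's actual algorithm):

```python
import numpy as np

# Density matrices describe both pure and "messy" mixed qubits.
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # a pure |+> qubit
rho_pure = np.outer(plus, plus)
rho_mixed = np.eye(2) / 2                  # maximally mixed: a classical coin

# Purity Tr(rho^2) tells them apart: 1 for a pure state, 0.5 for fully mixed.
def purity(rho):
    return np.trace(rho @ rho).real

print(round(purity(rho_pure), 2), round(purity(rho_mixed), 2))  # 1.0 0.5

# Measuring the mixed qubit by itself is uninformative: either outcome
# has probability 0.5 no matter what basis you pick.
p0 = rho_mixed[0, 0].real
print(p0)  # 0.5

# The one-clean-qubit idea pairs a single pure qubit with mixed ones; the
# initial joint state is a tensor product, and (after the computation,
# which isn't modeled here) the answer is read off the clean qubit.
rho_joint = np.kron(rho_pure, rho_mixed)   # 4x4 two-qubit density matrix
print(rho_joint.shape)  # (4, 4)
```

The surprise in the article is that this lopsided arrangement, one babysat qubit plus a crowd of noisy ones, might still buy you a genuine quantum speedup.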
A debate continues about the efficacy of disorder in quantum computing systems, and I suppose we'll see just how much this technology evolves in the coming years.
Of course, I'm just another moron with a blog who can't be trusted to switch the laundry, but this one got me real excited.