
Beautifying the Kindle App for Android


If you're anything like me, you are very particular about your reading devices. One of the big improvements moving from the older generations of the Kindle to the Paperwhite was the ability to add new fonts to the device quite easily, enabling you to pick the perfect font for whatever book you happened to be reading. Prior to that, you had to jailbreak the device.

However, despite its fantastic handling of ebooks, the Paperwhite doesn't handle PDFs very well, and I need to read and annotate PDFs on the go more and more these days because that's the format in which I prefer to read journal articles. Not wanting to open up my laptop just to read a PDF, I bought a Nexus 7. (Unfortunately, most PDF markup apps are terrible, so I'm still in a bit of a pickle. Foxit crashes; Skitch allows only freehand highlighting; QuickOffice doesn't allow any PDF markup; and Adobe Reader, I hate to say, comes out on top, although it's far from ideal. Suggestions are welcome.)

But this left me with yet another predicament, as I would now be carrying around four devices regularly: an e-reader, a smartphone, a tablet, and a laptop. That seemed, well, superfluous, and as hard as it was to make the final call—because I really like the device—I have decided to essentially retire the Paperwhite and get used to reading books on my tablet. The transition was not as hard as I'd imagined. Reading on a tablet screen is in fact very nice. At this point, I can't say I miss e-ink all that much.

The Kindle app is a vexing beast, though. It's fairly nice at first sniff, but it doesn't allow the reader to share quotes to social networks the way the Paperwhite does. Perhaps more annoying yet is the inability to change or add fonts; the app uses the system default serif and sans-serif, and that's it. On my Nexus those defaults are DroidSerif and DroidSans, respectively. Both are fine, but when I read books, I like fonts that are used as typefaces in print, despite having almost entirely overcome my nostalgia for the page, with a few exceptions. (I read A Song of Ice and Fire in hardcopy, for instance.)

So I found an article from Android Authority and, using the method outlined in the section titled "Manual method using file manager app," located near the end of the piece, I changed the default system serif. (Note: Swapping in these new font files will affect any app or browser that relies on them. The font I've selected now shows up selectively in Chrome and the Goodreads app, among others, and that's just fine by me. In fact, it's made reading articles in Chrome slightly more palatable.) Unrooted options are available to you, but for reference, my Nexus is rooted, running CyanogenMod 10.2.

For best results in the Kindle app, you'll want to find TrueType fonts (TTF) with the following four variants available: Regular, Italic, Bold, and BoldItalic. Those should cover any formatting you're likely to find in a Kindle ebook. Personally, I recommend Latin Modern Roman, available at Font Squirrel for free. That font is actually provided in OTF format, but you can Google "OTF to TTF converter" and find a free conversion option you're comfortable with. Then you'll be able to collect the TTF files you need in one place and transfer them to your device however you'd like. (I just uploaded them to Google Drive.)

I first fell in love with Latin Modern Roman when I started using LaTeX to create PDFs. It provides a positively book-like experience, which is to say it's very easy to read, even more so than DroidSerif, itself a decent font all things considered. Sorts Mill Goudy and Baskerville are two other good options, but not all of the variants I listed are available for free.

The only downsides to using the method I did are that you'll need 1) to be careful to back up the originals and 2) to rename and transfer TTFs any time you feel like changing the font. Personally, I didn't like the font changer I tried, preferring to just download and transfer the necessary files directly, as described.

Here's a screenshot of my Kindle app using Latin Modern Roman.

The Kindle app for Android, with Latin Modern Roman set as the default serif.

Pretty nice, eh? Now I can get back to allowing incessant Facebook notifications to interrupt my reading.

Update (02.12.2014): I only realized after flashing a new CM version that you'll need to repeat this process every time you update. As such, I now keep the Latin Modern Roman TTF files socked away on my SD card for quick transfer in the future.

Image: The Android robot is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.

Mammoth Reads: Scratching the Surface of Free Will (Determino-compatibo-dualism?)

I was going to write a whole post about my take on free will... but why? I will say nothing that hasn't been said before in much more worthy fashion by people with philosophical and scientific qualifications that can't be garnered simply by idly perusing RSS feeds on a Saturday afternoon in one's underwear. So I'm just going to give you a list of articles and essays I've read over the past few months that, I think, adequately parse different aspects of the free will debate. (I first heard about Benjamin Libet's experiments—just Google him, you'll find them—a few years ago, and that seed languished more or less undeveloped until recently.)

Two things before the list:

  1. Based on the current extent of my reading, I fall into the determinist camp these days, and I don't believe that, given the same conditions, we can choose other than we do.  Even if, as some have posited, random events complicate this statement, I don't see where freedom or control exist in indeterminacy.  Either way, we're beholden to events of which we have little or no knowledge.
  2. I am now retroactively mildly embarrassed by some of my previous comments on inhibition, originally written in response to a post by Robin Hanson at Overcoming Bias regarding culpability in sleep rape. My suspicion of the punitive instinct stands, as does my reluctance to equate waking and sleep states, but my current thinking on free will demands that I revise my insistence on an agent's having a choice to prevent or allow an action to take place once that person becomes aware of his/her behavior. Though so-called "free won't" isn't an entirely unhelpful concept, I'm backpedaling now on my insinuation that inhibition is a controlled reaction (duh). In my post I cited an article entitled "Do Conscious Thoughts Cause Behavior?", of which I only ever read about half, maybe a bit less, before deciding I understood Baumeister's point (read: tired of it). Conscious thoughts may have a role in decision-making, but they are determined as well—even if consciousness itself is astoundingly complex—and the experience of awareness is merely a byproduct of brain functions that we for the most part do not perceive. However, the review article does point to and attempt to counter Thomas Huxley's steam whistle hypothesis and, in so doing, perhaps unwittingly provides what I think is actually a pretty splendid shorthand for how consciousness probably works.  I provide, with caveats mostly irrelevant to this already overly long list item,  Baumeister's explanation of Huxley's analogy: "It [the steam whistle hypothesis] says conscious thought resembles the steam whistle on a train locomotive: it derives from and reveals something about activity inside the engine, but it has no causal impact on moving the train." There is a larger discussion to be had about the proper role of punishment in light of an increasingly nuanced understanding of consciousness, but I thought it important (for me, at least) to outline where I feel I erred in my original criticism of Hanson.

Ok. The list that follows is presented in whatever order strikes me as appropriate during the following minutes; suffice it to say the first two are my favorites.

James B. Miles. 'Irresponsible and a Disservice': The integrity of social psychology turns on the free will dilemma. British Journal of Social Psychology.
Miles criticizes the view that knowledge of a lack of free will would send society into an amoral/immoral tailspin and that people would, by definition, become selfish cretins.  Largely a criticism of social psychology and its relationship with free will (as Miles puts it, philosophical libertarianism, which is distinct from the political philosophy of the same name), this paper also provides a nice summary of determinism, compatibilism, and libertarianism, essential concepts to understand in order to appreciate the debate.

Sam Harris. Free Will. Simon and Schuster.
In what is, besides Miles's paper, my favorite piece of the bunch, Harris publishes what he claims will be his "final word" on his opinions regarding free will. This is a highly digestible essay that tackles a number of issues, from culpability to legal implications and personal understanding. (If free will were truly an illusion, we'd have to accept, as Harris says, that psychopaths were simply unlucky to have been born as they were.  Thus, hatred could not be warranted, nor could cruel punishment.  Presumably we would do what is necessary to protect society from murderers, rapists, etc., and forgo the revenge instinct, which is a programmed survival reaction but one that is not coherent once we take agency out of the equation.)

If you don't ever read this piece, do one thing that he suggests therein: sit down one day and simply pay attention to how your thought process operates. Thoughts just pop in there, to cop a Ray Stanz line from Ghostbusters. How could they do anything but?

(UPDATE: 4/14/2012)
Joshua Greene, Jonathan Cohen. For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society B: Biological Sciences.
Unfortunately, I read Greene and Cohen's article after writing this post, and it may be, as far as the practical implications of a modified conception of free will are concerned, the most interesting one on the list.  Drawing a distinction between consequentialist and retributivist intuitions, the authors argue for the former as a progressive, scientifically justified view of punishment, rather than the revenge-driven legal system that seeks to wring remorse out of the prisoner or right some cosmic moral scale.  The law's default disposition, compatibilism, will be challenged as advances in neuroscience make the causal chains that lead to decision-making more apparent; with this knowledge, a system aimed at punishment, rather than prevention, will seem untenable.

Their basic position may be summarized thus:

Existing legal principles make virtually no assumptions about the neural bases of criminal behaviour, and as a result they can comfortably assimilate new neuroscience without much in the way of conceptual upheaval: new details, new sources of evidence, but nothing for which the law is fundamentally unprepared. We maintain, however, that our operative legal principles exist because they more or less adequately capture an intuitive sense of justice. In our view, neuroscience will challenge and ultimately reshape our intuitive sense(s) of justice. New neuroscience will affect the way we view the law, not by furnishing us with new ideas or arguments about the nature of human action, but by breathing new life into old ones. Cognitive neuroscience, by identifying the specific mechanisms responsible for behaviour, will vividly illustrate what until now could only be appreciated through esoteric theorizing: that there is something fishy about our ordinary conceptions of human action and responsibility, and that, as a result, the legal principles we have devised to reflect these conceptions may be flawed.

Greene and Cohen's distinction between what law wants (retribution) and what people will want (compassion, or consequentialism) largely drives their assumption that certain types of large-scale change will be unavoidable, and warranted. However, they remain of the opinion that law has been molded in such a way that it may incorporate these new findings, as well as a shift in philosophy, without requiring an entirely new framework. Rather, the mechanisms of the system may simply be applied in a manner accordant with an understanding of free will and culpability informed by the latest science.  Libertarianism is out, and soon, compatibilism will be, too, they say.

For fans of thought experiments, the Boys from Brazil problem is an absolute must. Are we really so different than Mr. Puppet?

Massimo Pigliucci. The Incoherence of Free Will. Psychology Today.
I haven't included any writing by Daniel Dennett, a prominent compatibilist, because I haven't read any yet. But both Pigliucci and Harris mention his concept of a "free will worth having," an explanation that Pigliucci summarizes as follows:

What all of this seems to suggest is that the undeniable feeling of "free will" that we have is actually the result of our conscious awareness of the fact that we make decisions, and that we could have — given other internal (i.e., genetic, developmental) and external (i.e., environmental, cultural) circumstances — decided otherwise in any given instance. That’s what Dennett called a type of free will that is “worth having,” and I consider it good enough for this particular non-dualist, non-mystically inclined human being.

Whereas Pigliucci likes this explanation, Harris, in Free Will, accuses Dennett of "changing the subject." Regardless of this disagreement (in which, for the record, I side with Harris at the moment), Pigliucci does a nice job of tearing down dualist notions of free will and summarizing reservations many people tend to have when they are forced to consider that they may not have control over their actions in ways they previously may have assumed.

Kerri Smith. Neuroscience vs philosophy: Taking aim at free will. Nature News.
This is a traditional news piece over at Nature News that describes the different treatments free will receives from scientists and philosophers. Most neuroscientists, the article states, are content to attack dualist notions of free will without considering the more robust philosophical debate that surrounds the issue. Philosophers, however, must explain how freedom to choose otherwise might exist in a causal physical system such as the one in which we, and our brains, exist. One of the large issues has been a lack of consensus regarding a working definition of free will from which further research and rationalization can proceed. For any of you interested in Libet's research, as well as recent studies that confirm and build upon knowledge of unconscious decision-making, the article cites and summarizes a few references. I've only read summaries myself, and I'm not sure the full text is widely accessible.

Björn Brembs. Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society B: Biological Sciences.
Of all the pieces included in this list, this is the one I read longest ago.  I have to go back through my highlighted PDF to refresh my memory, but Brembs, in searching for a different scientific understanding of free will, rejects both the dualist and determinist approaches.  Instead, he conjures quantum indeterminacy:

That said, it is an all too common misconception that the failure of dualism as a valid hypothesis automatically entails that brains are deterministic and all our actions are direct consequences of gene–environment interactions, maybe with some random stochasticity added in here and there for good measure [2]. It is tempting to speculate that most, if not all, scholars declaring free will an illusion share this concept. However, our world is not deterministic, not even the macroscopic world. Quantum mechanics provides objective chance as a trace element of reality. In a very clear description of how keenly aware physicists are that Heisenberg's uncertainty principle indeed describes a property of our world rather than a failure of scientists to accurately measure it, Stephen Hawking has postulated that black holes emit the radiation named after him [11], a phenomenon based on the well-known formation of virtual particle–antiparticle pairs in the vacuum of space. The process thought to underlie Hawking radiation has recently been observed in a laboratory analogue of the event horizon [12,13]. On the ‘mesoscopic’ scale, fullerenes have famously shown interference in a double-slit experiment [14]. Quantum effects have repeatedly been observed directly on the nano-scale [15,16], and superconductivity (e.g. [17]) or Bose–Einstein condensates (e.g. [18]) are well-known phenomena. Quantum events such as radioactive decay or uncertainty in the photoelectric effect are used to create random-number generators for cryptography that cannot be broken into. Thus, quantum effects are being observed also on the macroscopic scale. Therefore, determinism can be rejected with at least as much empirical evidence and intellectual rigor as the metaphysical account of free will. ‘The universe has an irreducibly random character. If it is a clockwork, its cogs, springs, and levers are not Swiss-made; they do not follow a predetermined path. Physical indeterminism rules in the world of the very small as well as in the world of the very large’ [9].

Brembs studies invertebrates, and in this paper he is most concerned with concocting working models of behavioral variability. Quantum mechanics is often used to obscure dishonest claims, to shield them behind the shroud of mystery the indeterminate world underlying our existence provides, but Brembs doesn't seem to be invoking it in that way. Citing some interesting studies with fruit flies and leeches, he illustrates cases of seemingly spontaneous decision-making and behavioral variability in invertebrates exposed to controlled, constant stimuli. It could be said, of course, that each action affects subsequent actions deterministically regardless of the constancy of the stimulus, but that question is already beyond my ken for this post.

I highly recommend Brembs's paper.

Eddy Nahmias. Is Neuroscience the Death of Free Will? The New York Times.
This article doesn't impress me much, as Nahmias seems more concerned with rhetorical gymnastics geared toward discussing an alternative definition of free will than he is with dealing honestly with the gruesome holes neuroscience is poking in the formulation most people probably consider when the term is used: that we are autonomous agents able to choose between two or more outcomes and that, given the chance, we could choose otherwise.  (In other words, most people would probably concede that our actions are in some way affected by the physical characteristics of our brains, but unprompted, my guess is that most of those people would also likely assert that they are able to control the trajectory of resultant actions once conscious awareness sets in.)

Nahmias instead posits the following definition, which sounds a bit like Dennett's:

These discoveries about how our brains work can also explain how free will works rather than explaining it away. But first, we need to define free will in a more reasonable and useful way. Many philosophers, including me, understand free will as a set of capacities for imagining future courses of action, deliberating about one’s reasons for choosing them, planning one’s actions in light of this deliberation and controlling actions in the face of competing desires. We act of our own free will to the extent that we have the opportunity to exercise these capacities, without unreasonable external or internal pressure. We are responsible for our actions roughly to the extent that we possess these capacities and we have opportunities to exercise them.

But what Nahmias seems to have described is consciousness, not free will. For him, free will represents the opportunity to perceive and experience the various facets of consciousness, including the prepackaged illusions with which it shipped, free from coercion.  To me, it sounds like we're now talking about something else altogether. But, read it for reference, if you'd like.

SHOWDOWN: Pigliucci v. Coyne
The following three articles represent a public discussion (argument) between Massimo Pigliucci and Jerry Coyne, in which Pigliucci takes a compatibilist tack while Coyne defends from the determinist's corner.  (Actually, Coyne started it with an op-ed for USA Today.)  Coyne tends to rub some people the wrong way, even those who might otherwise agree with him, as he's often quick on the draw and a bit fervent.  Still, I think any notion of free will must describe how that will circumvents or changes the outcome of physical events, for it seems any will beholden to physical laws as we understand them would exist within the broth of causal relationships that surround it, however obscured by complexity those relationships might be.  At any rate, I'll preempt the list of articles with the observations of a commenter on Jerry Coyne's second piece, Ron Murphy:

Given that there is no evidence yet of anything that might be considered non-physical then the null hypothesis is that everything does follow physical laws. We are looking for exceptions. On this basis the dualist notion of free-will is the alternative hypothesis, and that it does not exist is the null hypothesis.

That free-willies think that theirs should be the null hypothesis is based only on their ‘feeling’, the historical and personal notion that we have free-will. But ‘feeling’ that something is the case doesn’t cut it as sufficient evidence, or reason, to think that it should form the basis of the null hypothesis.

Whether it is traditional dualist free-will (or soul), or even this other form of free-will that is supposed to be non-dualist and yet has no concrete explanation, they are both alternative hypotheses awaiting even a good definition let alone the possibility of falsification.

The illusory nature of free-will is simply the common sense view derived from all we know about the universe so far. That some of us don’t like it, and even that all of us appear to act as if we have free-will, is no support for that alternative hypothesis, whichever way it is framed.

So now that I've attempted to prime you in favor of Coyne—oh, how dastardly of me—the respective parries and thrusts:

 

My Dubious and Tenuous Conclusions (*grain of salt not included)
As I said, I'm siding with the determinists for the time being, with no less wonder than I've ever had in this weak heart, with no less awe in this shackled mind. I will caution those who may be tempted to view a deterministic world as one in which actions mean nothing to differentiate determinism from fatalism (Harris makes this important distinction in Free Will). Were we all to stop acting, the world would be an unrecognizable place. A lack of free will, in the magical terms it has most often been described, is no cause for despair. After all, you have no choice but to live the illusion. And should free will one day be unequivocally proven to be such an illusion, you would not find yourself in a different plight than that of all prior generations. You would simply be better acquainted with your nature.

Personally, I find the concept "freeing" in a certain sense: my actions, good, bad, or indifferent, occur due to factors beyond my control, and in recognizing this I can "choose" to bend my "will" toward changing those actions and characteristics I don't like. I can show more compassion to others and deal better with their faults and shortcomings, knowing that I too am in the same boat. The instinct to tune out and give up does not, to me, hold much appeal. I feel, for whatever reason, that a path to self-improvement is more evident, which is not any sort of proof; it's just the manner in which I've come to view the matter. Some will view it otherwise.  But as you've no doubt noticed, our language is currently unequipped to deal with the implications of such a shift in thinking: I was unable to write this paragraph without the insinuation of agency and choice—a fact that may, admittedly, say more about my prowess as a writer than anything else. I have slipped, as I often do, into incoherence.

There is simply no escaping the storm and, as always, much more reading to be done.

Taking the Driver out of Driving

I was tempted to place the title of this thing in the imperative, but I didn't want to be pushy.  More than that, doing so would serve mostly as a cheap provocation to arouse the ire of a friend of mine who contends that Americans will, indefinitely, refuse to cede control of their vehicles to computers, sensors, and robots.  Something in the American Spirit, he says, will successfully forestall the widespread adoption of technology that would drastically reduce loss of both life and wealth.  (I know car buffs will lament loudly, and you must know, for the sake of openness, that I'm disinclined to give that reality priority over more important considerations.  Sorry, car buff friends.)

I'm not so sure.  Besides supporting automated driving and all it might one day offer—really, continuing to allow humans to pilot automobiles would be insane in a society with a viable alternative—I'm inclined to think a driverless society is a foregone conclusion, whether by law or choice, though I hope the latter prevails.  Provided cheap, efficient, and reliable technology, I see no logical reason (though I can think of a few illogical ones) that Americans as drivers would forcefully oppose a transition away from self-piloted vehicles.  (My argument here, it should be said, assumes a technology that meets those conditions.)  The upside is simply too great;  just as many folks at first rejected the automobile, eventually the advantages it provided made previous technologies obsolete.  The same, I think, will be true for automated cars.

Wired has an interesting piece regarding the myriad legal questions we will need to grapple with, and I have no doubt that there will be a number of growing pains as we struggle to adapt to the shift.  But safety and economic considerations will, after a time, decide the matter.  One day, the excuses will simply run out.

Social Competition: Google+, Facebook, and Whoever Else Wants to Play

Competition can be a very good thing. Google+, after a little initial hiccup, rolled out a highly functional mobile app (Android's was more functional than the iPhone's, from what I hear) with resharing, user tagging, and a nice interface. Facebook has now responded, after a number of incremental updates and albeit slowly, with v1.7 for Android, which adds tagging, a photo-swipe interface, and a slightly more functional design; tagging, in particular, should have been available a long time ago. Maybe Facebook would have done this eventually anyway, but their app has been notoriously mediocre for quite a while now, until today.

The more users Google or any other social network can siphon from Facebook, the better.  If Facebook continues to feel meaningful pressure from its competitors, Zuckerberg's crew will continue to add features and improve their own service.

Now if Google would only integrate Reader with Google+, I'd be ecstatic.

Mammoth Reads: The Anthropo-Pedagogio-Quantumnal Edition

The Mammoth Reads series is to be a (hopefully) regular to semi-regular shortlist of (hopefully) interesting things I've read recently.  (Hopefully) you'll click a link or two.

Most of these lists will not have long, ridiculous, impossible-to-read titles like this one, but I figured I would kick this series off in irritating fashion.

You Are a Poor Scientist, Dr. Venkman

Prof. Andrew Gelman counters a few claims from a Weekly Standard editorial by emeritus professor David Rubinstein, formerly of the University of Illinois at Chicago, in which Rubinstein claims that professors are paid too much for their "cushy" jobs.  Rubinstein is of the opinion that the current system—namely the tenure track—encourages laziness.  Gelman makes some interesting observations about the function of good salaries and benefits in luring top-notch professors, and seems not to buy Rubinstein's impression that these are necessarily bad things.  Gelman also suggests that Rubinstein simply might be a bit lazier than most college profs, that he might be erroneously using his own lack of zest for the classroom as a metric by which to measure his peers.  (In all fairness to Rubinstein, he does seem to lob some valid criticisms regarding professorship in his original piece, which Gelman also links to.)

Riding the Collapsing Wave

We all know by now that the quantum world doesn't make any kind of intuitive sense.  A photon goes along its merry way existing in a state of wave-particle duality, and the minute someone tries to measure it, the wave state collapses.  (There's a joke about my Saturday nights in there somewhere, but I'll let someone else find it.) Well, the BBC has a nice human-readable explanation of a study that adds a new(ish) twist to the double-slit experiment.  Traditionally in this experiment, photons are monitored individually as they pass through the slits, a form of "strong observation" that inevitably weakens the interference pattern and causes the photons to act more like particles.  The new twist is a successful use of "weak observation" that preserves the interference pattern, allowing the observer to infer photons' paths by averaging the activity of a large number of them rather than attempting to monitor each individual photon.

Anyway, the article does a much better job than I do of sketching out the basics.  I'm sure a scientist, or even a scientifically literate layman, would flog me for the rubbishy explanation in the  preceding paragraph.

The Age of Man

Are we living in the Anthropocene Epoch? Geologists think so, and based on their reasoning that we humans have left some permanent chemical and radioactive traces in our layer of the Earth, it's difficult to argue with them.

From the article:

Anthropocene, a term conceived in 2002 by Nobel laureate Paul Crutzen, means "the Age of Man", recognising our species' ascent to a geophysical force on a par with Earth-shattering asteroids and planet-cloaking volcanoes. Geologists predict that our geological footprint will be visible, for example, in radioactive material from the atomic bomb tests, plastic pollution, increased carbon dioxide levels and human-induced mass extinction.

Now that's a legacy to be proud of: planet killers.

(Disclosure/Tangent: I agree—based on my own uncanny and unchallengeable horse sense—with Bill Gates' assessment that small-scale green tech will not be enough to curb climate change; we need a paradigm shift in energy production.  Being "green" is nice, but oftentimes it's easy to fall into the culture of buzzwords.  Vinnie handles a few green pitfalls over at "Rifraff and Bugaboos.")

Vanishing Act

Why are researchers (especially medical researchers) unable, over time, to replicate experiments that initially yielded positive results?

Researcher and publication bias are obvious reasons that come to mind. Dr. Steven Novella, author of NeurologicaBlog, takes us through the Decline Effect as well as a few claims from a Nature News article that conflate quantum mechanics with the large scale. Novella is pretty reasonable about it, though, and acknowledges that the article does correctly identify the Decline Effect as (likely) a research artifact.

Remember, science is messy.  It is not dogma and is always subject to revision.

404 Error — This Qubit Cannot Be Found

Quantum computing scares and excites me.  If it ever becomes viable, all of our current encryption systems—as I understand it, every last one of them—will become obsolete.  Whereas current bits can exist as a 0 or 1, qubits can exist in both states at once and, thus, can process computations at mind-numbing rates.   We're in an either/or world on the brink of becoming a both/and one.  Because of this, quantum computing may become one of the most useful and powerful tools humans have invented.  We may not understand the solutions it yields at first, but the potential for discovery of all kinds will swell suddenly.

The problem, however, is that quantum computers are very unstable and can only exist on a small scale.  Current quantum computers rely on entanglement in order to work their magic, and the entangled state is an exceedingly fragile one:  Even minimal interference from outside energy sources can break the system.

But what if a quantum computer didn't need to rely on entanglement in order to work? What if it actually relied on (or at least accepted) a  certain amount of chaos while operating?

From the article:

In a typical optical experiment, the pure qubits might consist of horizontally polarized photons representing 1 and vertically polarized photons representing 0. Physicists can entangle a stream of such pure qubits by passing them through a processing gate such as a crystal that alters the polarization of the light, then read off the state of the qubits as they exit. In the real world, unfortunately, qubits rarely stay pure. They are far more likely to become messy, or 'mixed' — the equivalent of unpolarized photons. The conventional wisdom is that mixed qubits are useless for computation because they cannot be entangled, and any measurement of a mixed qubit will yield a random result, providing little or no useful information.

But Knill and Laflamme pondered what would happen if a mixed qubit was sent through an entangling gate with a pure qubit. The two could not become entangled but, the physicists argued, their interaction might be enough to carry out a quantum computation, with the result read from the pure qubit. If it worked, experimenters could get away with using just one tightly controlled qubit, and letting the others be battered by environmental noise and disorder. [...]

A debate continues about the efficacy of disorder in quantum computing systems, and I suppose we'll see just how much this technology evolves in the coming years.

Of course, I'm just another moron with a blog who can't be trusted to switch the laundry, but this one got me real excited.

Adopt, Adapt, and Improve: Two Free and Easy Ways to Boost Efficiency and Reduce Repetitive Stress

That title sounds like something I hoped I'd never write. The first part is admittedly stolen from the Round Table via this Monty Python sketch.

Image Courtesy of DevilCrayon under a Creative Commons Attribution-Noncommercial 3.0 Unported License.

If you're an office worker like me, you probably spend quite a bit of time clicking a mouse and pounding on a keyboard.  The time you spend doing this also might lead to some manner of repetitive stress injury.  In my case, my right index finger is nearly perpetually swollen, stiff, and in pain because I learn my lessons slowly and rail in the face of common sense when it comes to my own well-being.

There are a number of ways to combat your office-wrought deterioration.  You could drop money on ergonomic products like gel pads to support your wrist or braces designed to prevent the common motions that bring on Carpal Tunnel Syndrome, and while there is some debate as to how efficacious many of these interventions are, they'll probably bring you some physical respite.   Your other option would be to take the less expensive route and attempt to reduce the number of mouse clicks and keystrokes you perform each day.   Here are two ways you could do that.

AutoHotkey

AutoHotkey is a tool known — I would imagine — to most computer nerds, and while my brother, a Computer Science major, recommended I learn to use it, I took quite a while to start digging into it.  Before I go any further, let me stress that I am merely a computer/coding/technology enthusiast.  I learn what I can and pick up things here or there, and for personal purposes, I'm relatively proficient, but as a handful of my friends and my aforementioned brother are either professionally or scholastically involved in the computer fields, I should extend the caveat that you take my tech advice as gospel at your own peril.  Indeed, I have a pronounced case of cybernetic penis envy.  Read that as you will.

Anyhow, AutoHotkey essentially allows you to write scripts, macros, shortcuts, etc. once you've downloaded the program.  The nice advantage to AutoHotkey is its simplicity.  Even a dullard like me can manage to streamline a few processes and cut down the daily digital (think fingers) mileage.  For instance, I've assigned shortcuts that open up the programs I use most often.  In the following example, "#" represents the WIN key, and "w" represents, well, the letter "w":

#w::
Run WINWORD.EXE
return

This bleeding simple code launches Microsoft Word when you press WIN+w.  Let's say you're running a program like Firefox that doesn't have such an obvious Windows call:

#f::
Run C:\Program Files\Mozilla Firefox\firefox.exe
return

So if you press WIN+f, Firefox will launch without your having to go to your Desktop and find the icon or even start the program from your Quick Launch bar.  The problem with the way I've written the Firefox shortcut is that I haven't made it incredibly portable.  In other words, if I wanted to convert the AHK (AutoHotkey format) file into an EXE and run it on another computer, it might not work depending on the local configuration or Windows version running on that machine.  There are ways to make it more portable, like using the built-in variable %A_ProgramFiles% in place of C:\Program Files (this is paraphrased from the Quick-start Tutorial on the website), but I have no imminent plans to do so.  You'll have to check AutoHotkey's documentation.
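
For the curious, here's a rough sketch of what that slightly more portable version might look like; it's only a sketch, and the path still assumes a default Firefox install on the target machine:

; WIN+f, slightly more portable: A_ProgramFiles resolves to the local Program Files folder
#f::
Run %A_ProgramFiles%\Mozilla Firefox\firefox.exe
return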

Beyond launching programs, I have to copy and paste a number of form letters for a variety of reasons and send them to people via email.  I've gotten pretty quick at navigating from file to file, but every time I go on a rampage, my right hand begins complaining and cramping up something fierce.  Why not write a script to type everything out for me?  That way I simply create a new email and press the shortcut.  In the following example, "^" stands for Ctrl, "!" stands for Alt, and {Enter} sends a Return/Enter keystroke:

^!h::
Send Dear Widgets Inc.,{Enter}{Enter}I am extremely displeased with the quality of your widgets. I demand a full refund for the widgets I have purchased in bulk, and I plan to take my business to Customized Widget Solutions.{Enter}{Enter}Sincerely,{Enter}{Enter}Lord Knickerswitch
return

There is probably an easier way to do this, and please, if anyone who actually knows what they're doing wants to posit a few suggestions, I'd love to hear them. If you've downloaded AutoHotkey already, write this into a Notepad file, save as an AHK file, and run it. Then open a new Notepad file, place your cursor in the body and hit Ctrl+Alt+h. See what happens.  If you performed all actions correctly, you should have seen this letter typed out before your very eyes after pressing just three keys.  Your joints will thank you.
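
One possibly easier route, offered here as a sketch rather than a recommendation (the "wdgt" abbreviation is just something I made up), is an AutoHotkey hotstring, which fires when you type an abbreviation rather than holding down a key combination:

; Typing "wdgt" followed by a space or other ending character expands into the full letter
::wdgt::
Send Dear Widgets Inc.,{Enter}{Enter}I am extremely displeased with the quality of your widgets. I demand a full refund for the widgets I have purchased in bulk, and I plan to take my business to Customized Widget Solutions.{Enter}{Enter}Sincerely,{Enter}{Enter}Lord Knickerswitch
return

Same letter, fewer keys to hold down.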

I stress again, these are very elementary examples of two things I've done with a basic and exceedingly simple knowledge of the tool.  The AutoHotkey documentation provides a very complete reference of the variables and other functions available.  They are numerous, and hopefully, I'll have some more layman updates for which my brother and tech-pro friends can chastise me.

NiftyWindows

The second bit is more of an endorsement and less of an example.  Download the NiftyWindows EXE from Enovatic-Solutions.

This program also uses AutoHotkey, so to use NiftyWindows, you'll need to download AutoHotkey as well.  Reading through the features, you'll notice that NiftyWindows provides a set of mouse and keyboard shortcuts that help in dealing with the glut of simultaneous windows you're liable to open throughout a full working day.  These shortcuts allow the user to quickly resize windows or make them transparent, stick windows on top so that they stay in view as you click through others, and minimize, close, or roll up all the annoying work and non-work related windows littering your monitor.
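
To give a flavor of the kind of shortcut NiftyWindows bundles, here's a minimal hand-rolled AutoHotkey sketch of my own (it's not taken from NiftyWindows, and the WIN+t binding is arbitrary) that toggles "always on top" for whatever window is active:

; WIN+t toggles the always-on-top state of the currently active window ("A")
#t::
WinSet, AlwaysOnTop, Toggle, A
return

NiftyWindows packages a whole set of conveniences like this so you don't have to write or maintain the scripts yourself.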

In the twenty minutes or so it takes to program the NiftyWindows shortcuts into your muscle memory and grow comfortable with using the various interactions, you'll have saved yourself future time, hassle, keystrokes, and mouseclicks.  As far as I can tell from the website, this program works with Windows XP and previous versions.  I haven't tested it on Windows Vista or Windows 7, so I cannot vouch for it in those environments.  However, I do run Windows 7 at home, and I wouldn't be surprised if NiftyWindows' benefits are much less pronounced when used with 7, which is a much better and more convenient operating system than XP, as it should be after all this time.

NiftyWindows is also an open-source project under the GNU General Public License, so you're free to modify it to suit your needs if you're able and willing.

To get any real benefit from either of these solutions, you'll need to dig through the documentation yourself, and if there are AutoHotkey junkies or efficiency gurus out there with suggestions that are easy to implement for idiots like me, please post comments.  I'd also like to hear about anything that helps refine or correct the information above.

Children of the Office, I Implore You


Image Attribution: http://www.flickr.com/photos/paladin27/ / CC BY-NC 2.0

I am forced to sit here and suffer through another long afternoon of pretending to work largely because I have become more efficient as a worker.  I wouldn't even go as far as to say that I've automated everything because, in truth, I haven't automated anything.  I've simply succeeded in cutting down the number of steps it takes to complete certain tasks, eliminated needless components of the job, and don't have to amass a library of printed pages to do one simple thing on the computer.  Combined with a relatively high level of aptitude for quickly executing brain-wasting computer work, my total output exceeds that of a normal worker  by High Noon.

Mind you, I'm running my own internal statistics, and it is a rare occasion indeed that such numbers should be trusted or taken at face value, but I assure you, any discrepancy between my findings and reality is not due to any nefarious deed on my part.  I'm fairly confident that any independent and unbiased panel of experts would come to similar conclusions, perhaps pushing my time-efficiency estimate back by a maximum of two hours.

The difference is largely generational.  Walk in the front door of my office, and you will find a large wall of file cabinets.  What they contain, I do not know.  I'll give them the benefit of the doubt that some of the documents contained therein are kept necessarily in print form.  There are legal entanglements I don't anticipate experiencing that might loom over those with a different job description, but regardless of this fact, I'd bet a quick combing of the archives would effectively reduce the lot by at least half.  There was a day when that sort of pack rat mentality probably served the office worker well.  A reliable filing system was essential to maintaining the stability of an office's everyday operations, and in many ways, it still is.  But the filing of today does not require paper.  There is virtually nothing that requires the aid of a printer or even a fax machine (why we still have one of those, I can't imagine) because all paper does, except in very rare instances, is slow you down.

This is all obvious stuff, chapter headings in the Child's Office Primer, and non-adherence to a paperless office can more often than not be attributed to a lack of desire or effort to train the brain to utilize the tools available to it.  It's adherence to the Old School, the paper trail, which is no more than a crutch these days.

I don't mean to engage in a Young vs. Old argument here, though the divide does tend to fall slightly along those lines.  Hell, one of my high school English teachers was a coder and programmer at seventy-six, an age at which one surely has plenty of excuses to resist adopting new ways of doing things.  If he had spurned everything post-dating the electric typewriter, he could have been forgiven for beholding the swelling tides with the scowl of a seasoned curmudgeon.  To this day (he, sadly, passed away a few years ago), his final knowledge of computing probably still exceeds mine, so what I'm on about here is not complicated.

Administrative tasks have been simplified to the extent that it's almost silly to rent office space anymore.  There is nothing in my job description that I couldn't do from the comfort of my own home without ever stopping to pull up my pants.  Same goes for everyone else, and since I am in a vindictive mood today, I'll just go ahead and shift the blame for my having to dance through this intricate pantomime of artificial busyness to everyone else.

If we all worked together, goddammit, we'd spend less time at our jobs, get more done both professionally and personally, and save a few trees while we're at it.

Who's with me?

Correction (10/21/2009): Due to a silly grammatical oversight, the title originally read "Children of the Office, I Implore Thee."  Thanks to commenter vet's bringing the error to my attention, "thee" has been changed to "you."

Human 2.0: Vague Principles of Destructive Evolution


Image Attribution: http://www.flickr.com/photos/eurleif/ / CC BY-SA 2.0

There are too many stimuli and no way to unhook from the Delivery System.  Every thirty seconds or so, TweetDeck chirps and notifies me that some Twitter entity or another has posted something to the web.  Facebook is running and constantly updating itself with video, status updates, and one friend who is rebuking me for becoming part of the background noise.  He doesn't know that I've downloaded the Twitter plug-in that updates my Facebook status whenever I write a tweet, nor does he know that Brief, my Firefox RSS reader, keeps flashing feed updates at me for no good reason.  If I am constantly disseminating information, it is, perhaps, only as a form of purgation lest I suffer neuronal overload and slip into a vegetative state.

I can't help it.  Neither can most of us who've fallen victim.  That we will suffer enlarged prostates and blood clots in the leg brought on by our increasingly sedentary lives is of no concern.  The needle must stay in the vein at all times.

If you asked me to trace this hideous addiction, to run all the algorithms and interpolations, I probably wouldn't be able to find the seed.  I remember the old DOS games like Castle and Mosaic that I used to play as a kid, and I have a faint recollection of being comfortable with nothing more than a command line in front of me, but that was a long time ago, and all the years of wandering around in the GUI have effectively dulled those familiarities. Even if I did have a better memory of the spark that lit this obsession, I can't be sure anything worthwhile would come of the knowledge.  The age of Web 2.0 has proven so immersive that it has inevitably catapulted us into the age of Human 2.0.  Take a lesson from Lot's wife, and don't look back.

Our transition into the next world is going to be rough.  The transcendence of the next wave of technologies will be hindered by shifting climate systems, political opposition, and religious fervor, and while that might only sound sane to someone who believes it, there is little doubt it will prove true.  Success is not guaranteed.  In truth, the next one hundred years could — and depending upon whom you ask, probably will — end badly for us and with the heinous, collective whimper of wasted opportunities.  While the Green Movement is busy plotting our next generation of energy technologies, Washington and the rest of the world are moving slowly to curb emissions and create initiatives to house our future infrastructure, opting instead to plaster their cars with the right bumper stickers and their websites with the right banner ads.  But the religious zealots and climate change naysayers will win because time is on their side.  We have a couple of decades (optimistically) to stop this runaway train, and nothing short of total commitment will do the trick.

And now that I've been using Twitter regularly for a few months, and Facebook for years, I know what I've gained from the immediacy of information.  In some cases, it has been very valuable.  I grab web design tutorials and typography blogs from users who post them to Twitter, and I've got enough stored up to last me a month.  I have absorbed a tremendous amount of knowledge in a very short span of time thanks to the informational paradigms under which we operate.  I get my fun fast and the news even faster, and there is always something to read, so much, in fact, that it is difficult to concentrate on any one thing for an extended period of time.  Certainly, our attention spans have suffered en masse and to a great degree.  Information will be our downfall just as it became our apex.

Evolution has a pretty good track record for creating efficient, sustainable organisms, but hidden in that long history, of course, are all the failures and extinctions, fossilized remains of beasts that couldn't keep pace with the paradigm shifts of our planet.  When humans finally evolved, when that ultra-logical tweak entered the primate brain, the game changed entirely.  All of a sudden, brute strength didn't hold the same currency in some circles, and the increased efficiency of abstract thought put Homo sapiens at the top of the heap, maybe for good.

That's not to say that animals don't possess similar abilities in some instances.  I've long thought that we as humans have been unduly dismissive of the intelligence of our fellow denizens, and yes, I'll even go as far as to at least partially agree with the theory Howard Bloom espouses in Global Brain: The Evolution of the Mass Mind from the Big Bang to the 21st Century that organisms exhibit a certain level of altruism.  I think this is especially true in more advanced mammals, but as Bloom argues, one can perhaps find echoes of this inherent empathy in single-celled organisms as well. [We won't get into this now.]  While we are the most advanced species on the planet and do possess certain brain powers unparalleled by other animals, the Biblical idea that grants us dominion over other creatures is both narrow-minded and selfish, not to mention fatally short-sighted.

But maybe we've gotten too smart for our own good.  Maybe we've overloaded our own brains with our technology, and yes, maybe we will eventually prove to be one of nature's mistakes — an overzealous attempt at a super-organism that went badly awry, that outgrew the planet's ability to sustain it.  Humans are nature's most astonishingly efficient virus.  We are resistant as a whole to most of its control measures save for massive impact and our own forward progress, and after all, as our own numbers increase, so does the imminence of our demise.  The first sign of species collapse, barring disease, in any given ecosystem is usually overpopulation, and we might reach that point soon enough.

Until then, as the constant flow of information continues to clog our synapses, corporations and governments will continue to operate more or less nefariously, confident that their dealings will be sufficiently drowned out by the din.  They'll be right, of course, and they'll remain in charge until there is a mass extinction or another bottleneck in the human race, until the cards are reshuffled, if you'll pardon the phrase, and we'll keep running to the computer every time it chirps, marveling all the while with masturbatory ecstasy at how far our technology has come since the bone knife.

If we're lucky, maybe we'll even eventually learn to use our advancements constructively and separate the notions of progress and excess from one another.  Then we can remember Human 2.0 as an upgrade instead of a fatal error.

Lawrence Lessig and the Future of the Internet

When George Orwell's famous protagonist from 1984, Winston Smith, begins to read The Theory and Practice of Oligarchical Collectivism, supposedly penned by the revolutionary Emmanuel Goldstein, Orwell writes that the best books are the ones that tell you what you already know. Granted, Winston arrives at this revelation while hiding from the eyes [and telescreens] of the oppressive government of Oceania, but despite the obvious political differences between 1984 and life in our Information Age, Lawrence Lessig's Code: Version 2.0 (also known as Codev2) often elicits the same sensation and serves, in no small part, to vocalize many of our nagging intuitions and fears about the internet.


When the World Wide Web first popped up in the late 1980s and early 1990s, there was a feeling that this was the new Wild West, that cyberspace would be unregulated and anonymous for the rest of its days. I was a child then, and my association with computers was limited to what I could find on the 5.25-inch floppy disks in my father's office, mostly DOS games like Castle or Mosaic. It wasn't until the rise of Napster, when I was somewhere in the bowels of middle school, that the Internet became a thing of interest and potential.

Lessig strikes a stark contrast between the ferocious, libertarian genesis of cyberspace and our increasingly regulated internet. In much more eloquent and nuanced terms than I can muster here, he investigates various facets of our online lives, our anonymity and privacy as well as the ways in which what happens in cyberspace relates to our lives in real space — or at least these are part of his overall thrust.  [1]

The real purpose of Lessig's book is to flesh out how we may cope with property and copyright laws in the near future and the new ways in which we will need to define our Constitutional principles where precedents simply do not exist. He spends a great deal of time on the evolution of copyright infringement in the music business and the repeated attempts by the Recording Industry Association of America (RIAA) and other organizations to nip piracy and unauthorized copying in the bud. This battle dates back to DAT tapes and VHS, but those from my generation will undoubtedly remember the day that Napster went down, a victim of the industry Goliath that finally won its suit against the filesharing network. Lessig's arguments, in part, condemn the old cops 'n' robbers paradigm, holding that record companies failed to change their models quickly enough and to recognize the potential of the Internet to reach wide audiences. One gets the sense as well that Lessig thinks the old Draconian method of dealing with piracy will soon prove unsustainable and give way to an era of freer copyright and access to various works online.

His own involvement in Creative Commons — an organization that seeks to revamp online copyright by allowing authors to define the restrictions and freedoms governing use of their own work — might be a hint as to his own vision of copyright law's future as the world increasingly moves online.

The value of Code: Version 2.0, and incidentally, the part of Lessig's book that really dredges up Orwell's sentiment about the best books, comes in his synthesis of the natural system of checks that governs our interactions, what he calls "modes of control" or "constraints". Neither of these terms is necessarily meant negatively. Lessig takes pains to describe the productive and destructive ways in which all of these constraints might conceivably play their parts. The modes of control he describes are architecture (code), norms (taboos), the market, and the law, and each one affects our association with the Internet in different ways.

To use but one example from the book, Lessig describes the potential use of something akin to a universal identity on the Internet. This would be something different from an IP address, which simply assigns a series of numbers to the computer you are using to access the web. The suggestion Lessig envisions would resemble a sort of electronic government-issued driver's license with which websites that require certain information — for instance, age-restricted sites that sell pornography or tobacco products — would be assured that one fulfills the age requirement without that person being required to release any other personal information to the site.

Naturally, this system brings to mind at least a few questions, not the least of which concern the logistics and security involved as well as privacy, though I have boiled down the discussion to its bare bones. These concerns are not lost on Lessig, and in each solution he posits or scenario to which he refers, he presents the pros and cons, the potential ramifications, and the boons in a very digestible way. One need not be a computer geek to appreciate or understand the concepts he introduces, and those who are well versed in cyberspace or computer science or law will benefit from the exhaustive reference list Lessig provides.

It would be wrong of me to say, however, that Code: Version 2.0 only tells us what we already know. As Lessig points out toward the beginning of the book, there are those of us who simply use the Internet for things like shopping, banking, and email, and then there are those of us (an ever-increasing number) who spend time in cyberspace, who possess what can only be considered lives online. Those from the latter group might have thought a bit more deeply about what it means to be a citizen of the Internet, but for everyone else, Code: Version 2.0 might present some much-needed food for thought. I can say, even speaking as a computer hobbyist who is relatively deeply involved with the Internet (professionals and legitimate freelancers notwithstanding), that Lessig's book did not fail to illuminate many issues of which I either had only a latent understanding or, sometimes, none at all.

In this way, I suppose Lessig's observations don't adhere exclusively to Winston Smith's formula, but I still maintain that much will seem oddly familiar. When he cites those who have criticized his work, Lessig's refutations and clarifications sound almost like echoes from past conversations, and though much of the book is undoubtedly tinged with his own views and opinions, most of his reasoning is sound, and his knack for making seemingly complicated subjects easy to understand makes plain a relatively even-handed approach to tackling what might be some of the defining questions of the next ten years.

The internet is a still-evolving phenomenon, and though it increasingly encompasses larger and larger portions of our lives, it is still ill understood, both in the possibilities it creates and the concerns it aggravates. With technology on its exponential tear into the future, we are all inescapably tied to its evolution and must take an active part in deciding how we are going to shape the Internet, and thus, parts of our lives. Lessig would argue that we do have the power to effect change and make these decisions for ourselves, but sometimes, the avenues to such change or modification of the architecture are not apparent. Even if you might not agree with all or any of Lessig's assertions about the globalization of the Web or the role of government in deciding its nature, his book serves as an invaluable springboard to meaningful consideration of the issues that confront us now and will continue to do so in the immediate and more distant future.

To put it plainly, Code: Version 2.0 should be required reading for anybody with a stake in the future of the Internet — and that's just about everybody.


[1] Lessig makes reference to a 1993 article by Julian Dibbell regarding early internet communities called MUDs (multi-user dimensions). These are word- and code-based environments in which players create their own relationships and surroundings using either the programming code of a given MUD or their own words via object and character descriptions. The article describes a scene from a MUD called LambdaMOO in which a player "raped" a group of people. Of course, the odd question of what might constitute rape in cyberspace brought to light many interesting and twisted questions about our relationship to this new space. It's a thought-provoking (if mildly disturbing) read.


Lawrence Lessig's Code: Version 2.0 is available as a free PDF download.