Uber v. Taxis, and How Uber Is Like the Sun

The ride service Uber has been in some hot water lately with both city governments and taxi drivers, primarily over licensing issues. Cab companies complain that Uber drivers, and the company itself, are not subject to the same requirements as traditional cab companies and should therefore, in many instances, be forced to stop operating. Some cities have sent cease-and-desist letters.

Photo by Kenny Louie, reproduced under a Creative Commons license.

Now, I'm not for the abolition of all licensing requirements, as some folks are. I think it's important that professionals like doctors be licensed by the appropriate boards. But in the case of transportation services, I don't see why there should be such opposition to Uber in particular, except that it has true potential to disrupt the market, and probably in a good way for consumers.

At any rate, I'm not an economist, and I don't know a whole lot about the cab business. But the rumbles that I hear coming from cab drivers resemble a hilarious piece of satire called "A PETITION From the Manufacturers of Candles, Tapers, Lanterns, Sticks, Street Lamps, Snuffers, and Extinguishers, and from Producers of Tallow, Oil, Resin, Alcohol, and Generally of Everything Connected with Lighting", or, in shortened form, "The Candlemakers' Petition". Written by Frédéric Bastiat in the mid-19th century, this farcical petition lampoons pleas for protectionism by way of what is, I think we can all agree, a perfectly reasonable request to the French parliament:

We are suffering from the ruinous competition of a rival who apparently works under conditions so far superior to our own for the production of light that he is flooding the domestic market with it at an incredibly low price; for the moment he appears, our sales cease, all the consumers turn to him, and a branch of French industry whose ramifications are innumerable is all at once reduced to complete stagnation. This rival, which is none other than the sun, is waging war on us so mercilessly we suspect he is being stirred up against us by perfidious Albion (excellent diplomacy nowadays!), particularly because he has for that haughty island a respect that he does not show for us.

We ask you to be so good as to pass a law requiring the closing of all windows, dormers, skylights, inside and outside shutters, curtains, casements, bull's-eyes, deadlights, and blinds — in short, all openings, holes, chinks, and fissures through which the light of the sun is wont to enter houses, to the detriment of the fair industries with which, we are proud to say, we have endowed the country, a country that cannot, without betraying ingratitude, abandon us today to so unequal a combat.

I highly recommend reading the full text, which is short but full of other funny gems that expose the absurdity in many such appeals for protection from competition.

What Is Bullshit?

Spurred by a recent article in Aeon positing a "theory of jerks", I picked up an essay by Harry G. Frankfurt entitled "On Bullshit", which attempts to lay the groundwork for a theory of bullshit, or, in other words, a description of the ways in which bullshit is its own distinct entity and a phenomenon that differs from lying and other forms of misrepresentation.

Using an anecdote from Fania Pascal describing Ludwig Wittgenstein's purported reaction to her telling him she felt "like a dog that's been run over" after having had her tonsils removed, Frankfurt sketches out what Wittgenstein was implying when the famous philosopher replied, seemingly annoyed, "You don't know what a dog that has been run over feels like." What follows is a dissection of what Wittgenstein may have found objectionable about Pascal's statement, assuming he did not identify it as purely colloquial or idiomatic, nor as a lie:

What disgusts him is that Pascal is not even concerned whether her statement is correct. There is every likelihood, of course, that she says what she does only in a somewhat clumsy effort to speak colorfully, or to appear vivacious or good-humored; and no doubt Wittgenstein's reaction—as she construes it—is absurdly intolerant. Be this as it may, it seems clear what that reaction is. He reacts as though he perceives her to be speaking about her feeling thoughtlessly, without conscientious attention to the relevant facts. Her statement is not "wrought with greatest care." She makes it without bothering to take into account at all the question of its accuracy.

The point that troubles Wittgenstein is manifestly not that Pascal has made a mistake in her description of how she feels. Nor is it even that she has made a careless mistake. Her laxity, or her lack of care is not a matter of having permitted an error to slip into her speech on account of some inadvertent or momentarily negligent lapse in attention she was devoting to getting things right. The point is rather that, so far as Wittgenstein can see, Pascal offers a description of a certain state of affairs without genuinely submitting to the constraints which the endeavor to provide an accurate representation of reality imposes. Her fault is not that she fails to get things right, but that she is not even trying.

This is important to Wittgenstein because, whether justifiably or not, he takes what she says seriously, as a statement purporting to give an informative description of the way she feels. He construes her as engaged in an activity to which the distinction between what is true and what is false is crucial, and yet as taking no interest in whether what she says is true or false. It is in this sense that Pascal's statement is unconnected to a concern with truth: she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true. It is just this lack of connection to a concern with truth—this indifference to how things really are—that I regard as the essence of bullshit.

Here we see that bullshit does not simply entail lying, which seems, in retrospect, obvious to me. In fact, in Googling for links to this essay, I ran across a Slate piece from 2005 summarizing this very same anecdote, and which quite aptly described the sensation I had upon finishing "On Bullshit": "Eureka! Frankfurt's definition is one of those not-at-all-obvious insights that become blindingly obvious the moment they are expressed."

Of course bullshit isn't simply lying. The concept of bullshit artistry, a term Frankfurt also mentions in passing, seems to be evidence in favor of making a distinction between the two. Furthermore, bullshit doesn't necessarily have to be false—something that is implied in Frankfurt's interpretation of Wittgenstein's reaction but which is better summarized in Frankfurt's comparison of bullshit to bluffing:

It does seem that bullshitting involves a kind of bluff. It is closer to bluffing, surely, than to telling a lie. But what is implied concerning its nature by the fact that it is more like the former than the latter? Just what is the relevant difference here between a bluff and a lie?

Lying and bluffing are both modes of misrepresentation or deception. Now the concept most central to the distinctive nature of a lie is that of falsity: the liar is essentially someone who deliberately promulgates a falsehood. Bluffing, too, is typically devoted to conveying something false. Unlike plain lying, however, it is more especially a matter not of falsity but of fakery. This is what accounts for its nearness to bullshit. For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit: although it is produced without concern with the truth, it need not be false. The bullshitter is faking things. But this does not mean that he necessarily gets them wrong.

To me, this gets us a bit closer, especially his observation that the quality of the bullshit could conceivably live up to or even exceed the quality of the original product, or the truth. It was simply crafted without a mind toward being genuine. (In the case of a Folex, for example, how could it have been?)

Karl Pilkington reveals his most desired superpower: the ability to detect and publicly ridicule bullshit. From An Idiot Abroad.

This is a fun little essay, although, by his own admission, Frankfurt's case is not definitive. I'm not sure I'll pursue thinking about the word in this context too much more, but it's a fun little exercise nonetheless, considering the preponderance of bullshit.

Indeed, we're practically drowning in it these days, and perhaps the fact that Frankfurt was writing back in 1986 accounts for one limitation that struck me as rather glaring. There was no widely accessible internet in 1986, and the flow of information was, obviously, much slower than it is today. Furthermore, kooky beliefs, surely no strangers to any period of human history, likely did not receive such constant and wide exposure as they do now. Here I am thinking of bullshit like vaccine "skepticism", UFO spotting, and much of the nonsense generated by fanatical adherence to our civic religions (e.g., partisan punditry), or even by the plain, often blameless fact of ignorance. In many cases, people who sincerely believe false information will repackage and redistribute it, sure of its truth. Though it doesn't qualify by Frankfurt's metric, this sort of information would commonly be called bullshit by most people, and that designation does not at all rely on the criterion stipulating the bullshitter's lack of concern with the truth value of their statements. It may be that we suspect some lack of due diligence in purveyors of bad information, but that condition is not necessary to label their output bullshit.

Perhaps I missed a nuance that accounts for this instance of bullshit, or quite possibly, Frankfurt would seek to limit his theory of bullshit such that these utterances would be excluded from it. I do think, however, the usage as described above is extremely common and shouldn't be ignored. That would be bullshit.

Sperm Snacks and Sexual Selection: What Kinky Crickets and Katydids Do

Perhaps by virtue of having adequately signaled my predilections over the past few years, or maybe due to my loved ones having so long endured fevered bouts of irritating, giggly prattle, two people sought to mark the occasion of my 30th birthday by giving me the same gift, a book called Nature's Nether Regions by Menno Schilthuizen. (I favor the latter explanation, being that these people were my girlfriend and my brother.)

Chock-full of extremely fascinating and detailed examples of reproductive machinery—which comprise a breathtaking menagerie of complex, sometimes labyrinthine, members, attachments, and orifices—Schilthuizen's book attempts to sketch out how sexual selection gives rise to the vast array of genitalia seen in nature. As opposed to natural selection, the process by which traits are selected for based on suitability to an organism's environment, sexual selection works when traits increase mating success, not due to simple survival but because those traits are desired by the opposite sex. So while a crest of feathers on a bird, for example, might not confer a survival advantage, the crest may nonetheless look pretty sexy to a female, increasing the chances that the male will be able to mate and thus pass on sexy-crest genes to his offspring.

There are a number of different ways in which sexual selection can work, and Schilthuizen does a good job of illustrating these mechanisms, their limitations, and the controversies surrounding them with requisite humor but without the sort of off-putting winkiness that often drenches articles about animal sex like bad cologne. On the contrary, from a scientifically inclined layperson's perspective, I think he does a fantastic job of balancing serious discussion of evolutionary concepts and florid illustrations of genitalia with the demands of a non-specialist audience. The scientific issues are simplified and made accessible but are dutifully explained, all of which makes for an enjoyable and engrossing read.

Much as with Holy Shit, I am tempted to copy reams of descriptions for you to read here. From sharks to humans to damselflies and spiders, the ways in which penises and vaginas have coevolved, sometimes in competition with one another, are endlessly fascinating. However, the mating habits of two species of insect, a cricket and a katydid, caught my eye as delightfully weird. For the purposes of understanding the following passage, you will need to be familiar with a couple of terms.

First, sperm competition refers to situations in which males devise strategies to give their sperm the upper hand within a female's reproductive tract. In some cases, males scoop out prior mates' sperm before depositing their own; in other cases, they may divert sperm into areas within the female that are less likely to bear that seed unto her eggs. These strategies exist because females evolve, by chance, genitalia that increase their sexual autonomy, or in other words, a female's ability to choose whose sperm fertilizes her eggs. Males, also by chance, "respond" when mutations result in changes to their genitalia or behavior that allow them, via force, subterfuge, or other means, to subvert the strategies of females. Females want the best-quality sperm, essentially, while males want to make sure their own sperm wins out, because that is how they pass on their genes. This second concept, the "conflict" between reproductive goals, is known as sexually antagonistic selection. Cryptic female choice, on the other hand, refers to strategies employed by females to favor males whose genes are (involuntarily) deemed fit. Cryptic female choice consists of the ways in which females' genital tracts privilege some males' sperm over others, either by their very construction or by responses to purportedly stimulating features of male genitalia. Sperm dumping, when a female ejects sperm after coitus, is one such method of cryptic choice. And, last of all, a spermatheca is an organ in many female insects that holds sperm until the female is ready to fertilize her eggs. Whereas human sperm dies in the female genital tract in two or three days, female insects can hold sperm for a much longer period of time.

Okay, now that we've got the background out of the way, you're ready for this:

Crickets also flush. Except that they don't use water but their own semen. The pretty green Japanese tree cricket Truljalia hibinonis has a huge (relative to its body length) member that it can push all the way up the female's spermatheca. When it then ejaculates, its sticky mass of sperm will force backward any sperm that is already in there, pushing it out of the female's vagina and onto the penis shaft of the male—who then bends over and proceeds to nibble away at the freshly removed rival sperm as a postcoital snack! And in another cricket-like insect, the European katydid Metaplastes ornatus, sperm flushing is a part of sex in which male and female actually collaborate. The male inserts his genitalia, moves them in and out for a bit, and then withdraws. Since his genitalia are jagged and crenellated, pulling them out of the female brings forth a large portion of the female reproductive system, momentarily exposed inside out. What then happens is rather remarkable: the female doubles over and licks out the inside of her own genitalia, eating up any sperm packages of previous mates that still remain in there. This ritual is repeated several times before the male finally deposits his own sperm.

This last example may seem a little puzzling: why would a female join in on a male's attempts to empty her sperm stores? Again, we have to remember that a female has a vested interest in whatever the male does. A male that is able to persuade her to give up her stored sperm reserves probably has qualities that she'd do worse than to pass on to her sons. So, in some species, the whole sperm-scooping business has evolved to become incorporated into the mating ritual and, in a way, has become part and parcel of cryptic female choice—merging imperceptibly with sperm dumping.

Now, I think that's absolutely fantastic, and the rest of the book is replete with similarly captivating examples of sexual selection's effects on the genitals and sexual practices of all sorts of animals. There is also a companion website that I just found while writing this post, with what I hope are many more such examples and which I plan to peruse just as soon as I'm through the last seventy-odd pages of this wonderful genitaliary.

What Do We Do with Witches?

I am reading Steven Pinker's The Better Angels of Our Nature, which will be an act of attrition occurring over a very long period of time as I attempt to digest it in tiny pieces between schoolwork—a veritable Sarlaccian undertaking. He has been attacked and defended for this book, and for his thesis that we are in a historical period of low violence in large part attributable to the Leviathan—the emergence of strong centralized states—but also to a change in the prevailing attitudes concerning the value of the lives of others. I've socked away some critiques from both camps and look forward to reading them when I'm finally finished, but for the moment, having read a number of articles summarizing his findings, I'm inclined to buy the general narrative; most societies seem to have undergone a civilizing process that has vastly decreased the rate of violence. Your risk of being executed or murdered today is a minuscule fraction of what it was at almost any prior point in history (think on grander scales than decades).

Anyway, I have no desire to debate the merits of and injustices perpetrated by the Leviathan, for there are both. My purpose today is much more modest than all that.

"Burn her anyway!"

"Burn her anyway!"

Pinker's book thus far has laid out the groundwork for his thesis and gone into some descriptions of medieval and Reformation atrocities, along with a brief review of what we can infer about violence in prestate hunter/gatherer tribes. When he gets to the Reformation, though, the descriptions become outright grisly. Pinker pulls no punches illustrating, or letting historical accounts illustrate, the elaborate ways in which humans liked to torture other humans. I will spare you the images here; suffice it to say that the fates of hundreds of thousands of people—drawn and quartered, put on the rack, burned alive, subjected to all manners of unthinkable anguish—were fueled by what can only be called widespread, systemic sadism. That's an editorial comment from me, obviously, but these spectacles were not rare events; they were what stood in for entertainment in an era that didn't have Johnny Carson, to say nothing of the animal torture that was a central feature of many of the period's popular games.

To be sure, the Church had much to do with a great deal of this carnage. (Don't worry. Pinker follows up descriptions of heresy trials and the Inquisitions with accounts of secular torture that became institutionalized as states attempted to deter crime... or just get rid of people.) But he quotes three people in this section, Martin Luther, John Calvin, and a French scholar named Sebastien Castellio, whose words are notable either for their staggering hatred of people harboring different religious convictions, in the first two cases, or for a modern-sounding skepticism of religious absolutes, in Castellio's case. I have cited the sources Pinker lists in his own bibliography.

Martin Luther on what's to be done with the Jews (1543):

First, . . . set fire to their synagogues or schools and . . . bury and cover with dirt whatever will not burn, so that no man will ever again see a stone or cinder of them.... Second, I advise that their houses also be razed and destroyed.... Third, I advise that all their prayer books and Talmudic writings, in which such idolatry, lies, cursing, and blasphemy are taught, be taken from them.... Fourth, I advise that their rabbis be forbidden to teach henceforth on pain of loss of life and limb.... Fifth, I advise that safe-conduct on the highways be abolished completely for the Jews.... Sixth, I advise that usury be prohibited to them, and that all cash and treasure of silver and gold be taken from them and put aside for safekeeping. Seventh, I recommend putting a flail, an ax, a hoe, a spade, a distaff, or a spindle into the hands of young, strong Jews and Jewesses and letting them earn their bread in the sweat of their brow, as was imposed on the children of Adam (Gen. 3[:19]). For it is not fitting that they should let us accursed Goyim toil in the sweat of our faces while they, the holy people, idle away their time behind the stove, feasting and farting, and on top of all, boasting blasphemously of their lordship over the Christians by means of our sweat. Let us emulate the common sense of other nations . . . [and] eject them forever from the country.

John Calvin on the false prophets among us (1555):

Some say that because the crime consists only of words there is no cause for such severe punishment. But we muzzle dogs; shall we leave men free to open their mouths and say what they please? . . . God makes it plain that the false prophet is to be stoned without mercy. We are to crush beneath our heels all natural affections when his honour is at stake. The father should not spare his child, nor the husband his wife, nor the friend that friend who is dearer to him than life.

Sebastien Castellio on John Calvin on the false prophets among us (1554):

Calvin says that he is certain, and [other sects] say that they are; Calvin says that they are wrong and wishes to judge them, and so do they. Who shall be judge? Who made Calvin the arbiter of all the sects, that he alone should kill? He has the Word of God and so have they. If the matter is certain, to whom is it so? To Calvin? But then why does he write so many books about manifest truth? . . . In view of the uncertainty we must define the heretic simply as one with whom we disagree. And if then we are going to kill heretics, the logical outcome will be a war of extermination, since each is sure of himself. Calvin would have to invade France and all other nations, wipe out cities, put all the inhabitants to the sword, sparing neither sex nor age, not even babies and the beasts.

Pinker notes that Calvin's convictions led him to advocate for the burning of Michael Servetus, a man with some reservations about the trinity, for heresy. Calvin got his wish in 1553. Castellio and many other Protestant thinkers criticized him emphatically.

Beautifying the Kindle App for Android

The Android robot is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.

If you're anything like me, you are very particular about your reading devices. One of the big improvements moving from the older generations of the Kindle to the Paperwhite was the ability to add new fonts to the device quite easily, enabling you to pick the perfect font for whatever book you happened to be reading. Prior to that, you had to jailbreak the device.

However, despite its fantastic handling of ebooks, the Paperwhite doesn't handle PDFs very well, and I need to read and annotate PDFs on the go more and more these days because that's the format in which I prefer to read journal articles. Not wanting to open up my laptop just to read a PDF, I bought a Nexus 7. (Unfortunately, most PDF markup apps are terrible, so I'm still in a bit of a pickle. Foxit crashes; Skitch allows only freehand highlighting; QuickOffice doesn't allow any PDF markup; and Adobe Reader, I hate to say, comes out on top, although it's far from ideal. Suggestions are welcome.)

But this left me with yet another predicament, as I would now be carrying around four devices regularly: an e-reader, a smartphone, a tablet, and a laptop. That seemed, well, superfluous, and as hard as it was to make the final call—because I really like the device—I have decided to essentially retire the Paperwhite and get used to reading books on my tablet. The transition was not as hard as I'd imagined. Reading on a tablet screen is in fact very nice. At this point, I can't say I miss e-ink all that much.

The Kindle app is a vexing beast, though. It's fairly nice at first sniff, but it doesn't allow the reader to share quotes to social networks the way the Paperwhite does. Perhaps more annoying yet is the inability to change or add fonts; the app uses the system default serif and sans-serif. That's it. On my Nexus these defaults are DroidSerif and DroidSans, respectively. Both are fine, but when I read books, I like fonts that are used as typefaces in print, despite having almost entirely overcome my nostalgia for the printed page, with a few exceptions. (I read A Song of Ice and Fire in hardcopy, for instance.)

So I found an article from Android Authority and, using the method outlined in the section titled "Manual method using file manager app," located near the end of the piece, I changed the default system serif. (Note: Swapping in these new font files will affect any app or browser that relies on them. The font I've selected is now used selectively in Chrome and the Goodreads app, among others, and this is just fine by me. In fact, it's made reading articles in Chrome slightly more palatable.) Unrooted options are available to you, but for reference, my Nexus is rooted, running CyanogenMod 10.2.

For best results in the Kindle app, you'll want to find TrueType fonts (TTF) with the following four variants available: Regular, Italic, Bold, BoldItalic. Those should cover any formatting you're likely to find in a Kindle ebook. Personally, I recommend Latin Modern Roman, available at Font Squirrel for free. This font is actually provided in OTF format, but you can Google "OTF to TTF converter" and find a free conversion option you're comfortable with. Then you'll be able to collect the TTF files you need in one place and transfer them to your device however you'd like. (I just uploaded them to Google Drive.)
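If you end up swapping fonts more than once, a short script can take care of the renaming for you. Below is a minimal Python sketch based on my own setup; the lmroman10-*.ttf source names and the DroidSerif-*.ttf targets are assumptions about what your converter spits out and what your particular ROM expects, so peek inside /system/fonts on your own device and keep backups of the originals before overwriting anything.

import shutil
from pathlib import Path

SOURCE_DIR = Path("lmroman-ttf")       # wherever your converted TTFs ended up
STAGING_DIR = Path("droidserif-swap")  # files ready to transfer to /system/fonts

# Map each Latin Modern Roman variant to the system filename it will replace.
# Both sets of names are assumptions; adjust them to match your files and ROM.
VARIANT_MAP = {
    "lmroman10-regular.ttf": "DroidSerif-Regular.ttf",
    "lmroman10-italic.ttf": "DroidSerif-Italic.ttf",
    "lmroman10-bold.ttf": "DroidSerif-Bold.ttf",
    "lmroman10-bolditalic.ttf": "DroidSerif-BoldItalic.ttf",
}

def stage_fonts() -> None:
    STAGING_DIR.mkdir(exist_ok=True)
    for source_name, target_name in VARIANT_MAP.items():
        source = SOURCE_DIR / source_name
        if not source.exists():
            print(f"Missing {source}; convert or download it first.")
            continue
        shutil.copy(source, STAGING_DIR / target_name)
        print(f"Staged {source_name} -> {target_name}")

if __name__ == "__main__":
    stage_fonts()

Run it, move the contents of droidserif-swap onto the device however you normally transfer files, and the renaming chore is out of the way the next time a font whim strikes or a fresh flash wipes the change.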

I first fell in love with Latin Modern Roman when I started using LaTeX to create PDFs. It provides a positively book-like experience, which is to say it's very easy to read, even more so than DroidSerif, itself a decent font all things considered. Sorts Mill Goudy and Baskerville are two other good options, but not all of the variants I listed are available for free.

The only downsides to using the method I did are that you'll need 1) to be careful to back up the originals and 2) to rename and transfer TTFs any time you feel like changing the font. Personally, I didn't like the font changer I tried, preferring to just download and transfer the necessary files directly, as described.

Here's a screenshot of my Kindle app using Latin Modern Roman.

The Kindle app for Android, with Latin Modern Roman set as the default serif.

Pretty nice, eh? Now I can get back to allowing incessant Facebook notifications to interrupt my reading.

Update (02.12.2014): I only realized after flashing a new CM version that you'll need to repeat this process every time you update. As such, I now keep the Latin Modern Roman TTF files socked away on my SD card for quick transfer in the future.

NSFW: Swear Like a Victorian

I recently finished reading Melissa Mohr's delightful book Holy Sh*t: A Brief History of Swearing. Among other things, it's a fun, light illustration of some of the drivers behind what makes certain words taboo—granted, without discussing much in the way of formal linguistics—and should make apparent, in my view, how arbitrary many of those distinctions are.

To summarize her thesis, she divides curses into the holy and the shit: respectively, words and phrases that invoke religious ideas and bodily/sexual language. In the Middle Ages, religious oaths were thought to be much more offensive because they either forced God to bear witness to a lie, hence deceiving Him, or committed actual physical violence against Jesus, while words like fuck, shit, and cunt were simply the most direct words for their referents. They most likely would not have raised an eyebrow.

During the Renaissance, however, sensibilities began to change. Religious curses, still reviled by some modern believers, nevertheless began to lose some of their shock value, making way for shit-words to claim a measure of notoriety for themselves, eventually culminating in the Victorian Era's extreme animus toward foul language. According to Mohr, this transition continues today, with one notable exception: racial epithets have now basically supplanted both the holy and the shit as the most shocking language.

Anyway, I highly recommend the book, and because I am both a lover of language and an overgrown man-child who's maintained his juvenile sense of humor despite being buffeted by the mentally erosive forces of time, experience, and a joyless commitment to pessimism, I'd like to reproduce a smattering of glorious Victorian sexual euphemisms taken from a chapter in Holy Shit titled "Gamahuche, Godemiche, and the Huffle".

Enjoy.

You'll forgive me the absurd, gleeful vulgarity, I hope.
File: Wikimedia Commons

Mouse over the references to see the truly hilarious original wording of selected definitions. Also, some of these words predate the Victorian era, but the chapter seems to imply they would have been in regular, if vulgar, use during Queen Victoria's reign.

Huffle, Bagpipe: blowjob

Gamahuche: fellatio, cunnilingus

Larking: some sources claim fellatio, but Mohr seems to favor Gordon Williams's argument that it means, to put it in words she does not, titty fucking. She also references an engraving called "The Larking Cull" (1800) which shows a man doing just such a thing. You can view it at the British Museum's website!

To Tip the Velvet: either French-kissing or cunnilingus

Covent Garden Ague: venereal disease

Covent Garden Abbess: bawd

Covent Garden Nun: prostitute

Godemiche: dildo

Lobcock: a big, rubbery one

Rantallion: I must leave it to Grose here again: "one whose scrotum is so relaxed as to be longer than his penis, i.e. whose shot pouch is longer than the barrel of his piece"

Fartleberries: dingleberries

Burning Shame: Grose defines this as "a lighted candle stuck into the parts of a woman, certainly not intended by nature for a candlestick"

The following were all slang words for penis. As with many euphemisms, some of these are more innocuous than others and probably depend largely on context, which might render them either as the uncomfortable, ill-willed curses I referred to in an earlier footnote or simply as crude and goofy:

Pego
Arse-opener
Arse-wedge
Beard-splitter
Chinkstopper
Plugtail
Thomas
Man Thomas
Machine
Tool

Euphemisms for vagina:

the Monosyllable
Quim
Pussy
a woman's Commodity
Madge

Euphemisms for sexual intercourse:

Roger
Screw
Have Your Greens

Bubbies, Diddeys: breasts

Bushelbubby: a woman with large breasts

Julio and I Down by the Schoolyard: Stepping Down from the Grammatical High Horse

First and foremost, I'd like to thank Callum Hackett and Shannon Wittwer for their comments on this piece. You will find a couple of Callum's remarks, which he'd sent to me via email, reproduced in the footnotes. He was kind enough to proffer a few corrections and refinements.

Unless you're Dikembe Mutombo, avoid overwagging that finger.

A couple of weeks ago I wrote a Facebook status update that I want to address, if only because it allows for entry into a discussion about grammar, a topic that is widely misunderstood, and often most egregiously by those who fancy themselves grammarians. The post read as follows:

Sometimes! I want, to, read–back someone's? Statement to them, just. The way they've... Punctuated. It.

I want to expound a bit on why I wrote it, because it's really rather easy to be ruthless about language, to harbor an erroneous self-righteousness fueled by one's own presumed linguistic competence. As such, I feel genuinely bad that I may have contributed to an unfortunate tendency many English-major types exhibit: language elitism.

I’m no expert on language, English, or linguistics, first of all, so I am perhaps a bit out of my depth here. However, one of the most beneficial byproducts of having studied writing—an experience which, if you know me, is not one I often recount with dew-eyed fondness, for a number of reasons of varying validity—was an increased appreciation for the inherent ambiguity in language, an appreciation that rendered obvious just how unjustified I had been in acting the grammar scold for so many years prior. I was wrong, plain and simple.

(How did I mean that by the way? Are “wrong, plain and simple” part of a list? Or is “plain and simple” an authorial aside, signaled by a comma? If so, perhaps that last sentence would have been better written as “I was wrong—plain and simple,” or, “I was wrong. Plain and simple.” None of those are correct or incorrect, but you can see quite easily that a couple of options seem to work better than the rest. Why?)

To start, you have to realize that much of the grammar you learned in elementary and high school is not "correct." It's not necessarily wrong either, but what was taught to us as a set of hard-and-fast rules of language was anything but: linguistic "rules" sprout from customs, and customs are subject to change. (In fact, there is no stopping this change.) Language is not math; it is not a logical system. You understand the sentences I am writing simply because thousands of years of ever-changing linguistic tradition (read: common usage) have evolved into the system of word usage and syntax we use today, one system among many hundreds of such systems, and what can be considered grammatical is more properly imagined as a gradient rather than a set hemmed in by hard boundaries.

One way to look at "grammatical" as a concept is to say that something which is grammatical can be understood with relative ease, without violating common practice in a way that produces friction for the listener. Here is an example:

Zoot and me run the Castle Anthrax.

You probably noticed the problem immediately: me is used in the subject where I would normally be indicated. If you remove “Zoot and,” you’re left with “Me run the Castle Anthrax.” Most likely, if you’re a native English speaker, this formation doesn’t sound very good, and might even conjure an image of a non-native speaker making a rather simple mistake. Therefore, it is ungrammatical. But it’s important to realize that using me in this position is not objectively wrong in any way. Rather, the preference for I in the subject is one based on a custom in which the subjective and objective forms are differentiated, in this case, to signal the nature of the pronoun. Another common couplet of confusion is who/whom, where who is used in the subjective (nominative) case and whom in the objective (dative or accusative). Because this differentiation doesn't do much real work anymore, it is beginning to be sloughed off, and you'll hear things like, "Sir Robin will bravely run away from whoever challenges him," all the time.

Old English (OE), in fact, exhibited a rather complex system of conjugations and declensions—think of the latter as conjugations (inflections) for nouns, pronouns, and adjectives—many of which might seem alien to us. Yet it is out of this deep tradition that some of our current linguistic customs sprouted, not to mention about 30% of our contemporary words. (The other 70% are drawn from French and Latin, for the most part.) In OE, the form of a pronoun would vary based on grammatical gender, a custom languages like French, Spanish, and German retain; and the form of a noun or adjective would change based on its role in a sentence, rather than its position, in addition to grammatical gender. We still retain certain aspects of grammatical case from these traditions (e.g., children’s is the genitive inflection of children; he/him/his or she/her/hers are all different inflections signaling different information about the pronoun).

Take, also, our basic sentence structure in its simplest form: subject, verb, object. SVO. “King Arthur addressed the old woman.” But, in OE, if you were to tack an introductory clause onto the beginning of that sentence, it may have looked something like, “When he the crone approached, King Arthur addressed the old woman.” SOV. Subject, object, verb. But why? I don’t believe anyone really knows, but that was permissible. In many clauses, dependent or independent, word order could take on either SOV or SVO. Over time, as English changed, SVO and SVC became more prevalent, and Modern English is less flexible with regard to word order. Now, we likely equate SOV structure with archaic language, probably using it occasionally when attempting to affect the speech of a bygone era.

I have digressed somewhat from my main point, but I wanted to demonstrate, in a very brief and simple manner, that the strictures and structures of our language are, in many non-trivial ways, arbitrary. What has produced the emergent phenomenon of intelligible communication via language is practice, not logic or mathematics. And when a person is not careful, they may venture into some unsavory waters, for rooted in much grammar scoldery are a number of potentially injurious biases, among them class-based and racial prejudices against certain writing styles or "errors" and, in oral communication, dialects, accents, or speech patterns. The faulty assumption, of course, is that someone who doesn’t speak or write “properly” must be stupid, but one need think only for a moment to realize that linguistic practice is overwhelmingly predicated upon the linguistic conventions one grew up with during childhood and access (or lack thereof) to institutions that prescribed or taught the standard language. (In the United States the standard language is something close to a Middle American accent—clearly enunciated and adherent to the syntactic traditions in English currently viewed as “proper,” though you could certainly speak in standard grammar using a regional accent. The distinctions between writing and speech, or accent and dialect, are important, but for the purposes of this post, I'm more concerned with linguistic practices viewed as unsanctioned by presumed authorities and majority groups.)

So why does punctuation work at all?

"If it's not done by sunrise, I'll cut your balls off."

"If it's not done by sunrise, I'll cut your balls off."

Consider the following two sentences and how they are different:

The centurion saw the young man who was painting the side of Pontius Pilate's palace and thought it best to teach the lad a lesson in Roman grammar.

The centurion saw the young man, who was painting the side of Pontius Pilate's palace, and thought it best to teach the lad a lesson in Roman grammar.

Obviously, these are very similar sentences, but they are not the same. Both contain relative clauses, yes. The first sentence, however, contains a restrictive relative clause (RRC): "who was painting the side of Pontius Pilate's palace." The second contains a nonrestrictive relative clause (NRC) composed of the exact same words, with the notable difference that they are offset by commas.

In the first sentence, the clause is restrictive because it provides indispensable information about the young man; it actually acts to identify him explicitly, as if to suggest that out of all the young men in the plaza, the young man we are concerned with is the one painting the side of Pilate's palace. You cannot remove this clause without altering the meaning of the sentence.

In the second sentence, the clause is nonrestrictive because it provides nonessential descriptive information. Implicitly, we already know which young man we're dealing with, but we want to point out that he was painting the side of Pilate's palace, because doing so adds detail and vividness to the scene, as well as context within which we can understand the impetus for the centurion's interference. However, if you removed the clause from this sentence, its basic function would not be changed.

Now, assume you meant to write something more akin to the second example but left out the commas so that your sentence looked like the first. There is, all of a sudden, a disconnect between the message you were attempting to impart as an author and what the reader is likely to interpret as your message. In this case, the risk of being misunderstood is rather trivial: your scene is still mostly intact, and will permit the slight shift in meaning while remaining intelligible.

With this in mind, look back at my original post and consider again why it is so jarring, prompting, as one commenter said, an unpleasant soundspace inside their head. One reason is that, contrary to what we were taught in high school, commas and other punctuation marks do signify pauses. Punctuation is, after all, rhetorical, or should contain a considered rhetorical element if you wish to write well. The standard line we were taught—that punctuation exists primarily to separate sentence constituents—is not the gilded-gold rule it was presented as. It is, however, useful as a general guide, and serves an important purpose, because, yes, knowing how sentence constituents fit together will help you decide how to construct phrases and clauses, how to punctuate them to stress, de-stress, or identify certain elements, and how to decipher what other writers you trust to be thoughtful about their punctuation might mean to say when they decide to punctuate a sentence in a certain way.

Bad punctuation leads to ambiguity, and our customs of punctuating—the common, widespread usage of those customs, that is—allow us to partially focus vague and oblique thoughts into smaller, less variable gradients of meaning. The punctuation in my original post not only violates the customs we are familiar with, it possesses no internal logic of its own. Had the word order not been chosen carefully, to remain clear despite the strange punctuation, the very meaning of the sentence would be even more difficult for the reader to decipher.

This post was not meant to provide a comprehensive discussion of punctuation, word order, or grammar. Not at all. The examples I've used are extremely simple ones, and you will find more complex variations on these basic structures far more numerous than the illustrations I've offered here.

I am not above the wave of self-satisfaction that swells as a result of correcting someone's "errors" (which, really, is more akin to identifying inefficiencies or ambiguities, and sometimes, just deviations from the norm), nor am I a model of ego restraint. But I think it is very important to understand what is constructive when it comes to talking about how to use language and what is not, and you need neither to be a linguist nor to understand much more than the basic difference between descriptivism and prescriptivism to appreciate why being a grammar scold is unbecoming for an intelligent, literate person.

Controlling for exposure, do stupid people use poorer grammar than smart people? Maybe, but many smart people use poor grammar too. Practically, it's impossible to generalize. The fact of the matter is that if you grew up and received an education that placed a premium on teaching grammar, you will most likely have a better grasp of the standard language than someone who did not receive such an education or who grew up in a region exhibiting different linguistic practices. You need look no further than American Black English or Appalachian accents for examples of language traditions that exhibit oft-ridiculed customs and rules of their own, and, hopefully, you can see that grammar scolding becomes quite illogical in light of this evidence.

For an excellent discussion of why, I highly recommend the episode of Slate's podcast Lexicon Valley that features John McWhorter, author of Our Magnificent Bastard Tongue, speaking about why people 1) like to jump all over grammatical mistakes, and 2) should not be considered savory or helpful for doing so.

The temptation to be a dick about language is strong, and yes, many people probably ought to learn to write more clearly, to acquire a command and an understanding of how language works within the larger system (in the standard language, as well as the push-and-pull between and across dialects). Ambiguity can certainly be detrimental to civil discussion. So too, though, can the militant scold.

The Worst Thing You Could Possibly Imagine

You walk into a party. Seven o'clock in the evening. People mill about a cavernous, dimly lit atrium, throngs of them, neatly picking hors d'oeuvres off trays carried by arthouse hipsters, the servers, who are clad in black and topped with the sort of odd identity-defining haircuts customarily found on people with no identities. The rumble of the crowd echoes back and forth between hard walls, off of a gray concrete floor. Booze flows, mostly wine. You are on the job; and you clutch desperately to a weeping bottle of Stella Artois.

Nothing unusual at the moment. The din can be ignored, assimilated into the standard cacophony of background noise and neurotic insanity; the crowd itself simply an environmental variable—pitching and swelling, yes, but predominantly benign.

So you swallow the rest of your beer quickly and duck upstairs with a co-worker to stroll around through the exhibits. Maybe it's the light buzz, or maybe your eyesight has gotten worse than you thought, but you have to squint and lean in to view the placards. There, you get a much needed refresher: Mondrian, Picasso, Schwitters, Dali, Breton, Van Gogh, Wyeth, Chagall, Gauguin, Modigliani, Duchamp, Léger, Ernst. You remember their art, of course, some more than others, and you have always loved much of it, but it's nice to revisit old work and receive a few surprises along the way. Thinking about them takes you back—to college, to the sensation of purpose, to pleasant dreams. The feeling is bittersweet, dishonest in the way that nostalgia tends to be, but you cling to it. You allow the illusion that you left a piece of yourself back in another time and place, none of which exist any longer, to wash over you. After all, what could you have left behind, and where could you have left it? Suddenly, you begin to feel the weight of missed opportunities, a youth squandered on worry. And you hide the subtle—what you hope are subtle—paroxysms of woe behind a distant smile and a shifting art-gallery gaze.

This is all fine. You can handle this. This is your life.

But then the elevator won't take you to the third, fourth, or sixth floors, and you realize the fifth-floor gallery you just ingested is the only one that has been rented for the evening. You will be forced to return to one of the first two levels, though forced might not be the right word. See, you've been wanting to head back into the fray. For whatever reason, you feel drawn to some vague purpose, the nature and aim of which escape you. You feel compelled.

When you reach the second floor again, you race to the drinks table and get another beer from one of the hipsters. A Heineken. You hate Heineken, but they are out of Stella, and Amstel Light seems like an ill-defined yet potent sacrilege, even to a lightweight like you. So you take a greedy swig and turn around to survey the room.

The place is more crowded than before. The din is louder, and the food lines are backed up. As you scan the seething human mass before you, a debilitating sense of dread begins to creep into your chest. There is something different about the nine o'clock crowd, and when you put your finger on it, your heart sinks; you feel your face flush—a dry feeling, like dehydration. Your knees give way. The desire to flee is overpowering. Before you, what was once a mostly harmless gathering of older professionals, is now a group two thousand strong, and trending younger.

That last one is the fatal detail, because there they are. You can see them.

Women.

And goddammit . . . most of them are beautiful.

Mammoth Reads: Scratching the Surface of Free Will (Determino-compatibo-dualism?)

I was going to write a whole post about my take on free will... but why? I will say nothing that hasn't been said before in much more worthy fashion by people with philosophical and scientific qualifications that can't be garnered simply by idly perusing RSS feeds on a Saturday afternoon in one's underwear. So I'm just going to give you a list of articles and essays I've read over the past few months that, I think, adequately parse different aspects of the free will debate. (I first heard about Benjamin Libet's experiments—just Google him, you'll find them—a few years ago, and that seed languished more or less undeveloped until recently.)

Two things before the list:

  1. Based on the current extent of my reading, I fall into the determinist camp these days, and I don't believe that, given the same conditions, we can choose other than we do.  Even if, as some have posited, random events complicate this statement, I don't see where freedom or control exist in indeterminacy.  Either way, we're beholden to events of which we have little or no knowledge.
  2. I am now retroactively mildly embarrassed by some of my previous comments on inhibition, originally written in response to a post by Robin Hanson at Overcoming Bias regarding culpability in sleep rape. My suspicion of the punitive instinct stands, as does my reluctance to equate waking and sleep states, but my current thinking on free will demands that I revise my insistence on an agent's having a choice to prevent or allow an action to take place once that person becomes aware of his/her behavior. Though so-called "free won't" isn't an entirely unhelpful concept, I'm backpedaling now on my insinuation that inhibition is a controlled reaction (duh). In my post I cited an article entitled "Do Conscious Thoughts Cause Behavior?", of which I only ever read about half, maybe a bit less, before deciding I understood Baumeister's point (read: tired of it). Conscious thoughts may have a role in decision-making, but they are determined as well—even if consciousness itself is astoundingly complex—and the experience of awareness is merely a byproduct of brain functions that we for the most part do not perceive. However, the review article does point to and attempt to counter Thomas Huxley's steam whistle hypothesis and, in so doing, perhaps unwittingly provides what I think is actually a pretty splendid shorthand for how consciousness probably works.  I provide, with caveats mostly irrelevant to this already overly long list item,  Baumeister's explanation of Huxley's analogy: "It [the steam whistle hypothesis] says conscious thought resembles the steam whistle on a train locomotive: it derives from and reveals something about activity inside the engine, but it has no causal impact on moving the train." There is a larger discussion to be had about the proper role of punishment in light of an increasingly nuanced understanding of consciousness, but I thought it important (for me, at least) to outline where I feel I erred in my original criticism of Hanson.

OK. The list that follows is presented in whatever order strikes me as appropriate during the following minutes; suffice it to say that the first two are my favorites.

James B. Miles. 'Irresponsible and a Disservice': The integrity of social psychology turns on the free will dilemma. British Journal of Social Psychology.
Miles criticizes the view that knowledge of a lack of free will would send society into an amoral/immoral tailspin and that people would, by definition, become selfish cretins. Largely a criticism of social psychology and its relationship with free will (as Miles puts it, philosophical libertarianism, which is distinct from the political philosophy of the same name), this paper also provides a nice summary of determinism, compatibilism, and libertarianism, essential concepts to understand in order to appreciate the debate.

Sam Harris. Free Will. Simon and Schuster.
In what is, besides Miles's paper, my favorite piece of the bunch, Harris publishes what he claims will be his "final word" on his opinions regarding free will. This is a highly digestible essay that tackles a number of issues, from culpability to legal implications and personal understanding. (If free will were truly an illusion, we'd have to accept, as Harris says, that psychopaths were simply unlucky to have been born as they were. Thus, hatred could not be warranted, nor could cruel punishment. Presumably we would do what is necessary to protect society from murderers, rapists, etc., and forgo the revenge instinct, which is a programmed survival reaction but one that is not coherent once we take agency out of the equation.)

Even if you never read this piece, do one thing that he suggests therein: sit down one day and simply pay attention to how your thought process operates. Thoughts just pop in there, to cop a Ray Stantz line from Ghostbusters. How could they do anything but?

(UPDATE: 4/14/2012)
Joshua Greene, Jonathan Cohen. For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society B: Biological Sciences.
Unfortunately, I read Greene and Cohen's article after writing this post, and it may be, as far as the practical implications of a modified conception of free will are concerned, the most interesting one on the list. Drawing a distinction between consequentialist and retributivist views of punishment, the authors argue for the former as a progressive, scientifically justified approach rather than the revenge-driven legal system that seeks to wring remorse out of the prisoner, or right some cosmic moral scale. The law's default disposition, compatibilism, will be challenged as advances in neuroscience make the causal chains that lead to decision-making more apparent; with this advancing knowledge, a system aimed at punishment, rather than prevention, will seem untenable.

Their basic position may be summarized thus:

Existing legal principles make virtually no assumptions about the neural bases of criminal behaviour, and as a result they can comfortably assimilate new neuroscience without much in the way of conceptual upheaval: new details, new sources of evidence, but nothing for which the law is fundamentally unprepared. We maintain, however, that our operative legal principles exist because they more or less adequately capture an intuitive sense of justice. In our view, neuroscience will challenge and ultimately reshape our intuitive sense(s) of justice. New neuroscience will affect the way we view the law, not by furnishing us with new ideas or arguments about the nature of human action, but by breathing new life into old ones. Cognitive neuroscience, by identifying the specific mechanisms responsible for behaviour, will vividly illustrate what until now could only be appreciated through esoteric theorizing: that there is something fishy about our ordinary conceptions of human action and responsibility, and that, as a result, the legal principles we have devised to reflect these conceptions may be flawed.

Greene and Cohen's distinction between what the law wants (retribution) and what people will want (compassion, or consequentialism) largely drives their assumption that certain types of large-scale change will be unavoidable, and warranted. However, they remain of the opinion that the law has been molded in such a way that it may incorporate these new findings, as well as a shift in philosophy, without requiring an entirely new framework. Rather, the mechanisms of the system may simply be applied in a manner accordant with an understanding of free will and culpability informed by the latest science. Libertarianism is out, and soon, compatibilism will be, too, they say.

For fans of thought experiments, the Boys from Brazil problem is an absolute must. Are we really so different from Mr. Puppet?

Massimo Pigliucci. The Incoherence of Free Will. Psychology Today.
I haven't included any writing by Daniel Dennett, a prominent compatibilist, because I haven't read any yet. But both Pigliucci and Harris mention his concept of a "free will worth having," an explanation that Pigliucci summarizes as follows:

What all of this seems to suggest is that the undeniable feeling of "free will" that we have is actually the result of our conscious awareness of the fact that we make decisions, and that we could have — given other internal (i.e., genetic, developmental) and external (i.e., environmental, cultural) circumstances — decided otherwise in any given instance. That’s what Dennett called a type of free will that is “worth having,” and I consider it good enough for this particular non-dualist, non-mystically inclined human being.

Whereas Pigliucci likes this explanation, Harris, in Free Will, accuses Dennett of "changing the subject." Regardless of this disagreement (in which, for the record, I side with Harris at the moment), Pigliucci does a nice job of tearing down dualist notions of free will and summarizing reservations many people tend to have when they are forced to consider that they may not have control over their actions in ways they previously may have assumed.

Kerri Smith. Neuroscience vs philosophy: Taking aim at free will. Nature News.
This is a traditional news piece over at Nature News that describes the different treatments free will receives from scientists and philosophers. Most neuroscientists, the article states, are content to attack dualist notions of free will without considering the more robust philosophical debate that surrounds the issue. Philosophers, however, must explain how the freedom to choose otherwise might exist in a causal physical system such as the one in which we, and our brains, exist. One of the larger issues has been a lack of consensus regarding a working definition of free will from which further research and rationalization can proceed. For any of you interested in Libet's research, as well as recent studies that confirm and build upon knowledge of unconscious decision-making, the article cites and summarizes a few references. I've only read summaries myself, and I'm not sure the full text is widely accessible.

Björn Brembs. Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society B: Biological Sciences.
Of all the pieces included in this list, this is the one I read longest ago. I have to go back through my highlighted PDF to refresh my memory, but Brembs, in searching for a different scientific understanding of free will, rejects both the dualist and determinist approaches. Instead, he conjures quantum indeterminacy:

That said, it is an all too common misconception that the failure of dualism as a valid hypothesis automatically entails that brains are deterministic and all our actions are direct consequences of gene–environment interactions, maybe with some random stochasticity added in here and there for good measure [2]. It is tempting to speculate that most, if not all, scholars declaring free will an illusion share this concept. However, our world is not deterministic, not even the macroscopic world. Quantum mechanics provides objective chance as a trace element of reality. In a very clear description of how keenly aware physicists are that Heisenberg's uncertainty principle indeed describes a property of our world rather than a failure of scientists to accurately measure it, Stephen Hawking has postulated that black holes emit the radiation named after him [11], a phenomenon based on the well-known formation of virtual particle–antiparticle pairs in the vacuum of space. The process thought to underlie Hawking radiation has recently been observed in a laboratory analogue of the event horizon [12,13]. On the ‘mesoscopic’ scale, fullerenes have famously shown interference in a double-slit experiment [14]. Quantum effects have repeatedly been observed directly on the nano-scale [15,16], and superconductivity (e.g. [17]) or Bose–Einstein condensates (e.g. [18]) are well-known phenomena. Quantum events such as radioactive decay or uncertainty in the photoelectric effect are used to create random-number generators for cryptography that cannot be broken into. Thus, quantum effects are being observed also on the macroscopic scale. Therefore, determinism can be rejected with at least as much empirical evidence and intellectual rigor as the metaphysical account of free will. ‘The universe has an irreducibly random character. If it is a clockwork, its cogs, springs, and levers are not Swiss-made; they do not follow a predetermined path. Physical indeterminism rules in the world of the very small as well as in the world of the very large’ [9].

Brembs studies invertebrates, and in this paper he is most concerned with concocting working models of behavioral variability. Quantum mechanics is often used to obscure dishonest claims, to shield them behind the shroud of mystery the indeterminate world underlying our existence provides, but Brembs doesn't seem to be invoking it in such a way. Citing some interesting studies with fruit flies and leeches, he illustrates cases of seemingly spontaneous decision-making and behavioral variability in invertebrates exposed to controlled, constant stimuli. It could be said, of course, that each action affects subsequent actions deterministically regardless of the constancy of the stimulus, but I'm already beyond my ken with this entire post.

I highly recommend Brembs's paper.

Eddy Nahmias. Is Neuroscience the Death of Free Will? The New York Times.
This article doesn't impress me much, as Nahmias seems more concerned with rhetorical gymnastics geared toward discussing an alternative definition of free will than with dealing honestly with the gruesome holes neuroscience is poking in the formulation most people probably have in mind when the term is used: that we are autonomous agents able to choose among two or more outcomes and that, given the chance, we could have chosen otherwise. (In other words, most people would probably concede that our actions are in some way affected by the physical characteristics of our brains, but unprompted, my guess is that most of those people would also likely assert that they are able to control the trajectory of resultant actions once conscious awareness sets in.)

Nahmias instead posits the following definition, which sounds a bit like Dennett's:

These discoveries about how our brains work can also explain how free will works rather than explaining it away. But first, we need to define free will in a more reasonable and useful way. Many philosophers, including me, understand free will as a set of capacities for imagining future courses of action, deliberating about one’s reasons for choosing them, planning one’s actions in light of this deliberation and controlling actions in the face of competing desires. We act of our own free will to the extent that we have the opportunity to exercise these capacities, without unreasonable external or internal pressure. We are responsible for our actions roughly to the extent that we possess these capacities and we have opportunities to exercise them.

But what Nahmias seems to have described is consciousness, not free will. For him, free will represents the opportunity to perceive and experience the various facets of consciousness, including the prepackaged illusions with which it shipped, free from coercion.  To me, it sounds like we're now talking about something else altogether. But, read it for reference, if you'd like.

SHOWDOWN: Pigliucci v. Coyne
The following three articles represent a public discussion (argument) between Massimo Pigliucci and Jerry Coyne, in which Pigliucci takes a compatibilist tack while Coyne defends from the determinist's corner. (Actually, Coyne started it with an op-ed for USA Today.) Coyne tends to rub some people the wrong way, even those who might otherwise agree with him, as he's often quick on the draw and a bit fervent. Still, I think any notion of free will must describe how that will circumvents or changes the outcome of physical events, for it seems any will beholden to physical laws as we understand them would exist within the broth of causal relationships that surrounds it, however obscured by complexity those relationships might be. At any rate, I'll preface the list of articles with the observations of a commenter on Jerry Coyne's second piece, Ron Murphy:

Given that there is no evidence yet of anything that might be considered non-physical then the null hypothesis is that everything does follow physical laws. We are looking for exceptions. On this basis the dualist notion of free-will is the alternative hypothesis, and that it does not exist is the null hypothesis.

That free-willies think that theirs should be the null hypothesis is based only on their ‘feeling’, the historical and personal notion that we have free-will. But ‘feeling’ that something is the case doesn’t cut it as sufficient evidence, or reason, to think that it should form the basis of the null hypothesis.

Whether it is traditional dualist free-will (or soul), or even this other form of free-will that is supposed to be non-dualist and yet has no concrete explanation, they are both alternative hypotheses awaiting even a good definition let alone the possibility of falsification.

The illusory nature of free-will is simply the common sense view derived from all we know about the universe so far. That some of us don’t like it, and even that all of us appear to act as if we have free-will, is no support for that alternative hypothesis, whichever way it is framed.

So now that I've attempted to prime you in favor of Coyne—oh, how dastardly of me—the respective parries and thrusts:

 

My Dubious and Tenuous Conclusions (*grain of salt not included)
As I said, I'm siding with the determinists for the time being, with no less wonder than I've ever had in this weak heart, with no less awe in this shackled mind. I will caution those who may be tempted to view a deterministic world as one in which actions mean nothing to differentiate determinism from fatalism (Harris makes this important distinction in Free Will). Were we all to stop acting, the world would be an unrecognizable place. A lack of free will, in the magical terms it has most often been described, is no cause for despair. After all, you have no choice but to live the illusion. And should free will one day be unequivocally proven to be such an illusion, you would not find yourself in a different plight than that of all prior generations. You would simply be better acquainted with your nature.

Personally, I find the concept "freeing" in a certain sense: my actions, good, bad, or indifferent, occur due to factors beyond my control, and in recognizing this I can "choose" to bend my "will" toward changing those actions and characteristics I don't like. I can show more compassion to others and deal better with their faults and shortcomings, knowing that I too am in the same boat. The instinct to tune out and give up does not, to me, hold much appeal. I feel, for whatever reason, that a path to self-improvement is more evident, which is not any sort of proof; it's just the manner in which I've come to view the matter. Some will view it otherwise.  But as you've no doubt noticed, our language is currently unequipped to deal with the implications of such a shift in thinking: I was unable to write this paragraph without the insinuation of agency and choice—a fact that may, admittedly, say more about my prowess as a writer than anything else. I have slipped, as I often do, into incoherence.

There is simply no escaping the storm and, as always, much more reading to be done.

Are Independents Just Closet Partisans?

NPR came out with a gem of an article last week that utterly confuses, I think, both the role and ethos of the independent voter (read: I took it personally). The strawman presented is someone like this: a voter who would like to believe they are an independent-minded person doing their civic duty by refraining from endorsing one political party or another, but who is in fact secretly so totally besties with either the Democrats or Republicans (more likely the Democrats, according to the article).

Despite this overarching misanalysis, the article does somewhat aptly address the myth that independents are swing voters at the core—the misconception that a candidate can gain crucial ground with self-described independents, presumably middle-of-the-road folks, by tailoring a message to appear less strident than the party's base would like. I've resorted to this fallacious thinking a bit myself, often reasoning that, in a presidential election, a candidate attempting to win moderates can afford to soften the rhetoric, as the base will never abandon their horse; for instance, no flag-bleeding conservative Christian will vote Democrat in 2012, even if, assuming he wins the nomination, Mitt Romney decides to tattoo a barcode on the back of his neck and shave his head. There may be some truth to this, but it would probably be a mistake to think that moderate independents represent a large enough voting bloc to make a significant, or at least bankable, difference. By the latter reasoning, a candidate might be better off concentrating more intently on the base rather than on the margins. Speculation abounds, including my own. I'm just sayin'.

I digress.

What really chapped my ass was the article's attempt to call independents who vote for major parties "closet partisans," as if a party preference somehow strips the independent of their claim to the title suggested by their affiliation. Are some independents closet partisans? Undoubtedly. But to say that someone is a loyal Democrat or Republican (essentially what the article claims) and not an independent because they hew to the lesser-of-two-evils mentality during general elections is flat wrong. It fails to account for the difference between political philosophy and voting philosophy, which is to say one might adhere to a political philosophy not engendered by the major parties but, when it comes time to vote, choose the candidate one considers less destructive. (Campaigns appear to have hordes of believers, and to be frank, I think this was the major problem with Obama's 2008 campaign. At the end of the day, I suspect most people view their meager choice as the opportunity to pick the person who will fuck up less.)

While I might not agree with the voting mentality just described, I don't think it's fair to call these folks partisans. Third-party candidates often leave as much to be desired as do their major-party counterparts. I'd probably consider voting libertarian, but I won't vote for Ron Paul even though I respect his candor and gumption, and I very much would like to see some of his agenda receive bigger play on a national stage. In 2008 I voted Green Party, but given the chance, I doubt I'd want to cast another vote for Nader. (Thankfully, he's not running again.) Neither of those parties, however, reflects my political views exactly; in many ways, they're opposed to one another. But each addresses issues mainstream politicians are unwilling to tackle, and knowing full well that any vote is a vote of compromise, I could still be persuaded should the right candidate come along bearing a third-party standard. (Really, I'm a cheerleader for alternative parties and vote for them often, but in the end we're voting for people, remember, and voting for someone you really don't like ravages the psyche. Kerry, for one, will be on my soul until the heat death of the universe.) Yet my choices in November 2012 may come down to Romney, Obama, or No Confidence, and while my whimsical little heart might like to think the last of those options would be a resounding fuck you to the big, bad establishment, I may find myself in a different mood when I crawl behind the curtain to do my mostly symbolic civic duty. With those unenviable choices in front of me, who would I pick? Well, personally, I recommend castration for anyone who winds up in Mitt Romney's corner, and I'd like to keep my anatomy intact for as long as possible. Plus, I live in Illinois, so my vote will count even less this year than it normally does. Maybe I'll just write in Indiana Jones, or Hannibal Lecter.

Would that make me a Democrat? Hell no. Still, I most likely won't be able to bring myself to tick a D on the presidential ballot.

The more rampant ideology I run into, the more I suspect its holder has willingly entered an echo chamber. The is-ought problem remains the implacable philosophical hurdle. Once we've proposed an ought, we've made a choice that, no matter how much we may think so, does not by default follow from the is. We begin to make moral distinctions. So politics is the belief system by which we argue with one another from our respective patches of mucky, infirm ground. By avoiding the neat designations and party machinery, I think many independents put themselves in a position to avoid the immediate knee-jerk reactions and groupthink that true partisans display. But that doesn't preclude them from voting Democrat or Republican, or favoring one or the other, nor should it invite the insinuation of partisanship.

Forgive the lame ice cream analogy.  Someone always makes one and, today, it's my turn: if you give me a choice between chocolate and vanilla, I'll pick vanilla most of the time.  What really gets my blood pumping, though, is cookie dough... or cookies 'n' cream... or mint chocolate chip.  Sometimes I just can't decide.

 

(Before I finished this post, I came across one by Will Wilkinson on Big Think entitled "Politics vs. Empathy," in which he outlines a study that found subjects did not project their visceral-state feelings onto those they perceived as dissimilar, specifically those of a different political affiliation. Prior research has shown much the same thing, and it's worth taking a look at Wilkinson's brief description, where he provides a link to the full text and an explanation of what sorts of projections came into play. To me, this is a neat little anecdote that reinforces my suspicion of political group affinity.)

 

Christ v. Christ

St. Mary Magdalene, Exford*

While writing twelve pages and almost 5,000 words in an attempt to study for an exam (I say this not to complain but to highlight my gross inefficiency) on the Old and Middle English periods, I came across a passage in our textbook (A History of the English Language, Fifth Ed., by Albert C. Baugh and Thomas Cable) that I remember chuckling darkly at the first time I read it a few weeks ago. The passage refers to the replacement of the native English clergy with continental French transplants in the years following the Norman Conquest:

Ecclesiastics, it would seem, sometimes entered upon their office accompanied by an armed band of supporters. Turold, who became abbot of Peterborough in 1070, is described as coming at the head of 160 armed Frenchmen to take possession of his monastery; and Thurston, appointed abbot of Glastonbury in 1082, imposed certain innovations in the service upon the monks of the abbey by calling for his Norman archers, who entered the chapter house fully armed and killed three of the monks, besides wounding eighteen.

It's true that Christianity has been responsible for much higher learning throughout history, certainly in pre-scientific times, but, for many reasons, the vision of clergymen seizing control of monasteries with large gaggles of armed men in tow seems appropriate. What's more, these were acts of religious cannibalism: the zealous abbots described in this here tale were marching on fellow Christians, organizations within their own establishment. Thurston, the dog, even had a few monks killed for good measure.

So, yes. Just a bit of atheist/anti-theist candy for the road—for those of us atheists, anyway, who aren't particular fans of murder. I don't want to speak for everybody, after all.

 

Image: St Mary Magdalene, Exford (Janine Forbes) / CC BY-SA 2.0

 

Taking the Driver out of Driving

I was tempted to put the title of this thing in the imperative, but I didn't want to be pushy. More than that, doing so would serve mostly as a cheap provocation to arouse the ire of a friend of mine who contends that Americans will indefinitely refuse to cede control of their vehicles to computers, sensors, and robots. Something in the American Spirit, he says, will successfully stymie the widespread adoption of technology that would drastically reduce loss of both life and wealth. (I know car buffs will lament loudly, and you must know, for the sake of openness, that I'm disinclined to give that lament priority over more important considerations. Sorry, car buff friends.)

I'm not so sure. Besides supporting automated driving and all it might one day offer—really, continuing to allow humans to pilot automobiles would be insane in a society with a viable alternative—I'm inclined to think a driverless society is a foregone conclusion, whether by law or by choice, though I hope the latter prevails. Provided cheap, efficient, and reliable technology, I see no logical reason (though I can think of a few illogical ones) that American drivers would forcefully oppose a transition away from self-piloted vehicles. (My argument here, it should be said, assumes a technology that meets those conditions.) The upside is simply too great; just as many folks at first rejected the automobile, eventually the advantages it provided made previous technologies obsolete. The same, I think, will be true of automated cars.

Wired has an interesting piece regarding the myriad legal questions we will need to grapple with, and I have no doubt that there will be a number of growing pains as we struggle to adapt to the shift.  But safety and economic considerations will, after a time, decide the matter.  One day, the excuses will simply run out.
