Thursday, August 29, 2013

Drivin' That Train, High on Mary Jane

The states are in a tizzy these days trying to determine the proper amount of THC one can have in one's system before one is considered impaired.

Perhaps the most interesting part of the discussion is that no one is questioning the basic assumption: does THC impair one’s driving abilities? The levels allowed are being determined without any evidence that there is a causal relationship between unsafe driving and marijuana consumption.

Actuarial tables and state driving statistics indicate that marijuana smokers are safer than average drivers. This could mean one of two things: either people who tend to be safer drivers are more likely to smoke marijuana than those who tend to drive less safely, or smoking marijuana tends to make one a safer driver. The most relevant existing piece of data informing that question is that states that have adopted a medical marijuana program have seen an average reduction in traffic fatalities of 11%. The implication is that an increased percentage of marijuana smokers on the road creates safer conditions.

Why, then, are the states rushing to find acceptable THC levels above which one is considered impaired? And how do they determine that level?

The “why” is more complicated; the “how” is easier: they don’t. No causal relationship has been demonstrated between marijuana consumption and impaired driving. The assumption of impairment rests on a number of factors: people with limited experience of cannabis tend to equate a cannabis high with alcoholic inebriation, which most of them have experienced, and they have a hard time understanding the differences; a broad anti-marijuana prejudice still exists which inclines people to believe the worst about the effects of marijuana without any actual data to back those beliefs up (the drug war under a new guise); and, most commonly, people assume that laboratory results translate directly to real-world driving performance.

In other words, the laws against driving while stoned all rest on one unproved assumption that is, most likely, wrong: that decreased reaction time leads to unsafe driving. As mentioned, the only real-world test we have of that assumption is in the states which have legalized medical marijuana, and in those states fatalities, at least, have been reduced.

There is, though, a parallel situation with age. Laboratory tests—not to mention experience—conclusively demonstrate that one’s reaction times slow considerably as one ages; yet, paradoxically, people become safer drivers as they age. Experience helps, yes, especially the experience to compensate for diminishing skills, something marijuana smokers seem to understand inherently. Perhaps the experience comes quickly for them.

I suppose it was too much to hope for that the entire drug war should be over with in one fell swoop.

Tuesday, August 20, 2013

Equal Rights, Yeah!

The other day I was behind a bumper-sticker which read: Equal rights for all species.

My first thought was, had I run across a hitherto undetected sect of Jains, or was it some New Ager from Eugene?

My second thought was, really? E. coli? Mosquitoes? Malaria parasites? Equal rights? Is that code for “no rights”? Or do the creatures have to be a certain size to qualify?

Did they think this one through?

Monday, August 12, 2013

Acid Tongues and Tranquil Dreamers

The following is an open letter to Michael White, author of Acid Tongues and Tranquil Dreamers. An entertaining read but incredibly poorly edited. It's as if no one had any idea what commas are for.

“…The fact that, in general, only the best-adapted member of a species will survive to reproductive age.” (Michael White, Acid Tongues and Tranquil Dreamers, p. 118)

Dear Michael,

My son thoughtfully turned me onto your book and I’ve been thoroughly enjoying it. I do quibble about the style sheet used to inform your punctuation, but it’s a minor point.

A somewhat larger point is raised by the sentence reprinted above, the one about “best-adapted members,” which illustrates a common misunderstanding of Darwinian evolution. As written, the sentence implies that natural selection operates at the individual level, the same error the social Darwinists make (and, frankly, most people).

The reality is that individual survival has little or nothing to do with natural selection. In general, the further up the evolutionary chain one goes, the more members of a species survive to reproductive age. Species with massive infant-death rates—frogs, say—work on a scatter-shot principle: those who survive do so through sheer luck, the predator didn’t find them; one has enough offspring that who gets eaten is immaterial. By the time one gets to, say, zebras or people, most offspring are expected to survive.

Natural selection is done at the genomic level. What it says is that, on average, individuals who possess mutation X have a better reproductive rate than those who don’t. It’s concerned with the overall average, not the success of any particular individual. For example, a mutation might, on average, give its possessors a longer life span; but a given individual carrying it might have only one offspring before being dispatched by misfortune. Nonetheless, thanks to that one offspring, the mutation could spread throughout the species, even though it gave no benefit to the individual in whom it first occurred.
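
If it helps, the averaging is easy to see in a toy simulation. Here is a minimal sketch (my own illustration, with assumed numbers; nothing from your book): carriers of a mutation get a small edge in average reproductive success, yet any single carrier can still die childless.

```python
import random

# A toy illustration (mine, not from the book): carriers of mutation X
# get a small edge in *average* reproductive success, yet any single
# carrier can still die childless. Selection only ever sees the average.

POP = 1000           # fixed population size
GENS = 200           # generations to simulate
CARRIER_EDGE = 1.05  # carriers' relative reproductive rate (assumed)

def next_generation(carriers):
    # Each parent is drawn with probability proportional to fitness,
    # so the mutation spreads on average, not deterministically.
    weights = [CARRIER_EDGE if c else 1.0 for c in carriers]
    return random.choices(carriers, weights=weights, k=POP)

population = [True] * 10 + [False] * (POP - 10)  # ten initial carriers
for _ in range(GENS):
    population = next_generation(population)

print("carrier frequency:", sum(population) / POP)
# Run it a few times: usually the mutation sweeps toward fixation, but
# occasionally the ten founders are unlucky and it drifts to extinction.
# Selection governs the average; luck governs the individual.
```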

The confusion appears to be a conflation of pecking order and natural selection. It’s fairly understandable that, observing natural pecking orders, one thinks that the selecting done to achieve that order is the same selecting that determines the direction of evolutionary change or dominance. They are unrelated, but, unfortunately, most people equate the two, as the quoted sentence illustrates.

Pecking orders evolved not to ensure that the best genes get together, but rather to maintain order within the species. Not only did pecking orders evolve, so did infidelity, which ensures that, despite pecking orders, the gene pool will continue to be thoroughly mixed. It’s the mixing which is important, not the dominance.

Thanks for your time,

Johan Mathiesen
Portland, OR

Gay Wankers

I’d like to thank my gay friends. They finally won the right to be treated as equals. Oh sure, it’s not uniform or perfect, but it’s heading in the right direction. And I don’t know why it is, but most of my gay friends—and, good God, but there are a lot of you—are also dope smokers. I think it has something to do with Stonewall. In any event, now that the flood gates are open on gaydom, dope is funneling down the chute as well; and for that I thank you.

Eric Holder’s announcement that the Feds are going to take a more lenient approach to enforcing anti-drug laws is a baby step in the direction of a stampede. He’s in danger of being run over; fortunately, he’s not ahead of the crowd. He made his announcement less than two weeks after Uruguay legalized pot and Vicente Fox reiterated his call for Mexico to legalize all drugs. I heard exactly that argument put forth on a recent NPR show.

Because it’s not just marijuana, folks, it’s any drug you can think of. Making drugs illegal is stupid. Just plain stupid. It achieves nothing, destroys people and communities, and is frightfully expensive. People on methamphetamine are fucked-up enough already without having to steal to get their fix. I’ve no doubt that most people on meth need help, but jail is anything but help. If we’re going to spend that kind of money, we should demand bang for our buck.

But the tide has turned, and it’s all thanks to those gay wankers smoking out back. It’s about time they came out; who says they can have all the fun?

Saturday, August 10, 2013

Dept. of Further Amplification: Grazing

A while back (June 7, 2013), in a post entitled “Man the Grazer,” I discussed findings about hominin dietary change 3.5 mya. I critiqued an article which said:

“The most significant findings indicate that human ancestors expanded their menu 3.5 million years ago, adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment.”

It should be pointed out that the author of the article, not the author of the study, said that. The study’s author, Thure Edward Cerling of the University of Utah, said:

“‘We don't know exactly what they ate. We don't know if they were pure herbivores or carnivores, if they were eating fish [which leave a tooth signal that looks like grass-eating], if they were eating insects, or if they were eating mixes of all of these.’”

The article’s author went on to clarify:

“He says this because the isotope method used in the new studies cannot distinguish what parts of grasses and sedges human ancestors ate – leaves, stems, seeds and/or underground elements of the plants such as roots or rhizomes. The method also cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.”

My critique centered on the probability of hominins eating grasses directly versus indirectly. I was concerned about the statement that we had added grasses and sedges to our diet.

Well, maybe Cerling didn’t say that; maybe it was the writer’s interpretation. I couldn’t find the research being discussed, but I did find other Cerling work, although it discussed only the diets of large herbivores, not primates.

Better yet, though, I wrote him an email with my concerns and he sent a quick reply: “In our paper(s) we decided to present the data and make no interpretations as to the relative contributions of different food sources,” which is quite different from what the media presented. Since I couldn’t find his original paper, a fair comparison is impossible; nonetheless, I’ll give him the benefit of the doubt.

If for no other reason than that his replies made me think even more that he is on the right track. He wrote: “…Early hominins (6 to 4 Ma) had a C3-based diet (and were likely in the forest, even if a narrow forest), by 3.5 Ma they clearly were obtaining non-forest resources.” “Obtaining non-forest resources” is light-years away from the article’s “adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits,” to say nothing of his own “we… make no interpretations as to the relative contributions of different food sources.”

Cerling expanded that concept even more significantly: “The earliest hominins, whose diet was purely C3-based, could have lived only in the forest (including a riparian forest).” Except for the interpretation of what foods we were eating back then, the data fit perfectly with my pet theories, thank you very much; and they are contrary to conventional wisdom. While it’s not even mentioned, much less stressed in the article or, as far as I can tell, the research, Cerling’s data completely negate the standard line that we left the trees and stood up because the forest was disappearing underneath us. Cerling fairly conclusively demonstrates that, while we stood up 6.5 mya, we didn’t leave the forest cover until 3.5 mya; which fits perfectly with the contention that it was food source opportunity which drew us out of the trees and not their disappearance. It turns out, we didn’t even leave the forest for three million years.

Cerling’s evidence does not determine what humans ate way back when, it only determines where the food came from. The important information here is the food source, not the food type; where the food comes from, not what the food is.
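
A quick way to see why: the isotope work rests on simple two-endmember mixing arithmetic. Here is a back-of-the-envelope sketch (mine, with assumed textbook-style δ13C values, not numbers from Cerling’s papers); note that the C4 fraction it returns is equally consistent with eating grass seeds, grass-eating insects, or meat from grazers.

```python
# A back-of-the-envelope sketch (mine, not Cerling's actual method or
# numbers): linear two-endmember mixing for carbon isotopes. The
# endmember values below are assumed, textbook-style figures.

D13C_C3 = -26.0  # typical delta-13C of C3 (forest) plants, per mil (assumed)
D13C_C4 = -12.0  # typical delta-13C of C4 (savannah grass) plants (assumed)

def c4_fraction(d13c_diet):
    """Fraction of dietary carbon from C4 sources. Note what it cannot
    say: grass blades, grass seeds, grass-eating insects, and meat from
    grazing animals all carry the same C4 signal."""
    return (d13c_diet - D13C_C3) / (D13C_C4 - D13C_C3)

print(c4_fraction(-26.0))  # 0.0: all carbon from forest (C3) resources
print(c4_fraction(-19.0))  # 0.5: half from non-forest (C4) resources
```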

The truth is, it’s hard to change one’s eco-niche. The mere disappearance of an eco-niche is not enough to cause a change in a species’ niche; usually, the dependent species will shrink with the disappearing niche or simply disappear. As the forest shrank, so did our niche; and, it turns out, we didn’t adapt out of our niche but rather shrank with it and didn’t emerge for another three million years. Three million years is enough time for our species to get thoroughly accustomed to hunting, to the point where we could start venturing out onto the savannah where the bigger herds lived. Makes perfect sense. That scenario is simple and direct: hunters following the game.

The competing theory, that we continually changed our diet, accepting and then abandoning food sources as we moved through different environments, is complex to the point of unlikelihood. If we stood up to go hunting, we would have done it right where we were; invading new territories would have taken time, but at least we’d have had a way to eat once we got there. If we were constantly having to find new ways to eat, it would have taken us far longer, I would imagine. And why would we invade a new eco-niche if there weren’t already a food resource there we were accustomed to using? In fact, how could we? Could we have metabolized the food available on the savannah well enough to support our needs? Wouldn’t we have had to spend all day searching for food just like the other grass-eaters? Wouldn’t we have been in great danger far out on the veldt eating grasses, if we couldn’t escape by running and had no weapons with which to defend ourselves? On the other hand, if we were out in the open hunting, we would have been a match for anything. Lions, even. Which scenario is more likely?

It was one thing for us to let meat-eating take such a prominent place in our diet; we were already meat-eaters before we split from the chimps, so all we did was shift our emphasis, not adopt whole new food sources. It would be a completely different matter to have adopted entirely new food sources—in this case grasses and sedges—only to abandon them later on. That’s asking an awful lot; and apparently that theory is promulgated by people with a bias towards thinking humans started out as vegetarians. It is a religious argument, not a scientific one. Many people want humans to be natural vegetarians, so they interpret the data accordingly. Unfortunately, they make big mistakes, like interpreting the appearance of non-forest foods in our diet to mean that we took up eating grass. That’s a stretch. It’s much more likely that we were simply continuing our hunting ways and following the game into the open.

The bottom line is that the data bolster the argument that we left the trees to go hunting. They say that, when we climbed down from the trees, we didn’t leave the forest, that we kept eating food grown in the forest. They say that we didn’t start eating food grown in the open until 3.5 mya. I can live with that.

Yet even diet isn’t the entire picture. The question of where one lives is not cut and dried. “Living” usually comprises two locations: where one feeds and where one sleeps. For predators bringing food back to the nest, where one raises one’s young is the same as where one sleeps. Predator young are raised away from the hunting grounds in a protected location. When hominins began harvesting the open grasslands, they could well have kept their camps/lairs in the forest. Most likely, they were close to water; there would be no particular need to leave the water’s edge just because one had taken up hunting the savannahs.

I’m not alone in thinking so; witness Cerling’s suggestion that humans may have evolved in a “riparian forest,” which accords with the contention that, when we left the trees, we 1) began to live in semi-permanent camps, as do all predators; and 2) likely pitched those camps down by the river.

That’s what I like about the theory: it’s clean, simple, direct, and the data all fit. Hoo-boy, what more could one ask?

Friday, August 2, 2013

Prisoner's Dilemma

In the model, each prisoner is offered a deal: go free by informing on the other, putting the opponent in jail for six months. That payoff, however, comes only if the opponent chooses not to inform.

If both "prisoners" choose to inform (defection) they will both get three months in prison, but if they both stay silent (co-operation) they will both only get a jail term of one month.

Thank you, Prof. Andrew Coleman (Leicester U.). Coleman ran a study on the prisoner’s dilemma, the classic case of cooperation versus “defection,” as the author calls it. Most research has concluded that the optimal behavior for the individual is to act selfishly and defect rather than cooperate. Coleman discovered that, in the long run, selfish behavior would lead to extinction, which is why cooperation is the norm in the animal world, a reality I’ve been arguing for many, many years based solely on observation and logical thinking. Regular readers (hi, Dave) know my mantra: the chicken is the egg’s way of reproducing itself. That was one of the great intellectual epiphanies of my life. It is entirely contrary to the American gestalt, which elevates individualism and competition.
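
To make those numbers concrete, here is a minimal sketch (mine, not code from Coleman’s study) of the one-shot game described above. Whatever the other prisoner does, informing shortens your own sentence, which is exactly why the "rational" individual defects.

```python
# One-shot prisoner's dilemma with the sentences quoted above
# (in months of jail; lower is better). A sketch, not Coleman's code.

SENTENCE = {
    # (my_move, their_move): my jail time in months
    ("silent", "silent"): 1,   # mutual co-operation
    ("silent", "inform"): 6,   # I stay silent, they inform on me
    ("inform", "silent"): 0,   # I inform and walk free
    ("inform", "inform"): 3,   # mutual defection
}

for their_move in ("silent", "inform"):
    best = min(("silent", "inform"),
               key=lambda me: SENTENCE[(me, their_move)])
    print(f"if they choose {their_move!r}, my best reply is {best!r}")
# Both lines print 'inform': defection dominates the one-shot game,
# even though mutual silence (1 month each) beats mutual informing (3).
```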

Prof. Coleman stated it at somewhat greater length: "It's not individuals that have to survive, it’s genes, and genes just use individual organisms - animals or humans - as vehicles to propagate themselves"; but the message is the same. One has to understand that to understand evolution. Coleman came to his conclusion using powerful computer modeling, which is nice, but it doesn’t take precedence over simply thinking things through. Whoever came up with the line about the chicken and the egg undoubtedly did it by thinking through how genes have to work. Once we knew about genes, the era of individualism was dead.
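
And here, in miniature, is the evolutionary version: a toy of my own devising, far cruder than Coleman’s modeling, with an assumed payoff-to-fitness mapping. Strategies reproduce in proportion to how little jail time they collect; defection takes over, and everyone ends up worse off than mutual silence would have left them.

```python
import random

# A toy replicator sketch (mine; far cruder than Coleman's modeling).
# "C" = stay silent (co-operate), "D" = inform (defect). Jail time is
# mapped to fitness (assumed mapping): shorter sentence, more offspring.

SENTENCE = {("C", "C"): 1, ("C", "D"): 6, ("D", "C"): 0, ("D", "D"): 3}

def fitness(months):
    return 6 - months  # assumed payoff-to-fitness conversion

def next_generation(pop, size=1000):
    random.shuffle(pop)
    players, weights = [], []
    for a, b in zip(pop[::2], pop[1::2]):  # random one-shot pairings
        players += [a, b]
        weights += [fitness(SENTENCE[(a, b)]), fitness(SENTENCE[(b, a)])]
    return random.choices(players, weights=weights, k=size)

pop = ["C"] * 500 + ["D"] * 500
for _ in range(50):
    pop = next_generation(pop)

print("cooperators left:", pop.count("C"))
# Defection fixates, and then everyone serves three months instead of
# one: individually rational, collectively worse off. Sustained
# co-operation needs repeated play, kinship, or gene-level bookkeeping,
# none of which the one-shot game allows.
```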

Except in America.

Couple this result with the finding that our brain makes its decisions on a course of action (move that muscle, think that thought) before we are aware of what those decisions are—i.e., eliminating the possibility of free will—and one starts to understand how our existence as individuals is illusory: we are merely expressions of the species. We are how the species lives and propagates. We do nothing; the species does it all, up to and including our thinking. One can understand that, but one cannot affect it.

Which is why giving people credit or castigating them for their behavior is absurd. No one is responsible for what they do; not you, not me, not your mother. One has to question the entire system of rewards and punishment: who is one rewarding or punishing? Why? What does one expect to gain by rewarding or punishing? How do we reward and punish?

The current system is to accumulate all you can and defend it with arms. That’s the selfish scenario; that’s the American gestalt. It’s based on the theory that the individual is the most important; and if only one can survive, the species or the individual, it’s more important that the individual survive, even if it means extinction of the species. It doesn’t take a whole lot of thinking to see that such a scenario would be disastrous for the species: extinction’s about as bad as it gets. It’s called capitalism.

Okay, we’ve seen what “defection” will do.  Anyone care for cooperation? Or is that un-American?

Thursday, August 1, 2013

Living Wage

There is a debate going on in this country about “living wages.” The wages in question aren’t all that living, but they would be better than what we have now, raising the national base rate from seven-something per hour to fifteen per hour.

The primary argument against it is that it could cost jobs. The reality is that it has minimal effect, and it would probably have no effect if it were uniformly applied nationally. Everybody’s boat is raised equally and the same number of jobs is required to keep the economy running, so the net effect is no job loss. What there is instead is inflation, as prices are raised so that the owners’ net income percentage remains the same. The long-run solution, of course, is not to raise the minimum but to lower the maximum. Oh, that’s right, we don’t have a cap. The American economic theory is that, if one doesn’t have the sky as the limit, there’s no incentive to create anything.
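
For what it’s worth, here is the pass-through arithmetic in miniature (all numbers made up for illustration): if owners raise prices just enough to keep their margin percentage constant, the wage hike reappears as a price increase rather than as lost jobs.

```python
# Illustrative pass-through arithmetic (made-up numbers, not real data):
# owners raise revenue just enough to keep their profit margin constant.

old_wage, new_wage = 7.25, 15.00   # hourly minimum, before and after
hours       = 40                   # weekly hours per worker (assumed)
other_costs = 400.0                # weekly non-labor costs (assumed)
margin      = 0.15                 # profit margin to be held constant

def revenue_needed(hourly_wage):
    # revenue such that profit equals margin * revenue after all costs
    return (hourly_wage * hours + other_costs) / (1 - margin)

rise = revenue_needed(new_wage) / revenue_needed(old_wage) - 1
print(f"prices must rise about {rise:.0%} to hold the margin")
# About 45% with these numbers; the smaller labor's share of total
# costs, the smaller the rise. Inflation, not job loss.
```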

The conservative solution to low wages that I’ve heard several times—most recently from David Newmark on BBC radio—is to increase training.

Now, it doesn’t bother me that such a nonsensical solution is put forward, but it does bother me that the interviewer didn’t question it. The solution doesn’t address the issue being discussed at all. By Mr. Newmark’s logic, if, say, a fry cook at McDonald’s were to go out and take a couple of courses in computer programming, McDonald’s would automatically raise his or her wages. That’s all Newmark suggested was necessary: he said that fry cooks at McDonald’s don’t receive enough money because they aren’t highly trained. He didn’t suggest that the training should be in being a great fry cook or in anything else in particular, simply that more training was needed in order for wages to rise.

Without putting words in Newmark’s mouth, I’m guessing that what he meant was that, if this fry cook at Mickey-D’s got trained as, say, a computer programmer, he or she might qualify for a better job. Which could well be true, but that’s not the issue; the issue is the wages of fry cooks at McDonald’s, not how to get people out of being fry cooks at McDonald’s. The argument is that fry cooks at McDonald’s should receive a living wage just like anyone else.

Well, Jesus, what a radical idea that is. What’s more important: an apartment in Dubai or that fry cook having enough money to feed his or her baby? Dubai, all the way, baby.

But if you’re interviewing one of these guys, nail him to the cross, okay? The issue is not pitiful wages, the issue is income disparity.

Thanks.