Wednesday, November 27, 2013

Christopher Hitchens

I miss Christopher Hitchens, I do. He was a pit bull who would give no quarter. He'd get an idea by the throat and shake it to death. He was fun to watch. He was erudite to a fault and wasted no time letting his opponents know precisely that. Barrel-chested with thinning hair often bunched up on top of his head like an afterthought, he could go through nearly a pack of cigarettes during a debate or lecture, “no smoking” signs be damned. It was that smoking that killed him.

During a debate he would lean back in his chair and turn his body away from his opposition as if even being in the same room with a believer was slightly discomforting. Rarely would he look them in the eye as his rapier wit and condescending tone delivered death by a thousand cuts. He was obsessed.

Sometimes, that obsession would blind him to the bigger picture. His concentrated antipathy to religion blocked him from seeing that often religion is the instrument of oppression, not the origin of it. He failed to see religion as a tool for the more prosaic cause of greed. He never once mentioned the herd instinct—more delicately referred to as “peer pressure”—which underlies the binding power of religion.

The power of the herd instinct, for that matter, is overlooked by all the Four Horsemen of the New Atheism. Each one of them makes the mistake of attacking religion from the point of logic, which is all well and good and terribly easy but of limited effectiveness. The value in the Four Horsemen is not that they spread enlightenment through the land—it's not enlightenment which wins the day—it's that they spread the impression of the popularity and hipness of atheism. It's not important to deliver stunning arguments and inevitable conclusions; it's important to give the impression that the opposition are anachronistic buffoons, hopelessly out of date. That they're intellectually bankrupt is a bonus and a not terribly convincing argument to the hoi polloi; it's fashion that counts.

This was apparent when Hitchens debated the Oxford don, Alister McGrath. He (Hitchens) consistently characterized violence such as suicide bombings as religious conflict when, in truth, it's territorial contest using religion as a motivator. The problem with religion is not that it's wrong; it's that it's a glue used to bind people to corrupt causes. Hitchens made reference to that but failed to see its centrality. The problem is not religion; it's the herd instinct. We needn't teach people about the dangers of religion; we need to teach the dangers of blind obedience. One can readily see why the state doesn't want to get into that business; it would just as soon teach blind obedience.

The great power of Christianity is not in its stories or explanation of the world around us; it's in its fanatical obsession with power structure, as evidenced in the first five Commandments. That, plus its promise of salvation after one's death, made it the ideal religion for governments and capitalism. Despite the myth of Christ and the money-changers, the operating sentiment here is “render unto Caesar what is Caesar's; render unto God what is God's”; “Caesar,” being interpreted to mean, “anyone above you in authority.” Constantine was no fool; there were scores of religions competing to replace Roman paganism when he selected Christianity (although, only half-heartedly, it must be admitted) to be the new state religion; not to mention that he moved the seat of power to Byzantium.

I wish Hitchens had pressed this point more, the utility of religion, rather than narrowly focusing on its falsehood.

And I wish he'd better understood that morality is a natural animal trait that evolved along with the rest of what we are, such as love, loyalty, wariness, fear, anger, etc. Those emotions are not restricted to us; they didn't first appear with the advent of modern humans. Hitchens badgered McGrath over what sort of morality humans had before Moses brought the Commandments down from Mt. Sinai; and McGrath responded with the notion of “natural morality,” which he ascribed to pre-Christians, as well as those who don't know of Christianity (this during the debate, when McGrath argued for the necessity of god-given laws); but Hitchens failed to clinch the argument by pointing out that “natural morality” was the source of the Commandments, not god. He was so caught up in proving the evil nature of religion that he skipped over its organic nature, how it grew, not just as explanation for the world around us, but because of its cohesive properties in holding communities together. Hitchens failed to see that attacking the intellectual framework of religion was tilting at windmills; he needed to replace the very ground they stood on.

Wednesday, October 9, 2013

Manual Dexterity and Other Mexicans

From rawstory.com, 10/9/13

"Scientists claim that big toes and thumbs evolved in parallel"

“New research from the RIKEN Brain Science Institute indicates that, contrary to current belief, early hominids developed finger dexterity before they became bipedal. The long-standing theory that bipedalism “freed up” the proto-human hand for using tools has been overturned by brain imaging and fossil evidence that indicates that the quadruped brains possess the same potential for manual dexterity as human.”

“‘In early quadruped hominids, finger control and tool use were feasible, while an independent adaptation involving the use of the big toe for functions like balance and walking occurred with bipedality,’ the authors wrote.”

•••

If you’ve read this blog with any regularity, you’re aware of my theory that humans descended from the trees to go hunting. The study mentioned above supports that contention. Mainly, it supports the observation that our ape and simian cousins also have finger dexterity; but it’s fairly obvious that one has to have said dexterity in order to wield weapons, which both we and chimps do. I confess I wasn’t aware that the field thought finger dexterity came after bipedalism; I’d never heard that. My question would be, having observed other primates, from where did that idea arise? It’s not self-evident.

Ah, the mysteries of science.

Tuesday, October 8, 2013

Not So Fast, Buddy

He began by complaining about a potential increase in the national minimum wage; he said it would cost a lot of jobs. When I pointed out that experience has shown otherwise, it didn't faze him. He said that a high minimum wage prevented him from hiring a couple/three college kids over the summer. When I argued that raising the minimum only feeds inflation and that what we needed was, not an increase in the minimum, but a severe decrease of the maximum, he wasn't so sure that would work. “I'd just stop creating jobs,” he said.

It's a common mantra: if there wasn't the incentive of wealth, most people would sit around doing nothing; at least that's what they tell me. If they couldn't make a bunch of money, they wouldn't do it. Furthermore, they argue that if it weren't for the incentive of money, most of the comforts of today wouldn't be here; people would stop creating. Our modern society is here thanks to capitalism. The alternative, they suggest, is Stalinist communism; which, they further suggest, is pretty much what's happening in Europe, and look at Greece. Quite how that all gets put together is somewhat mysterious.

It raises many interesting questions, not the least of which being: is it only a disparity in income which makes people creative? Isn't that what's being argued, that the opportunity to make a bunch of money is the spur to creativity? It's not just about being able to lead the good life and being happy with everyone else leading the good life, too. “Leading the good life” implies someone is leading the bad life, or, at the very least, the average, everyday life. “Leading the good life” knows that there's not a Mercedes and a home in the Hamptons for everyone.

Just imagine for a moment if everyone had the same access to everything, the same trips to the Alps, the same enrollment in Harvard, the same sailboat. Would there be anything wrong with that? No room for all those sailboats and everyone at Harvard, huh? But if in a perfect world everyone had the same access to all resources, would that bother the rich who now have access to limited resources? Is it important for the rich to have something that everyone else doesn't have? Is it important to have higher status and privilege than others? I would guess so.

Then the question becomes, are all—or, at least, most—advances in modern civilizations—ours, say—created by people wanting to have higher than average status; and, if having higher than average status wasn't possible, would people stop being creative? A lot of people think so.

I think back to Ugak. You remember Ugak; he's the guy who around two-and-a-half million years ago, give or take a million, discovered that, not only did this particular kind of rock make a sharp edge when it broke, you could control how the rock broke; which started a whole industry of people whacking on rocks to make cutting edges. Wasn't that handy? But, you know? I don't remember Ugak getting anything special for that discovery other than the fine robe his wife made him. Thank God, she had good teeth.

And when Mugapup figured out how to make needles out of cactus spines, no one had a special dinner in her honor, that I recall. We really should have done something.

So, I'm wondering, when did it start that people had to get special status or they wouldn't share their discoveries? I know, I know, everyone wants the best cut of meat; but when did we decide that some people would only get leftovers and some people wouldn't get any meat at all? When did we start that?

And remember when everyone simply lived in their own house, their own wigwam, their own yurt? When did it start that, once you got your house together, someone else owned it and you had to give them stuff all the time in order to live in your house? When did we think that was a good idea?

Not to mention, remember that rock pile from which we get all those groovy rocks that break so nicely? When did we say someone could “own” every rock that came out of that pile and we'd have to give them some of our stuff if we wanted one of those rocks? Was that such a good idea? How come they get to decide who gets a rock or not? Or how come they hoard all the good rocks and give us the tailings? Who thought this was a good way to do things?

When did we decide we were no longer one big family that had better stick together? When did we decide that it was all right if some people were less family than others?

I will grant that rampant capitalism feeding an unbridled consumerism has shaped the world we live in. I'm not sure we're leading the best possible life we could be leading as a species, thanks to that consumerism/capitalism, but there is a lot of luxury out there. The fact that massive amounts of our resources have been given away for the creation of those luxury items to such an extent that much of our species lives in dire poverty doesn't seem to get accounted for when enumerating the benefits of capitalism. The fact that the next iteration of the iPhone is more important to us than ending slavery speaks volumes about our culture. Would it hurt us if those people who make continually altered products that we simply have to have weren't inspired to keep on doing it? Do we think that science and advancement of the species has only happened under capitalism and would cease if capitalism ceased? Are we that naïve?

Poor us.

Yet, as the wife of my conversation partner chimed in, equitable resource distribution “will never happen.” The implication being, I suppose, that, if it will never happen, why bother? Ah, yes, I think, and universal cessation of violence will never happen, either, so let's forget about that, too. And, yes, we may never end slavery, so we might as well buy a few. One, at least, would be handy.

Waiting For a Human

There is no crisis in education in America. American schools do precisely what they're designed to do and for the most part they do it well. After all, they've given us everything we have. You may not like everything they give us, but they give us what they're designed to give us: our country, our cities, our people. Who could ask for anything more?

Apparently lots of people. I just saw an entire documentary, Waiting For Superman (2010), that pressed that very point: we could do a lot better, they thought. The question is better for what?

The gist of the documentary was twofold: how to pull children out of poverty and how to supply America with scientists. They focused on a half-dozen or so inner city kids all trying to get out of their holes by trying to get into charter schools designed to do just that, extract kids from poverty. At the same time, they stressed how we are “falling behind” other countries in the production of scientists; they pointed out how poorly Americans do at math, which they considered a significant indicator. They pointed out how America isn't able to fill all of its scientific needs and has to import talent from abroad. The hope is to create schools that will pull children out of poverty by qualifying them as scientists in particular. Noble goals, if left unexamined.

This is somewhat in contrast to a recently viewed Dan Rather report which showed convincingly that in most fields we're overstuffed with scientists who are unable to find jobs. They find themselves competing with hundreds of people for single positions and end up working at McDonald’s, where the ticket out is a higher education. Tell that to the PhD. It turns out the scientific “needs” are concentrated in specific fields, such as petroleum engineering or computer development. Turns out we need lots of those, just not so many microbiologists.

Waiting For Superman never asked why we need those scientists other than competitively; we need more scientists (read: certain engineers) to sell toothbrushes to China (their example). We need those engineers for America to be competitive globally.

If we're to believe the movie, the theory is to extract gifted students out of inner city schools and find them good paying jobs in the marketplace. Sounds great.

But forget about the inner cities; they'll have to fend for themselves. Because right now the schools are doing an excellent job of preparing kids for life in the inner city. The documentary talked at length about inner city high schools being “dropout factories,” as if that were a bad thing. One principal told of how two-thirds of his students never graduated. He thought that was terrible. He never made the connection that two-thirds of the adults in his community didn't have jobs, that the school was educating students for the reality of their communities. To be sure, the schools aren't producing employees for Microsoft (Bill Gates is featured prominently in the film), but they're producing perfect candidates for life on the streets. The schools are teaching them that the larger society doesn't care about them. The schools are teaching them that there's no point in learning, that there's no place to go. They do an excellent job of that.

The movie never saw the irony of how the parents of the kids they focused on were themselves un- or underemployed, how the schools are training their children to be just like them. And why not, that's the real world, that's the world those kids will inhabit: the world where no one cares, the underworld. Why should we expect the world inside their classroom to be different from the one outside their classroom?

And those jobs that are supposed to lift a select few from the cesspool and send them to the suburbs and nice lawns? Who needs all those engineers? Microsoft, yes! Because the communities sure don't. Your average neighborhood doesn't need a lot of software scribblers, God only knows there are enough of them in the world. Your average inner city dropout doesn't need a new app to tell him where to buy sneakers. Your average inner city dropout doesn't need a three-hundred dollar pair of sneakers.

We have enough toys. I realize toys sell well, but they only cover so much. As it is, we have enormous companies sucking up millions of engineers designing toys that gobble up almost all our resources; if it's not toys, it's weapons. We have very skewed priorities; but those priorities were on full display in Waiting For Superman: sell more toothbrushes.

Not one single person, not one single educator, not one single principal, not one single teacher, not one single politician, not one single professor, said, “We have to educate our students to transform their communities.” No one. Nobody suggested that the job of schools might be to educate people to take control of their lives, their neighborhoods, their communities. The entire movie was based on the premise of extracting people from their conditions, not changing the fundamental conditions which create the poverty in the first place. The only solution offered was how to make better employees for American companies, nobody gave one thought to the fate of the communities. And then they wonder why the schools don't produce so many engineers. Engineers for what? You don't need to be an engineer to sell dope or your body.

Schools reflect their communities. The dropout factories the movie highlighted weren’t randomly distributed throughout the country. Every single one was located in a high-poverty district; they were the worst of the worst. Yet, the discussion with everyone revolved around improving the schools’ education packages and graduating higher rates of kids. For jobs that don’t exist. Or jobs in a few select fields dealing largely with consumer goods. But not in their neighborhoods, to be sure. There was a fantasy expressed that schools could become beacons of hope delivering kids from poverty, that they could be magic carpets that would whisk students away to new lives. Leaving, of course, the old war-torn neighborhoods behind to fester as they always had; never understanding that the pocket schools they focused on could never be anything but a band-aid for a lucky few, while the regular old dropout factories were going to continue as they always had, too. The schools will always reflect their communities; they have no choice. If you want the schools to be better, you have to improve the communities, the entire communities.

We could teach our children how to take control, we could. We don't want to. We could transform those neighborhoods. We don't want to. We want things just the way they are. We like it this way. Why, did you know that anyone in America can grow up to be President? It's true, look at Barack Obama. Anyone can pull themselves up by their bootstraps and make billions of dollars, look at Bill Gates. Okay, bad example, but others have risen from poverty to be captains of industry, surely there have been. That's what we train for, that's what we tell our children every day: it's all about me. I can be king. We teach that over and over again; and many, if not most, of us believe it. We believe it so much that we take our failures personally, never realizing that only one person can be king at a time. "Oh, that's okay, just take some more antidepressants and go to work."

Teaching people to take control of their lives? It's dangerous. People may decide they don't like rampant capitalism. People might decide they want to divide things up differently. People might decide that it's not good policy to allot resources by luck. You can just never tell what people will do given the wherewithal. Better to not give them the wherewithal.

It's not the teachers; it's not the facilities. Making them better won't address the fundamental problem: inequality. Not inequality in education, inequality in life. “Maybe you should just take some more antidepressants.”

The movie was very earnest. Bill Gates was very earnest. I'm sure he has every good intention of making better engineers and helping as many people get out of poverty as possible. I'm sure he'd like to see schools produce students closer to the Finnish model. I'm also pretty sure he doesn't realize that it's not the structure of Finnish schools which makes them a success, it's the structure of Finnish society which designs the schools to be a success. The basic message of Finnish schools is, if you take control of your life, we'll provide the resources you'll need. After that, they let you decide. Contrary to our volumes of directives, their school policy is contained on a single sheet of paper. Taciturn bunch. Their system is based on trust; ours is based on "no child left behind." Talk about different approaches.

In the end, watching the movie I was just sad. Sad, of course, for the people caught up in the grind of poverty and the lives of hopeless desperation; but sad, as well, that, despite the earnestness of the participants, they had no clue. Bill Gates has no idea he is being nothing but a shill for the digital world. He is looking for recruits; he’s not looking to transform neighborhoods. The Black Panthers were looking to transform neighborhoods; look what happened to them.

Sad because of all the wasted effort and resources. Sad because there’s no will to end it. Sad because the only mantra we hear is “jobs”; and every high school student in America—forget about the inner city kids—knows that jobs are no sure thing anymore. Jobs will only be a solution for some; the rest will have to be on the dole or go into a life of crime. Hey, that’s what they do now, right? If we’re to follow the recommendations of Waiting For Superman, everything is hunky-dory. All we need are a few more mathematicians.

But those neighborhoods? Don’t go into them at night or the dropouts will get you. Boo!

P.S. I’ve avoided talking about another aspect of the movie: it drubs the unions and they spend a lot of time arguing about tenure and performance-based pay, etc. Those are side issues unrelated to the task of running community schools. The problem here isn’t that the kids aren’t doing well enough on standardized tests, the problem is that what they’re learning is of marginal value to them. The debate about teachers is a red herring diverting attention from the real problem: the neighborhoods.

Wednesday, September 25, 2013

Old Guys

Today's news brings research reports that A) warfare helped propel the creation of large, complex societies; and B) that the human population explosion began before the advent of farming. That got me thinking about Göbekli Tepe, that pre-agricultural temple complex in Turkey. Its construction alone necessitated the organization of large groups of hunter-gatherers. The archeologist in charge of the project put it this way: "First, temples; then, cities."

The warfare theory proponents didn't address religious issues, but, obviously, religion played a big part in uniting early peoples. It suggests, perhaps, an early marriage between warfare and religion; one which continues to this day.

Thursday, August 29, 2013

Drivin' That Train, High on Mary Jane

The states are in a tizzy these days trying to determine the proper amount of THC one can have in one's system before one is considered impaired.

Perhaps the most interesting part of the discussion is that no one is questioning the basic assumption: does THC impair one’s driving abilities? The levels allowed are being determined without any evidence that there is a causal relationship between unsafe driving and marijuana consumption.

Actuarial tables and state driving statistics indicate that marijuana smokers are safer than average drivers. This could be the result of one of two alternatives: either people who tend to be safer drivers are more likely to smoke marijuana than those who drive less safely, or smoking marijuana tends to make one a safer driver. The most relevant existing piece of data informing that question is that states that have adopted a medical marijuana program have seen an average reduction in traffic fatalities of 11%. The implication being that an increased percentage of marijuana smokers on the road creates safer conditions.

Why, then, are the states rushing to find acceptable THC levels above which one is considered impaired? And how do they determine that level?

The “why” is more complicated; the “how” is easier: they don’t. There has been no causal relationship demonstrated between marijuana consumption and impaired driving. The assumption of impairment is based on a number of factors: people with a limited experience with cannabis tend to equate a cannabis high with alcoholic inebriation, which, by contrast, most of them have experienced, and they have a hard time understanding the differences; a large anti-marijuana prejudice still exists which inclines people to believe the worst about the effects of marijuana without having actual data to back those beliefs up (the drug war under a new guise); and most commonly, people conflate laboratory results with actual driving conditions, making the assumption that lab results directly translate to driving performance.

In other words, the laws against driving while stoned are all based on unproved assumptions that are, most likely, wrong. The basic, untested assumption is that slowed reaction times lead to unsafe driving. As mentioned, the only real-time test we have of that situation is in states which have legalized medical marijuana, and in those states fatalities, at least, have been reduced.

There is, though, a parallel situation with age. Laboratory tests—not to mention experience—conclusively demonstrate that one’s reaction times slow considerably as one ages; yet, paradoxically, people become safer drivers as they age. Experience helps, yes, especially the experience to compensate for diminishing skills, something marijuana smokers seem to understand inherently. Perhaps the experience comes quickly for them.

I suppose it was too much to hope for that the entire drug war should be over with in one fell swoop.

Tuesday, August 20, 2013

Equal Rights, Yeah!

The other day I was behind a bumper-sticker which read: Equal rights for all species.

My first thought was, had I run across a hitherto undetected sect of Jains, or was it some New Ager from Eugene?

My second thought was, really? E. coli? Mosquitoes? Malaria parasites? Equal rights? Is that code for “no rights”? Or do the creatures have to be a certain size to qualify?

Did they think this one through?

Monday, August 12, 2013

Acid Tongues and Tranquil Dreamers

The following is an open letter to Michael White, author of Acid Tongues and Tranquil Dreamers. An entertaining read but incredibly poorly edited. It's as if no one had any idea what commas are for.

“…The fact that, in general, only the best-adapted member of a species will survive to reproductive age.”
        pg. 118, Acid Tongues and Tranquil Dreamers, Michael White

Dear Michael,

My son thoughtfully turned me onto your book and I’ve been thoroughly enjoying it. I do quibble about the style sheet used to inform your punctuation, but it’s a minor point.

A somewhat larger point is illustrated by the sentence reprinted above, the one about “best-adapted members,” which itself illustrates a common misunderstanding of Darwinian evolution. As written, the sentence implies that natural selection is done at the individual level, the same error the social Darwinists make (and, frankly, most people).

The reality is that individual survival has little or nothing to do with natural selection. In general, most members of a species survive until reproductive age as one mounts the evolutionary chain. Those species designed with massive infant-death rates—frogs, say—are designed with the scatter-shot effect in mind: those who survive do so because of sheer luck, the predator didn’t find them; one has enough offspring so that who gets eaten is immaterial. By the time one gets to, say, zebras or people, most offspring are expected to survive.

Natural selection is done at the genomic level. What it says is that, on average, individuals who possess mutation X have a better reproductive rate than those who don’t possess it. It’s concerned about the overall average, not the success of any particular individual. For example, an individual might have a mutation which would allow the possessors, on average, a longer life span; but that individual might have only one offspring before being dispatched by misfortune. Nonetheless, thanks to that one offspring, the mutation could be spread throughout the species, even though it gave no benefit to the individual in whom the mutation occurred.
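
If it helps to see that averaging in action, here is a toy simulation, a minimal sketch of a haploid Wright-Fisher model in Python; the population size, the 5% reproductive edge, and the number of runs are all numbers I made up for illustration, not anything taken from your book:

import random

POP_SIZE = 500      # constant population size (assumed)
ADVANTAGE = 1.05    # carriers of mutation X leave 5% more offspring on average (assumed)
GENERATIONS = 500

def run_once():
    # Start with a single carrier: the individual in whom the mutation occurred.
    # It contributes to the next generation like anyone else and is then gone;
    # its personal fate never matters again.
    carriers = 1
    for _ in range(GENERATIONS):
        freq = carriers / POP_SIZE
        # Each slot in the next generation picks a carrier parent with a
        # probability weighted by the carriers' better average reproduction.
        p = freq * ADVANTAGE / (freq * ADVANTAGE + (1 - freq))
        carriers = sum(random.random() < p for _ in range(POP_SIZE))
        if carriers in (0, POP_SIZE):   # mutation lost or fixed
            break
    return carriers / POP_SIZE

runs = [run_once() for _ in range(200)]
print("fixed:", sum(f == 1.0 for f in runs), "lost:", sum(f == 0.0 for f in runs))

In most runs the mutation is simply lost, because the founder and its early descendants are unlucky; but it sweeps the population far more often than a neutral mutation would, purely because of the better average reproductive rate. The selection is on the gene, not on any individual.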

The confusion appears to be a conflation of pecking order and natural selection. It’s fairly understandable that, observing natural pecking orders, one thinks that the selecting done to achieve that order is the same selecting that determines the direction of evolutionary change or dominance. They are unrelated, but, unfortunately, most people equate the two, as the quoted sentence illustrates.

Pecking orders evolved, not to insure that the best genes get together, but rather to maintain order within the species. Not only did pecking orders evolve, so did infidelity, which insures that, despite pecking orders, the gene pool will continue to be thoroughly mixed. It’s the mixing which is important, not the dominance.

Thanks for your time,

Johan Mathiesen
Portland, OR

Gay Wankers

I’d like to thank my gay friends. They finally won the right to be treated as equals. Oh sure, it’s not uniform or perfect, but it’s heading in the right direction. And I don’t know why it is, but most of my gay friends—and, good God, but there are a lot of you—are also dope smokers. I think it has something to do with Stonewall. In any event, now that the floodgates are open on gaydom, dope is funneling down the chute as well; and for that I thank you.

Eric Holder’s announcement that the Feds are going to take a more lenient approach to enforcing anti-drug laws is a baby step in the direction of a stampede. He’s in danger of being run over; fortunately, he’s not ahead of the crowd. He made his announcement less than two weeks after Uruguay legalized pot and Vicente Fox reiterated his call for Mexico to legalize all drugs. I heard exactly that argument put forth on a recent NPR show.

Because it’s not just marijuana, folks, it’s any drug you can think of. Making drugs illegal is stupid. Just plain stupid. It achieves nothing, destroys people and communities, and is frightfully expensive. People on methamphetamine are fucked-up enough already without having to steal to get their fix. I’ve no doubt that most people on meth need help, but jail is anything but help. If we’re going to spend that kind of money, we should demand bang for our buck.

But the tide has turned, and it’s all thanks to those gay wankers smoking out back. It’s about time they came out; who says they can have all the fun?

Saturday, August 10, 2013

Dept. of Further Amplification: Grazing

A while back (June 7, 2013), in a post entitled “Man the Grazer,” I discussed findings about hominin dietary change 3.5 mya. I critiqued an article which said:

“The most significant findings indicate that human ancestors expanded their menu 3.5 million years ago, adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment.”

It should be pointed out that the author of the article, not the author of the study, said that. The study’s author, Thure Edward Cerling of the University of Utah, said:

“‘We don't know exactly what they ate. We don't know if they were pure herbivores or carnivores, if they were eating fish [which leave a tooth signal that looks like grass-eating], if they were eating insects, or if they were eating mixes of all of these.’”

The article’s author went on to clarify:

“He says this because the isotope method used in the new studies cannot distinguish what parts of grasses and sedges human ancestors ate – leaves, stems, seeds and/or underground elements of the plants such as roots or rhizomes. The method also cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.”

My critique centered on the probability of hominins eating grasses directly versus indirectly. I was concerned about the statement that we had added grasses and sedges to our diet.

Well, maybe Cerling didn’t say that; maybe it was the writer’s interpretation. I couldn’t find the research that was being discussed; but I did find other Cerling work, although it only discussed diets of large herbivores, not primates.

Better yet, though, I wrote him an email with my concerns and he sent a quick reply. He wrote: “In our paper(s) we decided to present the data and make no interpretations as to the relative contributions of different food sources,” which is quite different from what the media presented. Since I couldn’t find his original paper, a fair comparison is impossible; nonetheless, I’ll give him the benefit of the doubt.

If for no other reason than his replies made me think even more that he is on the right track. He wrote: “…Early hominins (6 to 4 Ma) had a C3-based diet (and were likely in the forest, even if a narrow forest), by 3.5 Ma they clearly were obtaining non-forest resources.” “Obtaining non-forest resources” is light-years away from “adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits”; much less, “we… make no interpretations as to the relative contributions of different food sources.”

Cerling expanded that concept even more significantly: “The earliest hominins, whose diet was purely C3-based, could have lived only in the forest (including a riparian forest).” Except for the interpretation of what foods we were eating back then, the data fit perfectly with my pet theories, thank you very much; and they are contrary to conventional wisdom. While it’s not even mentioned, much less stressed in the article or, as far as I can tell, the research, Cerling’s data completely negate the standard line that we left the trees and stood up because the forest was disappearing underneath us. Cerling fairly conclusively demonstrates that, while we stood up 6.5 mya, we didn’t leave the forest cover until 3.5 mya; which fits perfectly with the contention that it was food source opportunity which drew us out of the trees and not their disappearance. It turns out, we didn’t even leave the forest for three million years.

Cerling’s evidence does not determine what humans ate way back when, it only determines where the food came from. The important information here is the food source, not the food type; where the food comes from, not what the food is.

The truth is, it’s hard to change one’s eco-niche. The mere disappearance of an eco-niche is not enough to cause a change in a species’ niche. Usually, the dependent species will shrink with the disappearing niche or simply disappear. As the forest shrank, so did our niche; and, it turns out, we didn’t adapt out of our niche but rather shrank with it and didn’t emerge for another three million years. And three million years is enough time for our species to get thoroughly accustomed to hunting to where we could start venturing out onto the savannah where the bigger herds lived. Makes perfect sense. That scenario is simple and direct: hunters following the game. The competing theory of our continually changing our diet, accepting and then abandoning food sources as we moved through different environments, is complex to the point of unlikelihood. If we stood up to go hunting, we would have done it right where we were, and it would have taken us time to invade new territories, but at least we’d have a way to eat once we got there. If we were constantly having to find new ways to eat, it would have taken us a lot longer, I would imagine. And why would we invade a new eco-niche if there wasn’t already a food resource there that we were accustomed to using? In fact, how could we? Could we have metabolized the food available on the savannah well enough to support our needs? Wouldn’t we have had to spend all day searching for food just like the other grass-eaters? Wouldn’t we have been in great danger far out on the veldt eating grasses, if we couldn’t escape by running and had no weapons with which to defend ourselves? On the other hand, if we were out in the open hunting, we would have been a match for anything. Lions even. Which scenario is more likely?

It was one thing for us to let meat-eating take such a prominent place in our diet, but we were already meat-eaters before we split from the chimps; all we did was shift our emphasis; we didn’t adopt whole new food sources; but it would be a completely different matter for us to have adopted entirely new food sources—in this case grasses and sedges—only to abandon them later on. That’s asking an awful lot; and apparently that theory is promulgated by people who have a bias towards thinking humans started out as vegetarians. It is a religious argument, not a scientific one. Many people want humans to be natural vegetarians so they interpret data accordingly. Unfortunately, they make big mistakes. Like interpreting the appearance of non-forest foods in our diet to mean that we took up eating grass. That’s a stretch. It’s much more likely that we were simply continuing our hunting ways and following the game into the open.

The bottom line is that the data bolster the argument that we left the trees to go hunting. They say that, when we climbed down from the trees, we didn’t leave the forest, that we kept eating food grown in the forest. They say that we didn’t start eating food grown in the open until 3.5 mya. I can live with that.

Yet even diet isn’t the entire picture. The question of where one lives is not cut and dried. “Living” usually comprises two locations: where one feeds and where one sleeps. For predators bringing food back to the nest, where one raises one’s young is the same as where one sleeps. Predator young are raised away from the hunting grounds in a protected location. When hominins began harvesting the open grasslands, they could well have kept their camps/lairs in the forest. Most likely, they were close to water; there would be no particular need to leave the water’s edge just because one had taken up hunting the savannahs.

I’m not alone; witness Cerling’s suggestion that humans may have evolved in a “riparian forest,” which accords with the contention that, when we left the trees, we 1) began to live in semi-permanent camps as do all predators; and 2) those camps were likely down by the river.

That’s what I like about the theory: it’s clean, simple, direct, and the data all fit. Hoo-boy, what more could one ask?

Friday, August 2, 2013

Prisoner's Dilemma

In the model, each person is offered a deal for freedom if they inform on the other, putting their opponent in jail for six months. However, this scenario will only be played out if the opponent chooses not to inform.

If both "prisoners" choose to inform (defection) they will both get three months in prison, but if they both stay silent (co-operation) they will both only get a jail term of one month.

Thank you, Prof. Andrew Coleman (Leicester U.). Coleman ran a study on the prisoner’s dilemma, the classic case of cooperation or “defection,” as the author calls it. Most research has concluded that the optimal behavior for the individual is to act selfishly and defect rather than cooperate. Coleman discovered that, in the long run, selfish behavior would lead to extinction, which is why cooperation is the norm in the animal world, a reality I’ve been arguing on behalf of for many, many years based solely on observation and logical thinking. Regular readers (hi, Dave) know my mantra: the chicken is the egg’s way of reproducing itself. That was one of the great intellectual epiphanies of my life. It is entirely contrary to the American gestalt which elevates individualism and competition.

Prof. Coleman stated it at somewhat greater length: "It's not individuals that have to survive, it’s genes, and genes just use individual organisms - animals or humans - as vehicles to propagate themselves"; but the message is the same. One has to understand that to understand evolution. Coleman came to that conclusion using powerful computer modeling, which is nice, but it doesn’t take precedence over simply thinking things through. Whoever came up with the line about the chicken and the egg undoubtedly did it by thinking about the necessity of how genes have to work. Once we knew about genes, the era of individualism was dead.

Except in America.
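
For anyone curious what that kind of computer modeling looks like, here is a minimal sketch of an evolutionary, repeated prisoner's dilemma. To be clear, this is my own toy, not Prof. Coleman's actual model: the scores come from the jail terms quoted above (six minus the months served, so bigger is better), while the three strategies, the number of rounds, and the replicator update are my assumptions.

# Per-round scores: (my_move, opponent_move) -> my score.
# 'C' = stay silent (cooperate), 'D' = inform (defect).
PAYOFF = {
    ('C', 'C'): 5,   # both silent: one month each
    ('C', 'D'): 0,   # I stay silent, opponent informs: six months for me
    ('D', 'C'): 6,   # I inform, opponent stays silent: I walk free
    ('D', 'D'): 3,   # both inform: three months each
}

ROUNDS = 50  # the same two players meet over and over

def play(strategy_a, strategy_b):
    """Total scores for two strategies over one repeated game."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        move_a = strategy_a(history_b)   # each strategy sees the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_moves):
    return 'D'

def always_cooperate(opponent_moves):
    return 'C'

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last.
    return 'C' if not opponent_moves else opponent_moves[-1]

strategies = {
    'always defect': always_defect,
    'always cooperate': always_cooperate,
    'tit for tat': tit_for_tat,
}

# Replicator-style update: strategies that score well against the current
# population mix become more common in the next "generation".
shares = {name: 1 / 3 for name in strategies}
for generation in range(60):
    fitness = {
        a: sum(shares[b] * play(strategies[a], strategies[b])[0] for b in strategies)
        for a in strategies
    }
    mean_fitness = sum(shares[a] * fitness[a] for a in strategies)
    shares = {a: shares[a] * fitness[a] / mean_fitness for a in strategies}

print({name: round(share, 3) for name, share in shares.items()})

A lone defector beats any single cooperator it meets, but once the players keep meeting each other, the retaliating cooperators (tit for tat) take over the population and the pure defectors dwindle toward zero: short-term selfishness, long-term dead end.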

Couple this with the finding that our brain makes its decisions on a course of action (move that muscle, think that thought) before we are aware of what those decisions are—i.e. eliminating the possibility of free will—and one starts to understand how our existence as individuals is illusory: we are merely expressions of the species. We are how the species lives and propagates. We do nothing; the species does it all, up to and including our thinking. One can understand that but one cannot affect it.

Which is why giving people credit or castigating them for their behavior is absurd. No one is responsible for what they do; not you, not me, not your mother. One has to question the entire system of rewards and punishment: who is one rewarding or punishing? Why? What does one expect to gain by rewarding or punishing? How do we reward and punish?

The current system is to accumulate all you can and defend it with arms. That’s the selfish scenario; that’s the American gestalt. It’s based on the theory that the individual is the most important; and if only one can survive, the species or the individual, it’s more important that the individual survive, even if it means extinction of the species. It doesn’t take a whole lot of thinking to see that such a scenario would be disastrous for the species: extinction’s about as bad as it gets. It’s called capitalism.

Okay, we’ve seen what “defection” will do.  Anyone care for cooperation? Or is that un-American?

Thursday, August 1, 2013

Living Wage

There is a debate going on in this country about “living wages.” The wages aren’t all that living, but they would be better than what we have now, raising the national base rate from seven-something per hour to fifteen per hour.

The primary argument against it is that it potentially could cost jobs. The reality is that it has minimal effect and would probably have no effect if it were uniformly applied nationally. Everybody’s boat is equally raised and the same number of jobs are required to keep the economy running, so the net effect is no job loss. What there is instead is inflation, as prices are raised so that the net income percentage of the owners remains the same. The long run solution, of course, is not to raise the minimum but to lower the maximum. Oh, that’s right, we don’t have a cap. The American economic theory is that, if one doesn’t have the sky as the limit, there’s no incentive to create anything.
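
The pass-through arithmetic is easy enough to sketch. Here's a minimal back-of-the-envelope example in Python; the wage figures, cost shares, and margin below are invented for illustration, not real numbers from any business:

old_wage = 7.25     # assumed current minimum wage, dollars per hour
new_wage = 15.00    # proposed minimum wage, dollars per hour

labor_share = 0.30  # assumed share of the old price paid to minimum-wage labor
other_costs = 0.50  # assumed share for everything else (rent, materials, etc.)
margin = 0.20       # owner's share of the old price, held constant by assumption

old_price = 1.00    # normalize the old price to one dollar
new_labor_cost = labor_share * old_price * (new_wage / old_wage)
non_labor_cost = other_costs * old_price   # unchanged

# The owner keeps the same percentage margin, so price = costs / (1 - margin).
new_price = (new_labor_cost + non_labor_cost) / (1 - margin)

print(f"old price: ${old_price:.2f}, new price: ${new_price:.2f}")
print(f"implied price increase: {100 * (new_price / old_price - 1):.0f}%")

With those made-up shares, the price rises roughly 40 percent while the owner's cut stays the same percentage of every sale; the same number of burgers get flipped, so no jobs disappear, but everyone's dollars buy less.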

The conservative solution to low wages that I’ve heard several times—most recently from David Newmark on BBC radio—is to increase training.

Now, it doesn’t bother me if such a nonsensical solution is put forward, but it does bother me that the interviewer didn’t question it. That solution doesn’t address the issue being discussed at all. By Mr. Newmark’s logic, if, say, a fry cook at McDonald’s were to go out and take a couple courses in computer programming, McDonald’s would automatically raise his or her wages. That’s all Newmark suggested was necessary; he said that fry cooks at McDonald’s don’t receive enough money because they aren’t highly trained; he didn’t suggest that the training should be in being a great fry cook or in being anything else, simply that more training was needed in order for wages to be raised.

Without putting words in Newmark’s mouth, I’m guessing that what he meant was, if this fry cook at Mickey-D’s got trained as, say, a computer programmer, he or she might be able to qualify for a better job. Which could well be true, but that’s not what the issue is; the issue is the wages of fry cooks at McDonald’s, not how to get people out of being fry cooks at McDonald’s. The argument is that fry cooks at McDonald’s should receive a living wage just like anyone else.

Well, Jesus, how radical an idea is that? What’s more important: an apartment in Dubai or that fry cook having enough money to feed his or her baby? Dubai, all the way, baby.

But if you’re interviewing one of these guys, nail him to the cross, okay? The issue is not pitiful wages, the issue is income disparity.

Thanks.

Monday, July 29, 2013

“I Like Killing Flies”

A bioethicist for the Fish and Wildlife Service made the argument recently that humans have a duty to do what they can to save the species threatened by our destruction of their environment. I thought that a pretty bold statement for an ethicist who, unfortunately, didn’t elaborate on how he came to that conclusion. There are layers of unexamined assumptions in that categorical statement, not the least of which being that humans have any responsibility towards anything, much less another species. Where, pray tell, would these responsibilities come from and what would they be? Not to mention, how did he know about these responsibilities? Who made him the interpreter? I felt I was listening to George Bush: “I am the decider.”

The ethicist was talking about a plan to kill barred owls to make way for spotted owls. Thanks to habitat destruction, spotted owls have been losing ground, and they’re facing heavy pressure from barred owls, which are gaining ground. The idea is to thin out a populous species to make room for a threatened species; threatened by loss of habitat from human activities.

At that level, we can all (I hope) agree that loss of habitat from human disturbance is probably the cause of the spotted owl’s decline. We can probably agree that the drastic decline in the number of species worldwide is caused by human destruction of habitats. What we’re going to have trouble with is agreeing that we should or have to do anything about it. The asteroid that killed off the dinosaurs, did it have a responsibility to the species it wiped out? Does a disease have a responsibility to its host?

I don’t think so.

There are no a priori responsibilities in the universe. There are no rights in the universe. It’s as simple as that. All rights and all responsibilities are created by people.

Rights and responsibilities are, essentially, aesthetic decisions. Ethical decisions, moral decisions are aesthetic decisions. The Golden Rule is an aesthetic decision: people don’t like getting punched.

One can make the argument that every member of every species has an inherent responsibility to perpetuate its kind. Presumably, “perpetuate” means, not only propagate, but provide for, as well. We all have to take care of our own kind; it’s the only obvious task we’re given. Beyond that, everything’s open to possibilities.

Which ultimately means that, unless it impacts us, the fate of the spotted owl is its own problem. From the bioethicist’s standpoint, saving the spotted owl because we’ve destroyed its habitat is purely aesthetic, but that doesn’t mean it’s valueless or unreasonable. We can all honor diversity of all kinds, owl species included, and do what we can to maintain as many as we can for our own enjoyment. Most of us can appreciate parks and wilderness set-asides for those very reasons. But killing one species for the supposed benefit of another has to be carefully weighed. Most likely, the barred owls are moving in on underutilized territory, and, even if there were no barred owls, the loss of habitat would still doom the spotted owl. And, even should killing barred owls stabilize the spotted owl population, one would assume that killing would have to be regularly done in order to protect the spotted owls. It’s unlikely that a one-time operation would solve the problem.

It’s problematical to try and maintain a single species whose habitat has disappeared; even if it can be done, is there justification for the cost and effort? Could not the same cost and effort be put towards maintaining species capable of holding their own? In other words, how big should the zoo be?

There is no question that we are the asteroid, but there’s more question about whether or not we can reverse the asteroid. That seems particularly hopeless. Stewart Brand’s current fantasy of reviving the passenger pigeon is an egregious example of that thinking; even if a species can be somehow artificially maintained, without its natural habitat, it will never be a viable creature. It’s an odd ego-trip that makes one want to play god. (Surely there’s a line here about how old habitats never die; they just get paved away.)

A large portion of the human race is living in denial. And that’s not including the climate-change doubters. I repeat: we are the asteroid; the world will never be the same again. Ever. We are a naturally occurring disaster; for all we know, it’s already happened on billions of other planets. That’s not our problem; our problem is denying that it’s happening here.

You want a good example?

Invasive species.

How many times have you heard warnings and horror stories about invasive species? A zillion, right? Two zillion, whatever. And we all know what an invasive species is, right? One that’s not natural to the area.

“Not natural,” what’s that?

But let’s step further back a minute: “invasive species.” What’s that?

If I’ve got the story right, every species begins with a mutation, some change which makes a new, slightly different, species which makes use of new ground somehow. If it succeeds, it has offspring who have offspring who have offspring who cover as much of the world as they can. There they go, invading wherever they can. It would appear that any successful species, by definition, invades wherever it can. Like us. Here we are, the most successfully invasive species in the history of the planet, and we’re complaining about those species hanging onto our coattails. Or, at least, the ones we don’t like.

That’s the denial part. We have an illusion that everything should stay the same after we arrive except that we are there. Except for the roads we build. Except for the farms we build. Except for the cities we build. Except for the factories we build. Except…

Okay, about those invasive species; which ones were they again? Oh yes, the ones we don’t like. We don’t seem to complain so much about acres of invasive soybeans, do we? It’s the plants and animals that come along without asking that piss us off. It’s okay if we shag them along with us, but if they hitch a ride, they’re a no-no. Everyone knows that.

Small aside: You know those signature golden hills of summer in California? Everything that’s golden, all that splendid grass, is “non-native” to California; and by that I mean it arrived after the arrival of the European-Americans. That’s our dividing line; if it was here before the white man, it’s native, regardless of how arbitrary a rule that is.

And that’s what we mean by “invasive”: we know when it arrived and we don’t like it. If it showed up before us, it’s fine. (We feel the same way about people, too, but that’s another story.)

So, Mr. Ethicist, you can do what you can to save the spotted owl from flickering out; but I’d rather you did it without riding in on the high horse of moral duty; a duty that is questionable ethically and practically. I’d rather you said, “I like the spotted owl and want to do what I can to save it. And would you mind helping me pay for it?” I might even be willing to chip in some bucks. But trying to guilt-trip us into saving the spotted owl might backfire in the long run.

Just saying.

There is an argument, though, that goes: even though, as Earth-bound animals go, we’re quite clever, we have only a rudimentary grasp of how the web of life is stitched together, and we have no idea which species might, in the future, prove our salvation; hence, it behooves us, for the sake of our own future options, to keep as many of those options open as possible; i.e. keep as great a diversity of life forms alive as possible. Now, that’s an entirely selfish reason to maintain diversity, but it’s reasonable and practical. From that viewpoint alone, it argues to keep the spotted owl with us. But, as we’re going to have to save millions of species, we have to do a cost analysis of which are more valuable to save and where we should spend our money. Simply because we “owe it to them” is not enough. We owe it to everybody; the question is, who can we reasonably expect to save and what benefit do we think they might have for us?

Mind you, I’m not saying don’t blow the barred owls away. Who knows, maybe we have a glut of them. I like eating; I understand the need to kill things. But I’d like to think I’m getting a bang for my buckshot. I want to be convinced that this is a good expenditure of public funds, that it will safeguard our future. Right now it appears Sisyphean.

Do other species have rights? No; but that doesn’t mean we have to be wanton or cruel. Nothing wrong with being nice guys; we should try it some time. I’m all for saving everything and having parks as big as all outdoors, but I’ve got a limited amount of bucks. Where should I spend them?

Thursday, July 18, 2013

Tough call.

“Primitive human society ‘not driven by war’”
BBC 18 July 2013

Researchers from Abo Academy University in Finland say that violence in early human communities was driven by personal conflicts rather than large-scale battles.

They say their findings suggest that war is not an innate part of human nature, but rather a behaviour that we have adopted more recently.
The research team based their findings on isolated tribes from around the world that had been studied over the last century.

About those “isolated tribes”: if I read the sentences correctly, it would appear that the researchers are extending their findings from isolated modern tribes to prehistoric human group behavior. There appear to be tons of assumptions there that haven’t been addressed, the most important of which being the assumption that the behavior of modern tribal societies mirrors that of past tribal societies. Without detailing exactly why one would make that assumption, the entire rest of their argument is suspect. Why should we assume that all tribal societies throughout our time on the Earth have behaved similarly? What evidence do we have that that’s true?

Remaining tribal societies live in generally inhospitable places with minimal pressures from anyone wanting to take over their territory. I’m not sure those societies would behave the same as ones in the middle of resource-rich environments with many peoples coveting their land/territory. Furthermore, the remaining tribal societies are almost all in a permanent state of warfare with their neighbors, as were the native societies of Europe, America, and Asia. A reader of Peter Matthiessen’s Under the Mountain Wall would get a very different picture of warfare among modern tribal societies than the authors present; it is hardly a matter of personal feuds, as the authors suggest. One would have to examine their data carefully. But even if they found that most modern conflicts are limited personal conflicts, it doesn’t mean that larger-scale operations haven’t always gone on in resource-rich areas. It’s a big jump, one that, perhaps, the authors cover in their paper but that was missed in the reporting. One hopes.

The issue, it would seem, is one of scale. Small, local conflicts can be described as personal conflicts; but that may just be because hunting-gathering societies don’t tend to gather in such large organized masses as farming societies do, so the conflicts of farmers tend to be larger in scale than those of hunter-gatherers. Are the authors reserving the word “war” for large-scale conflicts? That would seem self-defeating and inaccurate. In the end, it seems a semantic problem more than anything else.

Friday, July 12, 2013

Acid Tongues and Tranquil Dreamers


…The fact that, in general, only the best-adapted member of a species will survive to reproductive age.
        pg. 118, Acid Tongues and Tranquil Dreamers, Michael White

[Having no way to contact Mr. White, I thought I'd let this note free on the Internet. Go, little note, fly away.]

Dear Michael,

My son thoughtfully turned me onto your book and I’ve been thoroughly enjoying it. I do quibble about the style sheet used to inform your punctuation, but that’s a minor point.

A somewhat larger point is illustrated by the sentence reprinted above, the one about “best-adapted members,” which embodies a common misunderstanding of Darwinian evolution. As written, it implies that natural selection operates at the individual level, the same error the social Darwinists make (and, frankly, most people).

The reality is that individual survival has little or nothing to do with natural selection. In general, the further up the evolutionary chain one goes, the more members of a species survive to reproductive age. Species built around massive infant-death rates—frogs, say—rely on a scatter-shot effect: those who survive do so by sheer luck, because the predator didn’t find them; there are enough offspring that which ones get eaten is immaterial. By the time one gets to, say, zebras or people, most offspring are expected to survive.

Natural selection operates at the genomic level. What that says is that, on average, individuals who possess mutation X have a better reproductive rate than those who don’t. It’s concerned with the overall average, not with the success of any particular individual. For example, an individual might carry a mutation which allows its possessors, on average, a longer life span; yet that individual might have only one offspring before being dispatched by misfortune. Nonetheless, thanks to that one offspring, the mutation could spread throughout the species, even though it gave no benefit to the individual in whom it first occurred.
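To make that concrete, here’s a toy simulation (my own illustration, nothing from your book; every number in it is made up): carriers of a hypothetical mutation X get a slightly better reproductive rate on average, blind misfortune strikes carriers and non-carriers alike, and the mutation still spreads through the population, with no individual’s survival guaranteed or required.

```python
import random

# Toy model: selection acts on the *average* reproductive rate of carriers
# of mutation X, not on any individual's survival. All parameters invented.
POP_SIZE = 1000
GENERATIONS = 40
BASE_OFFSPRING = 2.2      # expected offspring for non-carriers
CARRIER_OFFSPRING = 2.4   # carriers do slightly better, on average
MISFORTUNE = 0.5          # chance any individual dies before reproducing,
                          # regardless of genotype -- sheer bad luck

def offspring_count(expected):
    """Integer draw with the given expectation: whole part plus a coin flip
    for the fractional part."""
    n = int(expected)
    if random.random() < expected - n:
        n += 1
    return n

def next_generation(population):
    """population is a list of booleans: True means the individual carries X."""
    children = []
    for carrier in population:
        if random.random() < MISFORTUNE:
            continue  # dispatched by misfortune; genotype had nothing to do with it
        expected = CARRIER_OFFSPRING if carrier else BASE_OFFSPRING
        children.extend([carrier] * offspring_count(expected))
    if len(children) > POP_SIZE:          # hold the population roughly constant
        children = random.sample(children, POP_SIZE)
    return children

population = [i < POP_SIZE // 10 for i in range(POP_SIZE)]  # start at 10% carriers
for gen in range(GENERATIONS + 1):
    if not population:
        break
    freq = sum(population) / len(population)
    if gen % 10 == 0:
        print(f"generation {gen:3d}: mutation X frequency {freq:.2f}")
    population = next_generation(population)
```

Run it a few times: in most runs the frequency climbs steadily, even though plenty of individual carriers die childless along the way.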

The confusion appears to be a conflation of pecking order and natural selection. It’s fairly understandable that, observing natural pecking orders, one thinks that the selecting done to achieve that order is the same selecting that determines the direction of evolutionary change or dominance. They are unrelated, but, unfortunately, most people equate the two, as the quoted sentence illustrates.

Pecking orders evolved not to ensure that the best genes get together but to maintain order within the species. And not only did pecking orders evolve, so did infidelity, which ensures that, despite pecking orders, the gene pool keeps getting thoroughly mixed. It’s the mixing which is important, not the dominance.

Thanks for your time,

Johan Mathiesen
Portland, OR

Wednesday, June 12, 2013

“Scientific evidence that you probably don’t have free will”
George Dvorsky, io9.com, 1/14/13

As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we're actually aware of them — and at times by a significant degree.… 
They had no choice but to conclude that the unconscious mind was initiating a freely voluntary act — a wholly unexpected and counterintuitive observation.

One never knows how accurate reporting is. If this report is true, it is truly shocking, but not for the reasons the reporter thinks. The shocking part is that, according to the reporter, the scientists who noticed this phenomenon—our brains working before we know it—found it “a wholly unexpected and counterintuitive observation.” Really? What were they expecting? And “counterintuitive”? Whose intuition are we talking about?

Oh, I get it, the researchers were religious; they believe in free will. No wonder. They must have thought that god gave us an extra mental power that he skipped giving the rest of the living things: free will. Gee, what is free will? No one, it will be noted, has ever presented any evidence for free will, much less a definition, although we’re now getting evidence that there isn’t any. Trust me, if you don’t believe in free will, it’s perfectly intuitive that researchers would confirm as much.

Monday, June 10, 2013

Free Will Versus Instinct

Believing in free will is akin to believing in god, except that I can understand god, while free will eludes me. Where I get lost is at the beginning; I can’t see a difference between free will and any other kind of will. Will meaning “choice,” not, say, the “will to succeed.” But choice. How are there two kinds of choice? Either one has a choice or one doesn’t. If one has a choice, one has the option of making said choice. One is always equally free to choose either of two options. The consequences may be radically different, but one is always free to make the choice. There’s room, though, for but one kind of choice; if there’s another, I haven’t run across it yet.

The term “free choice,” though, implies that there are at least two kinds of choice, right? Free and not free. The implication is that there are fundamentally different kinds of choices available, one which one is compelled to make, and another which one is free to make any way he or she likes. We tend to think of this latter kind of choice as, if not uniquely human, then confined to the larger animals. Primates, maybe. Don’t know about dolphins. The implication is that this kind of choice appeared somewhere along the evolutionary line; that prior to the appearance of that kind of choice, there was only instinct, which, everyone agrees, is no choice at all. We pride ourselves on not being instinctual, on being able to think for ourselves, to make good and bad decisions.

We are, perhaps, a bit prideful.

The difficulty with this line of reasoning is that it never explains where this new manner of thinking comes from, or how it works differently from other manners of choice. How there can be two methods of making choices has not been approached by the philosophers, much less by the neuroscientists. It seems mixed up with the notion of consciousness and self-consciousness; the difference between those isn’t clear either.

Fundamentally, the simplest living organism possible has to be able to find food, recognize it as distinct from non-food, and decide to process it. At its most basic level, finding and recognizing food is akin to deciding to process it. Still, the organism has to know how to position its body in order to process the food it has found. It has to have a sense of where its body is and where it isn’t, and it has to have a sense of how to move its body to another place. Conscious or self-conscious? You can call it what you will; regardless, it has to be conscious of itself and its surroundings in order to survive. Every living thing has the same requirement. Every.

So, how does this little, primitive, one-celled critter go about processing the signals it receives from its environment; and how does it communicate within itself? Chemistry and electricity, right? Nothing but. Chemical signals and electric currents transmit all the information the cell needs. What’s more, when two cells decided to get together and become a multi-celled critter, they retained the same methods of communication: chemicals and electricity. In fact, no matter how many cells clump together to make no matter how big a critter—a redwood or a whale—the communication channels never change: chemistry, electricity.

It’s not that these were the best channels available to living things out of which to build communications platforms; it’s that they’re the only channels available. Gravity, for example, doesn’t work well as a communications device. Yet messages can only be sent using the materials at hand. Chemicals make good triggers; electricity is fast.

But as we noted, once that one-celled critter found something to eat, its triggers said “eat,” and the processing began. The choice to eat was automatic. But it was a choice; theoretically, the critter could eat or not eat.

The critter made the choice to eat because it operated with an algorithm (paradigm/software/criteria) that said: when triggers X, Y, and Z fire, swallow that puppy.
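If it helps, here’s a minimal sketch of that kind of algorithm (entirely my own toy; the trigger names are invented): a critter that does nothing but read a few signals and apply one hard-wired rule. The values passed around below stand in, crudely, for the chemistry and the electricity.

```python
import random

# A toy "critter" whose entire decision-making is one inherited rule:
# when the right triggers fire, eat. There is no second kind of choice
# hiding anywhere in here. Trigger names are invented for illustration.

def sense(environment):
    """Sensors: turn the outside world into crude internal signals."""
    return {
        "nutrient_detected": environment["nutrient"] > 0.5,
        "toxin_detected": environment["toxin"] > 0.5,
        "touching_object": environment["contact"],
    }

def decide(signals):
    """The inherited algorithm: when triggers X, Y, and Z line up,
    swallow that puppy; otherwise keep drifting."""
    if (signals["nutrient_detected"]
            and signals["touching_object"]
            and not signals["toxin_detected"]):
        return "eat"
    return "drift"

def act(action):
    """Output terminals: the only things the critter can actually do."""
    print(f"critter does: {action}")

# One tick of the critter's life: sense -> decide -> act. Nothing else.
environment = {
    "nutrient": random.random(),
    "toxin": random.random(),
    "contact": random.choice([True, False]),
}
act(decide(sense(environment)))
```

Pile on more sensors and more rules and the thing gets hopelessly complicated, but it never stops being this: signals in, algorithm, signals out.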

As the critters (I call them “critters” to lump together all living things, animal or vegetable) get larger and competition appears and predators appear, the algorithms become more and more complex. They evolve. Yet, the mechanisms of communication between the various cells and the ruling algorithms remain the same: chemistry and electricity. Even when you get to people, the only way the various parts of the body communicate with each other is through chemical signals and electrical charges. There is, fortunately or unfortunately, nothing that Mother Nature can add to the mix.

Needless to say, when sex was invented, the algorithms became hopelessly complex. Understandably so, though, given the kinds and complexity of sense and response organs. Choices and decisions must be made virtually instantaneously, especially as mobility and predation increase. The algorithms have to be acted upon sans debate.

In the end, all living things possess the same three things: sensors, output terminals, and CPUs. They can read the environment; they can interact with the environment; and they have a pile of software to work out the details and coordinate things. The bacterium has these. You have these. The bacterium uses those three capacities to find and consume food; we do too. (We also use them to facilitate DNA mixing.) In both the bacterium and us, all the interior communication is done by the same chemical/electrical routes; nothing new has been added to the mix. The signals that make the dog’s mouth move when it barks, or our mouths move when we speak, are made and processed the same way. Neither we nor the dog, of course, is aware of the millions of interactions required for every bark, every word. What could awareness of that even be like? (Zen masters and LSD aficionados?)

As you can imagine, this has profound implications as to where another form of choice could come from. So far, we can only envision a mechanical, albeit highly complex, method of functioning. Essentially, we have only instinct at this point in the discussion; there is no place for anything else. Even thought, no matter how complex and self-reflexive it seems, has, at this juncture, only a mechanistic explanation, an instinctual explanation. No other explanation—other than God—has been put forward. Where would thought come from if not from a complex function of our CPUs? How is my song any less instinct than that of the nightingale?

In the end, it appears we don’t even have a definition for “will,” much less “free will.” (We carry this problem with us when we think about computers; we worry about when they’re going to be able to spontaneously write and install their own software without wondering where the computer’s existing software would generate that idea within itself; we think that at some point “will” will arrive as if the computers will be touched by the finger of god and become alive; and that being alive will be a fundamental change for the computer. We speak of the “singularity” as if a magical transformation will appear at a point in the future when machines become “alive.” We have faith that we can create life by software, if not by electro-chemistry.)

Ah, people; we’re so cocksure.

Sunday, June 9, 2013

The Carbon Age

Mathias has lent me a book, The Carbon Age, by Eric Roston detailing the role of carbon in—well—the Universe. On page 48, Roston lists five things to remember about evolution, including: “only populations evolve; individuals adapt,” and “evolution has no goal.”

Ah ha! Someone tell that to the paleoanthropologists. It was a startling revelation to read those words, as I’d begun to think I was a voice crying in the wilderness. I’ve most often put it as “DNA evolves, not individuals,” but I believe we’re saying the same thing. And aimless evolution ensures that one can’t evolve to escape a danger. Thank you, Mr. Roston.

Mr. Roston didn’t use my favorite encapsulation, that “evolution follows opportunity, not necessity,” but he well could have. In any event, it’s an unequivocal statement of why people couldn’t have abandoned the trees because the trees were disappearing. That’s a goal: “We have to learn to stand up because the forest is going away” (or its concomitant error, “Us fish should learn to walk on land to avoid predators”). As Roston points out, it couldn’t have happened that way. Those are side benefits of natural selection, not its intent. You can be quite sure that what drove us to the ground, and fish to terra firma, was food, pure and simple.

If I understand what I’m told, what distinguishes our ancestors from the pans is bipedality. We maybe didn’t give up our arboreal ways completely at first, but we right away began walking upright. For all we know, we never had a period when we were knuckle-walking out on the veldt, having abandoned the trees but not yet taken to standing up straight. (How would we tell?) We never seem to have gone through a baboon period; we walked out of the forest upright.

That being the case, abandoning the trees went hand-in-hand with walking upright, and that uncouples bipedalism from the fate of the forest. Even if bipedalism first appeared in a savanna-like environment, it doesn’t mean the environment was the cause of the bipedalism. Not to mention that there’s no agreement on what kind of environment bipedalism first appeared in.

Ergo, I repeat, any theory of human evolution that doesn’t account for our letting go of the branches is no theory at all; and that “forgoing the forest because it was disappearing” is a circular argument. Fail.

On the Waterfront

Past Horizons, June 8, 2013

“Beachcombing for early humans in Africa”

In the middle of an African desert, with no water to be found for miles, scattered shells, fishing harpoons, fossilized plants and stone tools reveal signs of life from the water’s edge of another era.

The article goes on to talk in general about the state of early hominid archaeology in East Africa and about previously moister conditions, etc.; the idea being that, if you find old shorelines, you stand a better chance of finding old fossils and artifacts.

Sorry, but it’s time for another “Well, duh!” Where else would you expect to find evidence of early humans? Besides caves? (As if scads of early humans populated all the caves from Ethiopia to South Africa. How the hell many caves were there?)

It’s telling, though, that despite the nearly universal finding of fossils from shoreline conditions (barring caves, which are always located near water), early humans are invariably depicted as savannah-living creatures. Whoever wrote this article, for example, was able to calmly write about the human relationship to wetter conditions without ever once making the connection between the conditions and the sites where she was finding the fossils and artifacts.

Okay, I’m no scientist. I know nothing about foot bones or ear bones, and I have no opinion about which hominins were capable of what. I rest all my scientific expertise on one statistics-for-sociology course taken a half-century ago. What better credentials than that, eh? But that statistics course said: if it looks like a duck, squawks like a duck, and swims like a duck, odds are it’s a duck. I believe that. It’s not proof that it’s a duck, but there’s a high likelihood of its being a duck, fair enough?

So, if one consistently finds evidence of early humans in close proximity to what were, at the time, bodies of water, one can assume a high probability of a causal relationship between them. It’s not likely that the water came to the people.

Just saying…

Friday, June 7, 2013

Man the Grazer

Popular Archaeology
June 2013, Cover Stories, Daily News

"Diet Change After 3.5 Million Years Ago a Gamechanger for Human Ancestors, Say Scientists"


The most significant findings indicate that human ancestors expanded their menu 3.5 million years ago, adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment. This, suggests the scientists, may have set the stage for our modern diet of grains, grasses, and meat and dairy from grazing animals.

"We don't know exactly what they ate. We don't know if they were pure herbivores or carnivores, if they were eating fish [which leave a tooth signal that looks like grass-eating], if they were eating insects or if they were eating mixes of all of these."

He says this because the isotope method used in the new studies cannot distinguish what parts of grasses and sedges human ancestors ate – leaves, stems, seeds and/or underground elements of the plants such as roots or rhizomes. The method also cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.

May I leap in before this goes any further? Take that opening sentence; it says that early humans added “tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment.”

That seems pretty unambiguous. It implies we typically ate leaves and fruits before adding grasses and sedges. I didn’t know that we were leaf eaters prior to diverging from our fellow simians—I thought our diet was primarily nuts, fruits, and small game like insects—but I’m a little dubious about us moving over to include the grasses and sedges. To begin with, can one tell the difference between grasses and sedges in the diet of proto-humans, or do they both leave the same signature? As it stands, sedges provide a very minimal addition to the human diet in any culture, so it’s hard to imagine that they were once very popular and then fell out of favor.

I’d also question what parts of the grass were eaten, a difference the method can’t yet discern. The way the sentence is written, it implies that grasses were used much as a cow would use them, the entire plant eaten, as if we had suddenly become a grazing animal, only to abandon the practice later on. That, too, doesn’t seem likely to me. It’s hard to imagine we used any part of the grass but the seed heads; and even those, it’s easier to imagine we started consuming grain after we’d domesticated herding animals and tended to have stockpiles of grain on hand that we might, in desperation, try eating in times of privation. Most importantly, it’s easier to imagine grain consumption after the advent of cooking, since grain is nutritionally far more accessible to us once it has been cooked.

All in all, it seems unlikely that humans were ever larger consumers of sedges and grass blades than they are now. That doesn’t mean the data are wrong, though. Instead, the answer lies in a succeeding paragraph, which adds that the analytical “method… cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.”

Well, okay then. Is it likely that people were harvesting grass eaters 3.5 million years ago? You can bet your bottom dollar on that. It’s a hell of a lot more likely that we were hunting grazing animals 3.5 mya than we were doing the grazing ourselves. A hell of a lot more.

I’ve seen this error made before with precisely the same kind of evidence, evidence that our diet of grasses and sedges increased, but in that case, too, they couldn’t tell whether it came from direct or indirect consumption. For some reason, primarily related to our own obsession with fad diets and the heavy PC status of vegetarianism and veganism among the young creatives, etc., the nod is always given to a vegetarian interpretation of our probable early diet history, despite evidence to the contrary.

The profession of archaeology also interprets early dietary data poorly, because it has yet to grasp how early our ancestors were using hunting tools, including stones, millions of years before the first flaked stone tool; nor does it recognize the strong probability that standing up to carry hunting tools and game was the only reason we stood up in the first place. Much of that error comes, of course, from the impossible assumption that it was dwindling forest cover that drove us to bipedalism. As any reader of this blog is well aware, evolution follows opportunity; it doesn’t flee danger. If we stood up, it was because we were harvesting a new food source that required us to stand up; and I can guarantee you that a diet of grasses and sedges does not require one to stand up.

Stand up to hunt? Sure, the chimps do it all the time. They just don’t do it enough to make a habit of it, like we do. And if I were you, I wouldn’t give them any ideas.

The good news is that we now have evidence for hunting going back 3.5 million years. You can be sure it goes back twice that far, but this is a good leap in that direction.

Thursday, May 30, 2013

The Coming of the Maasai, Ah…

“It’s the equivalent of the North Slope Oil Deposits for lawyers,” said Jeremy Schatz, chief counsel for the Washington, D.C. law firm of Dewey, Cheatham, and Howe, “it's a new field of unlimited scope.”

He’s talking about a new case being tested in Britain, the "Maasai versus the World." The Maasai have decided they’re a brand and that they want full legal protection of that brand. Let’s say you had an African import shop in London called Maasai Safari. They want A) a cut of the action; and B) veto rights over how anything with their name attached is used or represented. They want image control. There’s only a million of them, but they’re a feisty bunch. Tall, too. And they can jump.

Talk about a can of worms. Saying yes to the Maasai would be saying yes to every indigenous group in the world, and that’s a hell of a lot of groups: the Arapaho, the San, the Sami, the Aborigines, the Ojibway, the Montagnards, the Yoruk… This could take all night. Think of how many Indian reservations there are in this country alone, and how many of those reservations represent multiple tribes. Next, how about the tribes of the Amazon? Papua New Guinea? The tribal peoples of the Himalayas?

There has been a fight going on between Jared Diamond and others, on one side, and the tribal peoples of the world, on the other, over just how those peoples can and should be presented to the rest of us. In every case, it’s a question of who’s telling the story. The tribal people want to whitewash their story (maybe that’s a poor choice of terms). Think of how the American Indians want to control their image. Take the fight over school mascots. For no good reason, they put enough pressure on enough people with guilt complexes to get the nation to abandon names such as Braves or Warriors (if accompanied by an Indian profile). The Indians would have you believe that, before the arrival of the Europeans, they were living an idyllic, peaceful life in harmony with nature, whereas nothing could be further from the truth.

But the lawyers have made a killing fighting that naming issue through the courts and legislatures. It’s been a bonanza for them. Now, every tribal group in the world wants the rights to how they’re depicted. They want to scrub the record clean. The slavery, the torture, the extermination of enemies, the cannibalism, they’d rather they not get mentioned. And they’re successful; those aspects of native cultures are ignored, buried, or denied.

Spread that fight out to every self-identifying group of people in the world, and you have a quagmire of epic proportions on your hands. What the fight is really about, in the long run, is the right to control identification, and not just for ethnic groups. Every religious faction, every club, every regional identity, every historical background will be open to litigation. Eventually, under this proposal, all identities become brands, and mentioning any identity opens one to a libel suit. In fact, this protection would have to be extended to all brands. If the Maasai can control how their brand is presented, it throws into question all reporting on any brand. Under this proposal, if one were to write an article about BP, say, one would have to clear the piece with BP before publication. And if you happened to mention Royal Dutch Shell in the same article, they, too, would have to vet the piece. One can see that this would be the end of journalism. It would be the end of truth.

Which, frankly, is how these people would just as soon things went. If there’s one thing indigenous peoples and multinational corporations have in common, it’s an aversion to the light of day.

What a field day for the heat.

I can sympathize. I’m tired of us Vikings being depicted as ruthless, brutal warriors. Who says? No, we are gentle farming and trading folk who gave our names to innumerable tiny spots in England. What could be more peaceful? We have plenty of ruth.

Take down the Viking mascots, I say; take them down. Stop besmirching our good name. At least pay us some money, okay?

Look at it this way: how much money should Duluth Trading Company pay the City of Duluth for using its name? And should Duluth have final say on any copy the company produces? How about product line?

Who gets to say when and how a cross should be displayed? Or a country’s flag?

Silly people.

Monday, May 27, 2013

Sins of the Parents

Did you know that your ancestors were slaves? Did you know that everyone in your ancestor’s village was wiped out except for the babies, who were stolen by neighboring villagers? Did you know your ancestors were raped? Chopped into little pieces? Burned alive? Frozen to death? Got lost? Were married off to someone they hated?

Did you know that your ancestors were executioners? Tribal leaders? Shamans? Merchants wealthy beyond dreams?

Innkeepers? Tin smiths? Acrobats? Farmers? Sailors? Night watchmen? Midwives? Professors of philosophy? Hookers and card sharks?

It’s lucky you made it. The most amazing thing is that every single one of your ancestors, going back in an unbroken line, had children who themselves had at least one living child. Isn’t that amazing? I mean, what are the odds? You mean none of them was childless? Thousands of generations, every one having children?

Doesn’t seem likely, does it?

What’s worse, your ancestors were enslaved, robbed, raped, pillaged, and dismembered by ancestors of the person who lives next door to you. Right now. I’m not kidding.

Or, looking at it the other way around, your ancestors enslaved, robbed, pillaged, and dismembered your neighbor’s ancestors. It works either way.

Are the sins of the parents visited upon the children? Forever? How many generations have to pass before you’re not to blame for what your ancestors did? My immediate ancestors came here around the turn of the 20th century; am I responsible for slavery and mistreatment of the Indians just because I share a skin color with some of the people who perpetrated those atrocities? And, if not those atrocities, surely some other unnamed atrocity?

Must I do penance for all of them?

So, who’s to blame? Who should pay reparations to whom? How about if we all pass around a ten-dollar bill and call it square?

Saturday, May 25, 2013

Oh God

Oh God, tell me it isn’t so. This from the Christian Science Monitor:

“The broken, disrupted terrain offered benefits for hominins in terms of security and food, but it also proved a motivation to improve their locomotor skills by climbing, balancing, scrambling and moving swiftly over broken ground - types of movement encouraging a more upright gait," said University of York archaeologist and study co-author Isabelle Winder, in a press release.

This development would have conferred benefits that extend far beyond locomotion. Walking on two legs frees up the hands, allowing for the use of tools and, eventually, bigger brains. And the complex landscape could have made our ancestors smarter, says Dr. Winder.

The varied terrain may also have contributed to improved cognitive skills such as navigation and communication abilities, accounting for the continued evolution of our brains and social functions such as co-operation and team work.

That we stood upright because the terrain was rough. Oh sure. Oh double sure. Can’t think of a better reason. It’s for damn sure that two-legged creatures are much better at “climbing, balancing, scrambling, and moving swiftly over broken ground” than any four-legged stumblebum. Yes, a wolf is no match for us. We can run down mountain goats with ease. Is it any wonder that so many two-legged creatures evolved in rough, tumble-down terrain? You know, those other two-legged creatures like… like… like…
Oh well, you know who you are.

What Ms Winder ignores is her other statement: “Walking on two legs frees up the hands, allowing for the use of tools.” But, hey, Ms Winder, one doesn’t have to walk upright to use tools, only to carry them. And it makes no difference what kind of ground you’re carrying things over; you have to stand up. Carrying tools is like being in the water all the time; you have no choice but to stand up. We never had prehensile tails.

Wednesday, May 22, 2013

Fishing Expedition

If I’ve been quiet, it’s for lack of desire to keep beating around the bush, but the coincidence of these two articles from Discovery.com was too good to pass up for a littoralist such as myself. Bolstering the argument that humans evolved on the shoreline, this from “Neanderthal Greek Paradise Found” (May 22, 2013) by Jennifer Viegas:

“The Neanderthals seemed to have a particular fondness for tortoise meat. The shells -- from shellfish too -- mostly were all recycled into tools, such as implements for scraping.…

“Dental wear suggests that the Neanderthals enjoyed a varied diet consisting of seafood, meat, and plants.”

and this from "Prehistoric Dog Lovers Liked Seafood, Jewelry, Spirituality" by the same author (May 22, 2013):

“‘Dog burials appear to be more common in areas where diets were rich in aquatic foods because these same areas also appear to have had the densest human populations and the most cemeteries,’ lead author Robert Losey, a University of Alberta anthropologist, told Discovery News.

“‘If the practice of burying dogs was solely related to their importance in procuring terrestrial game, we would expect to see them in the Early Holocene (around 9,000 years ago), when human subsistence practices were focused on these animals,’ Losey continued. ‘Further, we would expect to see them in later periods in areas where fish were never really major components of the diet and deer were the primary focus, but they are rare or absent in these regions.’”

Thursday, March 21, 2013

And Another Thing

I don’t know that I’ve ever seen Bill O’Reilly talk to anyone. At them, yes, but to them, no. As far as I can tell, his schtick is to talk faster than the person sitting across from him. His technique is to air only his own viewpoint and leave his guest sitting there bemused. It works, I guess; he has lots of followers, ones who can’t think very fast themselves.

I saw a clip of O’Reilly sitting down with Richard Dawkins, who manfully tried to provide half of the conversation but was never allowed to by the O-Master. Needless to say, O’Reilly neatly defended his faith and demolished Dawkins in the process. Or so he thought. That Dawkins couldn’t get a word in edgewise didn’t matter.

When Dawkins was able to assert that O’Reilly was an atheist as far as anyone else’s god was concerned, O’Reilly tossed it off with, “I just saw Jupiter and he didn’t look so well.” That was a riposte?

His defense of Christianity, other than the classic “He’s proved his existence to me, and that’s all that matters,” was the equally lame “Christ was a real person.”

Ergo, he was a real god? How does that follow?

Not to mention that he’s fallen into the trap of thinking his god is real while all the others aren’t. Well, of course, that’s the definition of belief, not a defense of it. Bill, evidently, doesn’t know the difference. Nor, apparently, does he realize that all gods are myths, regardless of upon whom the myth is hung. Call me a god, Bill, but it won’t make me one.

Little Known Fact:

Claiming a personal experience with god as proof of his (usually a “his”) existence is the equivalent of claiming there’s proof of aliens because you’ve been taken up to one of their spacecraft and probed. Lucky you. But if you get 5,000 people in a room all claiming they’ve had a personal experience with god and that they’ve been probed, the odds of their being correct don’t go up a whit. Infinity doesn’t allow such odds to reach zero, but they can get pretty damn close.

There Are No Atheists in Heaven

I can never watch a show about religion for any length of time before I’m driven to rebut the absurdities being presented. I have no idea what these folks’ (usually guys’) final arguments are, because if I come across an unexamined assumption that remains not only uncorrected but a pillar of their argument, I can’t go on. Nothing that follows will be correct, so why bother watching.

That’s my excuse.

Case in point: a recently watched YouTube video on why there logically can’t be any atheists. As per the above, I never got to the guy’s final argument. The part I did see had him constructing a syllogism. He began by drawing a circle and saying, “For the sake of argument, let’s say that all the knowledge in the universe is contained within this circle.” Okay, we can do that. He continued: “Then can we make a dot in the center of the circle and agree that it represents your portion of the universe’s knowledge?” Sure, we can do that.

But before we go further, I’d like to point out that “knowledge” is never defined. Is knowledge the same as information? Is knowledge restricted to living things? Does the galaxy possess knowledge in the sense we use the word? How would that manifest itself? Offhand, I’d restrict “knowledge” to living entities, although I wouldn’t rule out endless numbers of living creatures in the universe.

He should have stopped there. He went one step further; he said, “All that other knowledge in the universe, the stuff you don’t know, that had to have been put there by someone.”

Uh-uh. False. That’s the uncorrected assumption: the assumption that knowledge was “put” anywhere. Now, not only don’t we have a definition of “knowledge,” we don’t have a definition of “put.” Bill Clinton would be happy. I won’t even get to the rest of the syllogism.

Ergo, whatever followed in his argument, if it was relying on his proof to be correct, could not be correct. Undefined assumptions are no-no’s in debates, sorry.

Coda:

I’d like to further point out that atheism is not the opposite of theism, as is commonly thought. Theism argues that there is a god. Atheism does not argue that there is no god; it argues that there is neither evidence for nor logical probability of a god.

Thank you, and good night.

Sunday, March 17, 2013

Echoes of the Black Plague


England is excited these days about having uncovered a cemetery most likely dating from the time of the Black Plague. The assumption had been that, because of the enormous number of deaths, the bodies would have been thrown into communal pits; but that doesn’t appear to be the case. It looks like each body was dealt with separately, in a respectful manner.

The lyrical French author Marcel Pagnol tells a story of the residents of a suburb of Marseille, during a later plague, disguising themselves as a cart-load of dead bodies in order to get past the guards who had been posted to keep the residents of the plague-stricken city quarantined.

I find the Black Death a convenient marker of European history, coming as it did in the middle of the fourteenth century, roughly 1348-1352. It was a pivotal point because the survivors were instantly rich: good land was plentiful and cheap. One hundred years later, around 1450, movable type made books available to the general public, setting the stage for the Enlightenment. Put those dates together with 1066, the Norman invasion of England, and you’ve got everything you need to know about European history. Oh yeah, the Vikings were 800-1000 CE. That may not be so important if you’re Italian, but for us Scandinavians it was huge.

But every time the Black Plague is trotted out, commentators are sure to solemnly intone, “The Black Plague, the most devastating mass death in the history of the world…”

Fifty percent: that’s the usual estimate of the death rate over those four years. That’s bad, but compared to the fate of the American Indians—admittedly over a longer time period—who were felled by disease at rates of 80 to 90-plus percent, the Black Death was a piker.

Going further back, it is commonly thought that at one time the entire human population dropped to a few thousand people. What caused that, we don’t know, but it was a more serious time for our species than 1350.

Thursday, March 14, 2013

Dead Neanderthals

The demise of the Neanderthals makes for constant speculation. Two recent theories suggest:

1) Their eye sockets were too big.

2) They didn't eat rabbits.

Another recent finding also suggests they liked to walk long distances. Possibly to avoid eating rabbits.

Wednesday, February 27, 2013

Mud-Skippers

Australian mud-skippers: I saw them last night on a rerun of the BBC series “Planet Earth.” They’re slimy little fish whose eyes recede entirely into their bodies when they blink, rather than simply dragging a film of flesh over them, as we do. They roll around in the mud frequently to keep their skin moist, because it’s through their moist skin that they breathe; if it dries out, they’re goners. As the program pointed out (narrated by Oprah Winfrey), this skipper behavior is probably pretty much how the first fish exited the ocean.

What they neglected to point out was the very different motivations those two kinds of fish had for starting the scramble out of the water. According to the BBC, the mud-skippers ventured onto land for food, the microorganisms found in the mudflats. According to paleontologists, the first fish departed for protection from the perils of the deep. Regular readers of this blog (I know you’re not out there) will be aware that I’ve long argued that the first fish left for food, as well. I’ve even argued that it couldn’t be the other way around. This BBC show doesn’t validate my claims, but it sure bolsters them.

Just saying…

But it does seem one of those instances where the paleontologists make their pronouncements without a clue as to how the world currently works. Bones, blood types, and DNA will only tell one so much; eventually one has to go out and smell the roses and make sure they’re still real. My argument is that a fish out of water is a hungry fish, not a scared fish.