Wednesday, June 12, 2013

“Scientific evidence that you probably don’t have free will”
George Dvorsky, 1/14/13

As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we're actually aware of them — and at times by a significant degree.… 
They had no choice but to conclude that the unconscious mind was initiating a freely voluntary act — a wholly unexpected and counterintuitive observation.

One never knows how accurate reporting is. If this report is true, it is truly shocking, but not for the reasons the reporter thinks. The shocking part is that, according to the reporter, the scientists who noticed this phenomenon—our brains working before we know it—found it “a wholly unexpected and counterintuitive observation.” Really? What were they expecting? And “counterintuitive”? Whose intuition are we talking about?

Oh, I get it, the researchers were religious; they believe in free will. No wonder. They must have thought that god gave us an extra mental power that he skipped giving the rest of the living things: free will. Gee, what is free will? No one, it will be noted, has ever presented any evidence for free will, much less a definition, although we’re now getting evidence that there isn’t any. Trust me: if you don’t believe in free will, it’s entirely intuitive that researchers would confirm as much.

Monday, June 10, 2013

Free Will Versus Instinct

Believing in free will is akin to believing in god, except that I can understand god, while free will eludes me. Where I get lost is at the beginning; I can’t see a difference between free will and any other kind of will. Will meaning “choice,” not, say, the “will to succeed.” But choice. How are there two kinds of choice? Either one has a choice or one doesn’t. If one has a choice, one has the option of making said choice. One is always equally free to choose either of two options. The consequences may be radically different, but one is always free to make the choice. There’s room, though, for but one kind of choice; if there’s another, I haven’t run across it yet.

The term “free choice,” though, implies that there are at least two kinds of choice, right? Free and not free. The implication is that there are fundamentally different kinds of choices available, one which one is compelled to make, and another which one is free to choose any way he or she likes. We tend to think of this latter kind of choice as, if not uniquely human, then confined to the larger animals. Primates, maybe. Don’t know about dolphins. The implication is that this kind of choice appeared somewhere along the evolutionary line; that prior to the appearance of such kind of choice, there was only instinct, which, everyone agrees, is no choice at all. We pride ourselves on not being instinctual, on being able to think for ourselves, to make good and bad decisions.

We are, perhaps, a bit prideful.

The difficulty with this line of reasoning is that it’s never explained whence this new manner of thinking comes. Nor how it works differently from other manners of choice. How there can be two methods of making choices has not been approached by the philosophers, much less by the neuroscientists. It seems mixed up with the notion of consciousness and self-consciousness; the difference between them isn’t clear either.

Fundamentally, the simplest living organism possible has to be able to find food, recognize it as distinct from non-food, and decide to process it. At its most basic level, finding and recognizing food is akin to deciding to process it. Still, the organism has to know how to position its body in order to process the food it has found. It has to have a sense of where its body is and where it isn’t, and it has to have a sense of how to move its body to another place. Conscious or self-conscious? You can call it what you will; regardless, it has to be conscious of itself and its surroundings in order to survive. Every living thing has the same requirement. Every.

So, how does this little, primitive, one-celled critter go about processing the signals it receives from its environment; and how does it communicate within itself? Chemistry and electricity, right? Nothing but. Chemical signals and electric currents transmit all the information the cell needs. What’s more, when two cells decided to get together and become a multi-celled critter, they retained the same methods of communication: chemicals and electricity. In fact, no matter how many cells clump together to make no matter how big a critter—a redwood or a whale—the communication channels never change: chemistry, electricity.

It’s not that these were the best channels available to living things out of which to build communications platforms; it’s that they’re the only channels available. Gravity, for example, doesn’t work well as a communications device. Yet messages can only be sent using the materials at hand. Chemicals make good triggers; electricity is fast.

But as we noted, once that one-celled critter found something to eat, its triggers said “eat,” and the processing began. The choice to eat was automatic. But it was a choice; theoretically, the critter could eat or not eat.

The critter made the choice to eat because it operated with an algorithm (paradigm/software/criteria) that said: when triggers X, Y, and Z fire, swallow that puppy.
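That trigger rule can be put as a toy sketch. The trigger names X, Y, and Z are the post’s own placeholders; the function and everything else here are my invention, purely illustrative of instinct as a fixed rule:

```python
def should_eat(triggers: set[str]) -> bool:
    # The critter's entire "decision": eat if and only if
    # triggers X, Y, and Z have all fired. No deliberation anywhere.
    return {"X", "Y", "Z"} <= triggers

print(should_eat({"X", "Y", "Z"}))  # True  -> swallow that puppy
print(should_eat({"X", "Y"}))       # False -> keep drifting
```

The point of the sketch is only that nothing in the rule leaves room for a second, “free” kind of choosing; the output follows mechanically from the inputs.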

As the critters (I call them “critters” to lump together all living things, animal or vegetable) get larger and competition appears and predators appear, the algorithms become more and more complex. They evolve. Yet, the mechanisms of communication between the various cells and the ruling algorithms remain the same: chemistry and electricity. Even when you get to people, the only way the various parts of the body communicate with each other is through chemical signals and electrical charges. There is, fortunately or unfortunately, nothing that Mother Nature can add to the mix.

Needless to say, when sex was invented, the algorithms became hopelessly complex. Understandably so, though, given the kinds and complexity of sense and response organs. Choices and decisions must necessarily be made virtually instantaneously, especially as mobility and predation increase. The algorithms have to be acted upon sans debate.

In the end, all living things possess the same three things: sensors, output terminals, and CPUs. They can read the environment; they can interact with the environment; and they have a pile of software to work out the details and coordinate things. The bacterium has these. You have these. The bacterium uses those three capacities to find and consume food; we do too. (We also use them to facilitate DNA mixing.) In both the bacterium and us, all the interior communication is done by the same chemical/electrical routes; nothing new has been added to the mix. The signals that make the dog’s mouth move when it barks or our mouths move when we speak are made and processed the same way. Neither we nor the dog, of course, is aware of the millions of interactions required for every bark, every word. What could awareness of that even be like? (Zen masters and LSD aficionados?)
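The three capacities chain together into one mechanical loop. Another toy sketch, with every name and detail mine, purely illustrative:

```python
def sense(environment: list[str]) -> dict:
    # Sensors: read the environment.
    return {"food_nearby": "food" in environment}

def decide(readings: dict) -> str:
    # CPU: the "pile of software" that works out the details.
    return "move_toward_food" if readings["food_nearby"] else "wander"

def act(action: str) -> str:
    # Output terminals: interact with the environment.
    return f"body executes: {action}"

print(act(decide(sense(["water", "food"]))))  # body executes: move_toward_food
print(act(decide(sense(["water"]))))          # body executes: wander
```

Bacterium or human, the loop is the same shape; only the software in the middle gets more elaborate.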

As you can imagine, this has profound implications as to where another form of choice could come from. So far, we can only envision a mechanical, albeit highly complex, method of functioning. Essentially, we have only instinct at this point in the discussion; there is no place for anything else. Even thought, no matter how complex and self-reflexive it seems, has, at this juncture, only a mechanistic explanation, an instinctual explanation. No other explanation—other than God—has been put forward. Where would thought come from if not from a complex function of our CPUs? How is my song any less instinct than that of the nightingale?

In the end, it appears we don’t even have a definition for “will,” much less “free will.” (We carry this problem with us when we think about computers; we worry about when they’re going to be able to spontaneously write and install their own software without wondering where the computer’s existing software would generate that idea within itself; we think that at some point “will” will arrive as if the computers will be touched by the finger of god and become alive; and that being alive will be a fundamental change for the computer. We speak of the “singularity” as if a magical transformation will appear at a point in the future when machines become “alive.” We have faith that we can create life by software, if not by electro-chemistry.)

Ah, people; we’re so cocksure.

Sunday, June 9, 2013

The Carbon Age

Mathias has lent me a book, The Carbon Age, by Eric Roston, detailing the role of carbon in—well—the Universe. On page 48, Roston lists five things to remember about evolution, including: “only populations evolve; individuals adapt,” and “evolution has no goal.”

Ah ha! Someone tell that to the paleoanthropologists. It was a startling revelation to read those words, as I’d begun to think I was crying in the wilderness. I’ve most often put it that “DNA evolves, not individuals,” but I believe we’re saying the same thing. And aimless evolution ensures that one can’t evolve to escape danger. Thank you, Mr. Roston.

Mr. Roston didn’t use my favorite encapsulation that “evolution follows opportunity, not necessity,” but he well could have. In any event, it’s an unequivocal statement of the reason why people couldn’t have abandoned the trees because of the trees’ disappearance. That’s a goal: “We have to learn to stand up because the forest is going away,” (or its concomitant error, “Us fish should learn to walk on land to avoid predators”). As Roston points out, it couldn’t have happened. Those are side benefits to the effects of natural selection, not the intent. You can be quite sure that what drove us to the ground and fish to terra firma was food, pure and simple.

If I understand what I’m told, what distinguishes our ancestors from the pans is bipedality. We maybe didn’t give up our arboreal ways completely at first, but we right away began walking upright. As far as we know, we didn’t have a period when we were knuckle-walking out on the veld, having abandoned the trees but not yet taken to standing up straight. (How would we tell?) We never seem to have gone through a baboon period; we walked out of the forest upright.

That being the case, abandoning the trees went hand-in-hand with walking upright. That fact uncouples bipedalism from the fate of the forest. Even if bipedalism first appeared in a savanna-like environment, it doesn’t follow that the environment was the cause of the bipedalism. Not to mention that it’s not even agreed what kind of environment bipedalism first occurred in.

Ergo, I repeat, any theory of human evolution that doesn’t account for our letting go of the branches is no theory at all; and that “forgoing the forest because it was disappearing” is a circular argument. Fail.

On the Waterfront

Past Horizons, June 8, 2013

“Beachcombing for early humans in Africa”

In the middle of an African desert, with no water to be found for miles, scattered shells, fishing harpoons, fossilized plants and stone tools reveal signs of life from the water’s edge of another era.

The article goes on to talk in general about the state of early hominid archaeology in East Africa and about previously moister conditions, etc.; the idea being that, if you find old shorelines, you stand a better chance of finding old fossils and artifacts.

Sorry, but it’s time for another “Well, duh!” Where else would you expect to find evidence of early humans? Besides caves? (As if scads of early humans populated all the caves from Ethiopia to South Africa. How the hell many caves were there?)

It’s telling, though, that despite the nearly universal finding of fossils from shoreline conditions (barring caves, which are always located near water), early humans are invariably depicted as savannah-living creatures. Whoever wrote this article, for example, was able to calmly write about the human relationship to wetter conditions without ever once making the connection between the conditions and the sites where she was finding the fossils and artifacts.

Okay, I’m no scientist. I know nothing about foot bones or ear bones and I have no opinion about which hominins were capable of what. I rest all my scientific expertise on one statistics of sociology course a half-century ago. What better credentials than that, eh? But that statistics course said: if it looks like a duck, squawks like a duck, and swims like a duck, odds are it’s a duck. I believe that. It’s not proof that it’s a duck, but it has the likelihood of being a duck, fair enough?

So, if one consistently finds evidence of early humans in close proximity to what were, at the time, bodies of water, one can assume a high probability of a causal relationship between them. It’s not likely that the water came to the people.

Just saying…

Friday, June 7, 2013

Man the Grazer

Popular Archaeology
June 2013, Cover Stories, Daily News

"Diet Change After 3.5 Million Years Ago a Gamechanger for Human Ancestors, Say Scientists"

The most significant findings indicate that human ancestors expanded their menu 3.5 million years ago, adding tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment. This, suggest the scientists, may have set the stage for our modern diet of grains, grasses, and meat and dairy from grazing animals.

"We don't know exactly what they ate. We don't know if they were pure herbivores or carnivores, if they were eating fish [which leave a tooth signal that looks like grass-eating], if they were eating insects or if they were eating mixes of all of these."

He says this because the isotope method used in the new studies cannot distinguish what parts of grasses and sedges human ancestors ate – leaves, stems, seeds and/or underground elements of the plants such as roots or rhizomes. The method also cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.

May I leap in before this goes any further? Take that opening sentence; it says that early humans added “tropical grasses and sedges characteristic of a savannah-like environment to the typical ape-like diet of leaves and fruits from trees and shrubs of a forest environment.”

That seems pretty unambiguous. It implies we typically ate leaves and fruits before adding grasses and sedges. I didn’t know that we were leaf eaters prior to diverging from our fellow simians—I thought it was primarily nuts, fruits, and small game, like insects—but I’m a little dubious about us moving over to include the grasses and sedges. To begin with, can one tell the difference between grasses and sedges in the diet of proto-humans, or do they both have the same signature? As it stands, sedges provide only a very minimal addition to the human diet in any culture, so it’s hard to imagine that they were once very popular but then died out.

I’d also question the signature difference between what parts of the grass were eaten, a difference we can’t yet discern. The way the sentence is written, it implies that grasses were used much as a cow would use them, the entire plant eaten, as if we had suddenly become a grazing animal, only to abandon the practice later on. That, too, doesn’t seem likely to me. It doesn’t seem likely that we used any but the seed heads of grasses; and even those, it’s easier to imagine we started to consume grain after we’d domesticated herding animals and kept stockpiles of grain on hand that we might, in desperation, try eating in times of privation. Most importantly, it’s easier to imagine grain consumption after the advent of cooking, since grain is nutritionally far more accessible to us once it has been cooked.

All in all, it seems unlikely that humans ever were large consumers of sedges and grass blades any more than they are now. That doesn’t mean that their data are wrong, though. Instead, the answer lies in a succeeding paragraph, where the article adds that their analytical “method… cannot determine when human ancestors began getting much of their grass indirectly by eating grass-eating insects or meat from grazing animals.”

Well, okay then. Is it likely that people were harvesting grass eaters 3.5 million years ago? You can bet your bottom dollar on that. It’s a hell of a lot more likely that we were hunting grazing animals 3.5 mya than we were doing the grazing ourselves. A hell of a lot more.

I’ve seen this error made before with precisely the same kind of evidence, evidence that our diet of grasses and sedges increased, but in that case, too, they couldn’t tell whether it came from direct or indirect consumption. For some reason, primarily related to our own obsession with fad diets and the heavy PC status of vegetarianism and veganism among the young creatives, etc., the nod is always given to a vegetarian interpretation of our probable early diet history, despite evidence to the contrary.

The profession of archaeology also poorly interprets early dietary data because it has yet to understand how early hunting tools, including stones, were used by our ancestors, millions of years prior to the first flaked stone tool; nor does it recognize the strong probability that standing up to carry hunting tools and game was the only reason we stood up in the first place. Much of that error comes, of course, from the impossible assumption that it was dwindling forest cover that drove us to bipedalism. As any reader of this blog is well aware, evolution follows opportunity; it doesn’t flee danger. If we stood up, it was because we were harvesting a new food source that required us to stand up; and I can guarantee you that a diet of grasses and sedges does not require one to stand up.

Stand up to hunt? Sure, the chimps do it all the time. They just don’t do it enough to make a habit of it, like we do. And if I were you, I wouldn’t give them any ideas.

The good news is that we now have evidence for hunting going back 3.5 million years. You can be sure it goes back twice that far, but this is a good leap in that direction.