Robert Sapolsky
Just recently I heard an interview with a chronobiologist. I’d never heard the term, but since my spell-checker didn’t flag it when I wrote it, it’s obviously been around a while. As soon as he said the word, it made intuitive sense. This was followed by your interesting piece on the effects of time on decision making. The results you outlined point to curious operations in the mind’s interior.
Lately, I’ve been engrossed with the question of free will. One biologist I ran across in my readings commented that he was waiting for “a molecular explanation of free will.” That may take some time. The longer I stare at the question, the more “free will” and “thinking” become conflated in my mind; I wonder whether there is truly thinking any more than there is truly free will. I have come to question the nature of consciousness. In the end I see consciousness as a real-time way for the vast collection of cells and organisms that any creature is to function as an ensemble, but consciousness is no more the thinking part of the creature than the monitor is the thinking part of the computer. HAL might be talking to you, but the real work is done by the CPU.
In that sense, all thinking is intuitive, whether it’s an on-the-spot reaction or a long-contemplated action. If my own experience is any template, no matter how long I cogitate over something, when the answer does appear, I have no idea from where. I obviously have little control over the processes of rummaging through and filtering all the information that impinges on any decision. For one thing, there’s far too much information informing any decision for me to have a conscious grasp of it. Try as I might, I’ve never come up with an understanding of thinking that wasn’t reduced to a bunch of switches. Oh sure, the switches might be controlled by gauges, but, in the end, the message is either go or don’t go. I appreciate your field for trying to understand the switching processes, but it’s inevitably something one can only see from the outside; one cannot watch oneself think.
In the end I can’t figure out how any thought could arise without being run through some paradigm or algorithm that determines the outcome. If one has to choose between A and B, either there is a reason for the choice (conscious or not) or the choice is random and, hence, without meaning. So where does randomness come from? And “random” implies no control. So if one either has control and makes a decision according to a paradigm, or has no control and the process is random, it appears that the only thing in control is the paradigm. I don’t see room for free will or thinking in that scenario. Maybe we call it “free will” because we have no idea where it comes from. How does one make a choice without applying criteria?
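To make that dichotomy concrete, here is a toy sketch in Python (the function `choose`, its arguments, and the scoring rule are my own inventions, purely illustrative, not a claim about how brains actually do it): either some criteria fully determine the pick, or the pick is random.

```python
import random

def choose(options, criteria=None, inputs=None):
    """Return one of `options`.

    If a scoring function `criteria` is supplied, the choice is fully
    determined by it -- the paradigm is the only thing in control.
    Without criteria, all that is left is a random, meaningless pick.
    """
    if criteria is not None:
        return max(options, key=lambda option: criteria(option, inputs))
    return random.choice(options)

# With a rule, the same inputs always produce the same choice.
prefer_shorter = lambda option, _inputs: -len(option)
print(choose(["A", "BB"], criteria=prefer_shorter))  # always "A"
print(choose(["A", "BB"]))                           # a coin flip
```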
In any event, under that thinking, why the length of time given to making a decision would alter the decision appears to be a matter of response levels in the CPU. That is, if a quick response is required, only certain essential parts of the CPU are brought into play; whereas if one has more time to contemplate, other algorithms and data sources can be accessed by the CPU. The shorter the required response time, the more instinctual the response would necessarily be.
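A minimal sketch of that idea, again with everything made up for illustration (the “heuristics,” their ordering, and the time budget are hypothetical): the shorter the allotted time, the fewer layers of the CPU get a say.

```python
import time

# Hypothetical decision "circuits", ordered from fast/instinctual to slow/deliberate.
HEURISTICS = [
    ("gut_reaction", lambda a, b: a),           # instant: grab the first option
    ("weigh_payoffs", lambda a, b: max(a, b)),  # slower: actually compare the two
]

def decide(a, b, time_budget_s):
    """Consult as many heuristics as the time budget allows; the last one
    to finish before the deadline has the final word."""
    deadline = time.monotonic() + time_budget_s
    answer = None
    for _name, rule in HEURISTICS:
        if time.monotonic() >= deadline:
            break
        answer = rule(a, b)
        time.sleep(0.01)  # stand-in for the cost of running this circuit
    return answer

print(decide(1, 2, time_budget_s=0.005))  # only the gut answers: 1
print(decide(1, 2, time_budget_s=1.0))    # deliberation gets its turn: 2
```

The point being only that, under time pressure, whatever is wired to answer first is the answer.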
Which brings us to children. There may be a Lord of the Flies aspect to children, but my experience is that children have an instinctive morality. Perhaps, as yet, they have only gut feeling.
My untutored understanding (given, as you point out, that other animals possess empathy and altruism) is that those qualities are instinctual. I don’t see how they could be otherwise. If they were selected for, it means that they have value to the species, as the species is what controls trait survival, not the individual, right? Needless to say, it doesn’t seem much of a stretch to think that altruism, for example, would benefit the entire species, nor would it be unreasonable that it was selected for. In fact, wouldn’t it almost have to be selected for?
In that sense, altruism and empathy would be fairly basic traits, ones that would enable a species to survive better. They would, one assumes, be the traits that rise to the surface as spontaneous reactions to situations. Selfishness, it would seem, would require some thought; it would require that other algorithms have time to be expressed.
I guess all I’m saying is that it makes sense that a spontaneous reaction would be what benefits the most people. It’s only natural.