ConservationOfRationality

Reason and Logic are at a premium, perhaps more than many of us would prefer to think.

Human minds are not naturally reasonable and logical, no matter how coldly we speak, no matter how dispassionately we act, no matter how analytically we hold the tips of our glasses in our mouths.

Thinking is a naturally creative act, full of PoeticReasoning. Despite the rash division of humanity into “the reasonable” and “the emotional,” or “the logical” and “the illogical,” or whatever variant of the four humors or the twelve astrological signs of the Zodiac we prefer to think with, just about everyone uses rational thought, and just about everyone uses chaotic thought, in their day-to-day work.

We can only be logical, rational, in tiny pockets. And those pockets can easily be surrounded by sheer nonsense, in which case we are reasoning over nonsense. To even strategize about what is worth being rational about is itself a Herculean metaphysical task.

The myth of the super-rational mathematician

The epitome of the logician is the mathematician. The mathematician holds in his or her head the complexities and laws of mathematical worlds. But the mathematician is not wholly, or even mostly, logical. If mathematicians were intrinsically reasonable and logical, then there would be no need for the elaborate mechanisms of proof, no need for the elaborate social hierarchy, no need for all these mechanisms of ensuring correctness. Mathematicians do their work the same way everybody else does: in fits of creativity and subsequent fits of logical destruction. They come up with wild ideas, test which of those would be logical given the domain, and then come up with a logical answer. We see the logical answer, and marvel at it. We do not see all the false ideas, wrong turns, and imagined abstractions that they worked through to get there.

Sometimes we see mathematicians hit on the correct idea right away. This isn’t because the person is intrinsically a more logical thinker. It’s because they have set up automations within their mind that allow them to quickly and intuitively follow good paths, and quickly weed out and ignore bad paths. With these automations in their mind, they can navigate their spaces 100x faster, 1,000x faster, 1,000,000x faster, than those of us on the outside. Are they intrinsically more logical thinkers? No; they are just people with automation under their belts.
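
To make the “automation” point concrete, here is a toy sketch (the problem, the numbers, and the pruning rule are all made up for illustration, not taken from anything above): the same subset-sum question answered by brute force, and by a search that prunes branches a trained eye would never bother exploring.

```python
# Toy illustration: what pruning "automations" buy in a search space.
# Everything here (numbers, target, pruning rule) is made up for illustration.
from itertools import combinations

NUMBERS = list(range(1, 19))   # 2**18 = 262,144 subsets in total
TARGET = 40

def count_exhaustive(numbers, target):
    """Look at every subset, counting how many we had to examine."""
    examined = solutions = 0
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            examined += 1
            if sum(combo) == target:
                solutions += 1
    return solutions, examined

def count_pruned(numbers, target):
    """Depth-first search that drops any branch whose sum already exceeds the target."""
    examined = solutions = 0

    def search(start, partial_sum):
        nonlocal examined, solutions
        examined += 1
        if partial_sum == target:
            solutions += 1
            return                  # numbers are positive; extending can only overshoot
        if partial_sum > target:
            return                  # the "intuition": this branch cannot work
        for i in range(start, len(numbers)):
            search(i + 1, partial_sum + numbers[i])

    search(0, 0)
    return solutions, examined

print(count_exhaustive(NUMBERS, TARGET))   # same number of solutions...
print(count_pruned(NUMBERS, TARGET))       # ...after examining far fewer candidates
```

Both searches find the same answers; the pruned one simply refuses to walk down paths that cannot pan out. That, rather than some extra reservoir of pure logic, is a fair caricature of what the trained mathematician has.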

They still struggle to produce rational results. Rationality is still at a premium.

Outline of Ideas

See Also

CategoryReasoning

Discussion

And I’m interested in exploring the types of cognitive theories suggested by your title. This bears directly on the great A.I. problem of how to build a mind which can hold conflicting beliefs. (Although maybe some of that discussion would be better placed on AiWiki.)

Why is rationality at a premium? Some ideas:

  • It’s computationally expensive to be rational. First, you have to really think and question every conclusion to make sure it follows from your assumptions. Second, you have to “dot all the i’s and cross all the t’s” in your rational arguments/proofs, which takes a lot of work (but as can be seen clearly in math, it can’t be neglected; it does actually happen that something you thought was just a small detail you didn’t bother to completely specify turns out to be the Achilles’ heel that makes your conclusion wrong). A small sketch of this cost follows the list.
  • Rationality has to proceed from a basis of assumptions or axioms; in some contexts these axioms take the form of “goals” (“don’t murder”; “maximize utility”; “maximize human life”). But in real life, person A often disagrees with something that person B thought was an axiom. Person B might claim they are being rational because, given their axioms, the rest of their argument is rational; but person A might feel that there is a “rational” argument that shows that B’s axioms are wrong.
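
To make the “computationally expensive” point concrete, here is a minimal sketch (the propositions and function names are made up for illustration): the brute-force way to check that a conclusion really follows from your assumptions, in propositional logic, is to test every truth assignment, and the number of assignments doubles with every proposition involved.

```python
# Toy illustration: brute-force check of "does the conclusion follow from the
# assumptions?" in propositional logic. All formulas here are made up.
from itertools import product

def entails(assumptions, conclusion, variables):
    """True iff the conclusion holds in every model where all assumptions hold.

    `assumptions` and `conclusion` are predicates over a dict of truth values.
    Returns (verdict, number_of_assignments_examined).
    """
    tried = 0
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        tried += 1
        if all(a(model) for a in assumptions) and not conclusion(model):
            return False, tried        # found a counterexample: the argument is invalid
    return True, tried                 # had to examine all 2**n assignments

# "If it rains, the ground is wet; it rains; therefore the ground is wet."
valid = [lambda m: (not m["rain"]) or m["wet"],    # rain -> wet
         lambda m: m["rain"]]
print(entails(valid, lambda m: m["wet"], ["rain", "wet"]))    # (True, 4)

# Affirming the consequent: "rain -> wet; the ground is wet; therefore it rains."
invalid = [lambda m: (not m["rain"]) or m["wet"],
           lambda m: m["wet"]]
print(entails(invalid, lambda m: m["rain"], ["rain", "wet"])) # (False, ...)
```

With two propositions this is four assignments; with thirty it is over a billion, which is one way of seeing why dotting every i in a real argument costs so much.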

Intuition, heuristics, and shortcuts are a cheap approximation of rationality: easier to remember, easier to express, quicker to apply. Many “axioms” (“don’t murder”, etc.) can be approximations that may not rationally hold in every case. So a fault of rationality would be to stick too closely to those axioms without questioning them.

I remember a discussion on a bulletin board, where a Christian was asking atheists whether the execution of kids in Iran for homosexuality was wrong in any absolute sense. Some were answering no (“there’s no absolute morality, it’s all social constructs”), some were answering yes (“I’d rather be right than consistent”). I see that as a case where the same axioms were held but some didn’t see it necessary to stick to them too closely. I guess what I mean by this is that injecting rationality at one level does not necessarily improve the “truth” of the result (though having things clearly spelled out is an advantage - it makes review and checking easier).

Also, I don’t think that a high level of rationality can be reached outside of science. In mathematics, even if an individual is faulty, a good enough “social mechanism” can have a pretty good rationality. But I don’t think it’s nearly that easy for anything in the “real world” - politics, morality, etc. There have been attempts to impose uniformity, and then claim it was rationality, though.

One of the nice things about mathematics is that it’s perpetually unchanging. It’s not like it’s trying to fool you, or even has the capacity to fool you. There is no “interpretation” to be done. But even in this very static domain, we have a hard time.

We don’t just map out mathematics- collecting the sum total of all things that could be mathematically true. There is a degree of mapping- we find “key nodal points” from which you can branch out and discover things you want to know. But there is also the “mathematics is what’s useful to us” angle on it. We focus on mathematics area X, and then later on mathematics area Y. We forget worlds of mathematics (or hide them away in a distant library). It may be cheaper to recreate some branches than to perform an exhaustive search of the library for the conclusion we want. When a particular type of math is very important to us, we proliferate its knowledge all over, and even hologram it on top of other fields- computer graphics taught in Linear Algebra. Linear Algebra has been presented in many different ways, depending on what was important in its day.

If we think about something like “figuring out what that person means,” or “trying to stay ahead of the competition,” there are a number of constraints on time which quickly lead to ambiguous situations.

We can talk about heuristics in two senses:

  • Heuristics that are generally dependable. Gases expand to fill their container. (ok.) Cars don’t suddenly leap backwards, if they’re going forwards, in normal circumstances. (ok.) All sorts of stuff.
  • Heuristics that are very ambiguous. A white flag means surrender. (hmm…) If a person blinks a lot, it means they are lying. (hm…)

When we get into time constraints, the matters of fact of daily life, we very quickly run into ambiguous territory. (This also makes the calculation of ethical behavior extremely difficult.)

I suspect this is because people have pretty much the same intelligence as other people. If there were one super-human with extraordinary computational power, it might take all of their energy not to accidentally manipulate the hell out of the people around him or her (likely “it.”) The super-intelligent actor cannot help but completely simulate the target of interaction. “You’re so easy to figure out, I have to forcefully pretend not to know your moves, in order to pretend to interact with you on an even keel, and not manipulate you too much.”

Two or more super-powerful minds, interacting among us, would likely result in a war-of-the-gods scenario, not uncommon in Greek PanTheism?.

In this case, their simulations of each other can keep each other in check. (Pity us poor monkeys on the periphery.)

Our heuristics can be hacked. Make a gas that, for whatever reason, travels in a straight line. Infiltrate trusted communications channels, subverting them beyond their original purpose. The cigarette companies do just this: they pay big bucks to figure out how to slip a thought around your mental filters.

Any sensors that we rely upon can be subverted. Secret sensors, indicators, can be discovered, rooted out.

Intelligent opposers will find where we are not being rational, and exploit the hell out of it.

I think a high level of rationality can be reached outside of science. Business people make a lot of money finding useful ways of being rational; some school kids recognize the rationality of popularity, and successfully perform very complicated lines of rational thinking in order to move up their social ladders.

People are smart everywhere.

Courts are another example of conserved rationality. They only have so much time to hear cases. They have to make decisions about which cases they will hear, and they have to figure out how much time they will spend on each. Their time is at a premium, and the time of lawyers and judges is expensive; only the time of juries comes cheap. We economize by making use of previously resolved cases, similar to how in science we economize by making use of previously determined scientific theories.
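
One way to read “economize by making use of previously resolved cases” is as caching: a toy sketch (all the names and the docket are made up for illustration) in which the expensive deliberation runs once per novel kind of question, and later questions that match an earlier one reuse the stored ruling.

```python
# Toy illustration of precedent as caching. The case types and the "ruling"
# function are made up; the point is only that deliberation runs once per
# novel question and is reused afterwards.
from functools import lru_cache

deliberations = {"count": 0}

@lru_cache(maxsize=None)
def ruling(case_type: str) -> str:
    """Stand-in for a full hearing: expensive the first time, free afterwards."""
    deliberations["count"] += 1
    return f"precedent for {case_type!r}"

docket = ["contract", "tort", "contract", "contract", "tort"]
decisions = [ruling(case) for case in docket]
print(deliberations["count"], "full deliberations for", len(docket), "cases")
# -> 2 full deliberations for 5 cases
```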

The play between authority, unique artifacts, reproducible artifacts, time, limited quantities, is very interesting.

  • authority: reputation, SocialHierarchy?, social relations
  • special artifacts: some test environments, some build or manufacturing environments, some weapons (valuable because they are unique), prototypes, the root-name servers, the WikiPedia
  • time: energy, labour, work, always meting out the same for everyone

The play between these factors leads to these calculations, these logical structures. If A, then B. Many of them are throwaways. You calculate the scenario once, and the results are not reusable.

Or perhaps not: There are people who know how to make tons of money, wherever they are, whenever they are. The patterns repeat. Perhaps we have all just misdirected our energies, not figuring out how to play the important games. Or perhaps it’s the people who have trained themselves to acquire vast sums of money who are playing the wrong games. By their discipline, they cannot spend the time to reconfigure their minds to be able to see what is valuable work to do. (For example, they cannot spend the time to see what valuable things can be done in a particular space of work.) Nor can they spend the time to figure out how to discriminate who is having good ideas, and who is having bad ideas. They can only invest in proven businesses and whatnot.

Or maybe not. I don’t know. But it is clear that there is a conservation of rationality. The time it takes to ThinkTalkAct is at a premium. That is why rationality is at a premium. That and the complexity and changing shape of the environment.

Perhaps the Buddhists are right. Perhaps the Buddhists are wrong. Perhaps the Buddhists have made essential errors on the path to Enlightenment. Perhaps the drunk is right. Perhaps the drunk is wrong. Perhaps the efforts of three civilizations are all wrong, because they made a miscalculation that the minds of the civilizations did not notice. Perhaps the whole thing is illusion, and on a specific day, at a specific time, after so many yugas have passed, there will be spontaneous mass enlightenment, as to all things, regardless of effort or even mercies granted. There are things we cannot know, regardless of the effort put into them.

At the base level, all things are faith. Faith and hope. Hope wherever there isn’t faith.

Faith, hope, and charity.

There are many things people do that happen too fast to rationally think about. Stuff like:

  • walking
  • running
  • skate-board stunts
  • having real-time conversations in a foreign language (or even one’s mother tongue)
  • speed-reading

People train, over and over, to wire up their brains so they can just do the right thing (or an adequate approximation to it) without thinking about it. Almost a knee-jerk reaction.

This leads to a lot of “advice” along the lines of “Don’t think about it, just do it”.

(On the other hand, sometimes when people do the same thing, over and over, it becomes all too easy to do it one more time, rather than put any effort into figuring out whether it’s really the best thing to do in this particular situation).

And yet … people occasionally succeed in doing things they have never done before. I find it even more amazing that occasionally, one person succeeds in doing something that has never been done before by anyone.

Saw this:

It basically says: Logic is a useful tool to show a problem in an argument. But we should not be persuaded by a display of logic. Unless the field is mathematics, the premises of the logic are almost certainly arguable.

Even the totality of science and the utmost efforts of Bayesian reasoning cannot tell us with certainty, only ever with some probability, that the world we wake up in tomorrow will be the same one we slept to see the day before. People imagine that 5,000 years is a long time, enough to prove eternal stability, but eternity is a very, very, very long time. Recognizing the material world and interpreting it with logic, we find no basis for caring about it. The systems evolved from the material world rely on it and cling to it, but it is incalculably plausibly a dream clinging to a dream. There is no logical reason to believe it has any substantial reality to it.
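
A standard toy model of that point is Laplace’s rule of succession: with a uniform prior, after observing n sunrises and no failures, the posterior probability of one more sunrise is (n + 1) / (n + 2), which approaches but never reaches 1. A small sketch (the numbers are only illustrative):

```python
# Toy illustration: Laplace's rule of succession, the classic Bayesian answer
# to "will the sun rise tomorrow?". No finite run of observations yields certainty.

def rule_of_succession(n: int) -> float:
    """Posterior probability of success on trial n+1 after n straight successes,
    under a uniform prior on the unknown success rate."""
    return (n + 1) / (n + 2)

for years in (1, 10, 100, 5_000):
    days = years * 365
    print(f"{years:>5} years of sunrises -> P(next sunrise) = {rule_of_succession(days):.8f}")
```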

We should recognize science as a system that somehow carefully collects and attaches rationality.

That said: We should recognize that scientists only see the portion of the totality that they are interested in. If a scientist is not interested in learning that there’s no purely logical reason (mechanical, yes- purely logical, no) to believe the world will be here tomorrow, they won’t. Not unless you can electronically increase philosophical “reach”- show the natural ends of lines of thought in a way that the scientist trusts accounts for critical ideas.
