ThinkingGoo

This page is about the “thinking goo”: whatever it is that holds a person’s ideas and lines of reasoning together, where logic by itself cannot.

It’s by LionKimbro, and represents his ideas on the subject.

LineOfThinking

AnImaginedDiagramOfReasoningImage

This diagram represents a line of thinking. Or perhaps I should say: “A tree of thinking,” or “a graph of thinking.”

You have ideas, ideas link to other ideas, and we say things like: “Because A, therefore B. And because B, we know that C is not the case, nor D, but E is the case, and E leads to F.”

People who do logic for a living probably have better ways of making diagrams from webs of reasoning. All that’s important is that you get the idea that there are ideas, that there are logical connections between ideas, that we hold reasons in our head that connect them, and so on. The solid lines represent logic, going from step to step.

Update: It’s not strictly logic that we’re talking about here. Lines of causality are important here as well. If someone reasons, “If a storm hits, then some number of telephone poles will likely fall down,” then that’s a causal sequence that may appear in a line of reasoning.

GapsInLogic

A lot of people believe they think “logically.”

That is, for all those little lines between the points, they believe: “This logically follows from that. And that follows from still the other thing.” And so on, and so forth. “If you disagree with me, it’s because you’re illogical.”

I think, in fact, everybody’s lines of reasoning look much more like this:

GapsInLogicalReasoningImage

The same tree structure, but now we have these little gaps everywhere.

Conclusions look like they follow from their premises: “Oh, this looks like it leads to that other thing. But really, it doesn’t. Not actually. Not absolutely. But it sure does make sense.”

Yes? We had this impeccable chain of reasoning up to this one point, and then there was this little itty bitty minuscule gap, and we just make a short hop over it (not going to harm anybody), and we continue on the other side, with another ironclad set of solid reasoning.

Except: logic doesn’t work that way.

If you have a single gap in the chain, logic no longer asserts anything beyond the chain.

In fact, working backwards, if you had a single gap leading up to the present point, then that part of the chain doesn’t necessarily follow, either!

It’s called a "non sequitur," or "it does not follow."

Logic can only, at best, give us little islands of stability. There is something of a ConservationOfRationality, or at least an economy of rationality, because it requires time and calculation to perform the eliminations that discriminate the logical from the illogical.

These little islands can be internally valid, but beyond them, they are mute. They say nothing. And since these networks always have some edge, some boundary, they are always limited. (See: WhatKnowledgeDeservesTrust, for more on these limits.)

Those little islands of idea, those little islands of sense, can float off independently.

So this leads, finally, to the “thinking goo.”

ThinkingGooImage

The “thinking goo” is a goo that holds a bunch of ideas in place.

It’s tempting to call it the “muscle, organs, and tissue,” and the reasoning inside the “skeleton.” But that would be a dangerous mistake, because it’s actually the other way around. The “thinking goo” is the skeleton that holds the whole thing together; the logic and reasoning are actually more like the organs and muscle and tissue: it’d all crumble apart on the floor in a big sloppy mess, were it not for the thinking goo holding it all together! (Perhaps someone can think up a better metaphor that I can use here?)

I honestly don’t know just what the thinking goo is made of.

It could be made of a lot of things:

StableReasoning

Is the situation really so bleak?!?

Given this story, we might be led to believe that computers do not work, we’ll all levitate come 2012, nobody knows anything, evolution is a vile lie, the guy on the street corner makes just as much sense as the guy in the lab coat, truth is determined by voting, and we can all believe whatever we want.

Nah.

This just highlights a few things.

The last one is interesting: “Stable human structures do not require logic in order to function.”

It seems that they do need something, though.

What they need, is “thinking goo,” whatever that is.

WorldView

World views are primarily made of the thinking goo.

Science is a special case, when applied towards the nature of the empirical world.

Mathematics is also a special case, since you’re applying it against nothing. It’s just made of pure chains of strongly trustable reasoning. (Again, see: WhatKnowledgeDeservesTrust.)

But step ever so slightly out of science, into meaning, or interpretation, and you’re back to thinking goo, even with science. SuperFreudianism is, unfortunately, a common mis-interpretation, based on the mis-application of heuristics.

ReasonVsEmotion

A brief word against a common misconception-

Many people have heard that a mind is made of “reason,” which is set against “emotion.” Therefore, whatever is not logical must be emotional.

Equating reasoning with logic, in the first place, is kind of batty to me. Reasoning seems to be a sequence of imagining up possibilities, weighing them against one another, and using logic to wipe out possibilities. Logic is only one part of the process.

And thinking that emotion is “the other thing” is even crazier. Because there’s always a wild array of things going on in our heads, and reasoning works through far more than eliminations and explorations spurred by logic. And there are so many more things that can tweak you or guide you straight, than just emotion. For instance, trust.

Quotes

See Also

CategoryReasoning CategoryInformationManagement

Discussion

I have written this, and I’ve drawn a lot of diagrams, but I do not entirely believe it. In fact, part of the reason I write this, is to be refuted.

For instance: I could see an argument that world views, more often than not, have high integrity. They just fly in the face of evidence, which the person is not seeing.

I could also see a strong case that people do not possess singular world views, but rather possess networked world views, with layers upon layers of views. Whether people are conscious of it this way (most commonly, probably not) is irrelevant. But then, what is the stuff binding the networks together? It is ThinkingGoo.

When I read an economist, or have a conversation with an economist, it all falls apart. “Reason, you are but a dream.”

I suppose you could say: I am SelectivelyOpenMinded about this view.

I strongly believe that everyone who thinks they are logical should take a good long hard look at their lives, and see if it makes any sense from a purely logical perspective.

Pretty interesting. This goo to me seems to create the forces to make people mind the gap. ;) Seriously!

Lion, it’s weird and maybe it is not comforting but … I fully agree.

The first point is that people think that they are 90% rational and logical and 10% emotions, prejudice, genetic predetermination and instinct. But it’s exactly the other way round. Most of the time there are given settings, decisions, and acts that are rationalized and put into a framework afterwards, so that they seem to make logical sense. That can be taken as an experimental fact. One could discuss in detail why this is the case.

You write something like “the world-view is made of this glue,” but probably it’s either “the world view is the glue,” or “the logic is used as the glue to make the broken world-view parts fit together.”

Just my $0.02 as always.

Well, some nice plausible conclusions from this argument, if I may apply some ThinkingGoo to satisfy a desired result:

  • It can make it easier to be tolerant.
  • It can support a more relaxed way of life.
  • It opens the door to IntegrateImagination.

Another nice thing is that it makes it easier to justify support for science. (The things that science is actually good at, that is.) And to talk about just what science is good at, too.

Because what we’re not saying here, is that logic is fundamentally broken, or that science is weak.

I suspect that if I think about this more, that I could discern some major varieties of thinking goo.

For example, I think that “trust” is a special case.

On the macroscopic level, we have trust in these institutions, these organisms, that collect and study empirical evidence.

On the microscopic level, we trust, “Oh, yes, I remembered to lock the door after I left for work,” and stuff like that, to the degree that we don’t worry about it.

It seems that intuition has some capability as well: If you don’t lock your door, does your intuition alert you? Someone clever out there, has to have figured out a way to measure this, respecting that just asking people over and over will influence the way that they behave.

We may be able to gain deep insight into WhatKnowledgeDeservesTrust. We may build some model that says, “Okay, assuming you don’t think there’s a Cthulhu monster trying to prevent you from verifying mathematics, …”

One guy argued that in the next few decades, people will increasingly think about how trustworthy a particular source of information is, and why. That makes sense to me, since we will have hordes of sources of information, and we now have open conflict over legitimacy. And lots of struggles right now (I’m thinking: revelation vs. empiricism) about what we know and how we know it.

It makes sense to me that this territory may well be explored, and popularly understood.

I don’t know.

It seems like a good idea to me.

What we have the chance to do, even without understanding what the goo is, is to create circumstances that make it turn into one goo, one big soup. This definitely leads us toward multilinguality and massive translation, and, to add something futuristic, toward a base translation of whatever intelligent language into Lojban, the only language from which an automatic translation into whatever natural language can be halfway precise.

There is no precise language and no precise translation. No hope in that direction.

Well, Esperanto is as good as you can get.

(Eeek! Not a Lojban-Esperanto flamewar!)

I don’t agree with the “broken chains of reasoning” illustration up there. I’d rather think in terms of probabilities and fuzzy logic - how much trust we give a proposition.

For example, if we have /”(A and B) ⇒ 90% chance of C”/, /”(C and D) ⇒ 90% chance of E”/, and we believe that A, B and D with 90% certainty each, then we should get something like:

 A(90%)
       \
        (A&B)(81%) -> C(73%)
       /                    \
 B(90%)                      (C&D)(66%) -> E(59%)
                            /
                      D(90%)

However, our heads more often have something like:

 A(90%)
       \
        (A&B)(90%) -> C(90%)
       /                    \
 B(90%)                      (C&D)(90%) -> E(90%)
                            /
                      D(90%)

… which, well, is way too optimistic.

(The mathematical model could be improved :P)
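Here’s a minimal sketch of that chain in Python (hypothetical numbers, the same naive multiply-the-confidences model as the diagram, no claim that heads actually work this way):

 # Naive confidence chaining, assuming independence: the confidence in a
 # conclusion is the product of its premises' confidences and the strength
 # of the inference step.
 A = B = D = 0.90                  # confidence in the starting propositions
 AB = A * B                        # 0.81
 C  = AB * 0.90                    # "(A and B) => 90% chance of C"  ->  ~0.73
 CD = C * D                        # ~0.66
 E  = CD * 0.90                    # "(C and D) => 90% chance of E"  ->  ~0.59

 print(round(C, 2), round(E, 2))   # 0.73 0.59

Chain ten such 90% links together and you are already down around 0.9^10 ≈ 0.35, which is what the worldview point below is getting at.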

So, it’s not that there are tiny gaps that we don’t see, but rather that even if all the parts are solid, it’s still fairly likely you’re wrong somewhere. Multiply the graph above by a worldview built, down deep, on loads of not-100%-reliable propositions like “my dad knew a lot about economics”, “Americans are generally happier than Africans” at the bottom of your tree, and you understand how unreliable a worldview can get, even if all of your assumptions and links have 99% confidence.

So it’s safe to assume I’m probably wrong on 2 or 3 major things :) One problem with excessive rationalism is that it tends to see boolean logic where things are in fact awfully fuzzy.

Thanks for this page Lion, one of my major long-term project ideas is to study and model what you call the “goo”. I had never thought of it as a tree diagram with tiny “gaps”, though, that might allow parts of the tree to float around (visually this reminds me of how DNA floats around in the Adleman's biological computation prototype). Your picture helps!

Btw, in the picture ThinkingGooImage, at the end of the left-hand side, do you mean “they don’t”, or “they aren’t”?

In a standard mathematical proof, yes, a single gap in the argument makes the entire proof invalid. In a mathematical proof, either the entire chain of logic is air-tight, iron-clad true, or you have nothing.

There is another mathematically valid technique that works even when it’s impossible to get an air-tight argument. But for this benefit, we sacrifice absolute certainty.

The most enthusiastic promotion of this technique that I know about is: Sl4:UntanglingCognition/PowerBehindScience .

Lion, does this sound like what you’ve been hunting for?

Neat essay! I’m an adept. Maybe I should adapt my graphs up there to have a bit more Bayesianism in them … after all, it’s a big part of fuzzy logic (And I’m not happy with the term “Fuzzy Logic”, it makes it sound like some kind of clever hack for programming robot movements and stuff, whereas it’s the way a lot of our reasoning works).

The bit about having the right words and the wrong word was certainly well expressed. I wonder how many of the concepts we’re struggling with on this wiki are the equivalents of Phlogiston and Elan Vital? And I definitely liked the bit about the problem with talking about “Science” and “scientists” …

Sometimes I think that philosophy professors should be sent away to work at the post office or something, and have AI researchers teach philosophy instead. It would probably be a bad idea, though.

Ha!

I was reading the page, and at the beginning, I was thinking: “Hunh. This is kind of interesting.” I got about a third through, and I found myself thinking: “Okay. Where the hell is this guy going?!”

So I skipped down to the bottom.

“Noooooooooooooo!!”

I was very disappointed.

I’m not a Bayes man, myself.

And, BTW, I’ve used this page on Bayes as a primary reference for how NOT to explain Bayes’ Theorem.

In fact, I’ve written a 5 page (tall pages) VisualLanguage explanation of Bayes Theorem, which I’d be happy to upload. (And it’s been requested that I do so, by others, who I’ve shown it to.)

Bayes Theorem is utterly non-mystical, and easily explained, if you know how to explain it.

One of my big problems with Eliezer’s explanation is that it is sufficiently unintelligible that it easily lends itself to mysticism and pseudorationalism, exactly the kind that this page talks about. On SL4, there’s elsewhere an excellent (if lengthy!) criticism of Bayesian Totalism. I’d recommend looking at that one as well.

I’m sorry to announce: Our world is utterly irrational. The problem isn’t just ignorance: It’s total. Rationality only exists in small pockets. Only.

Are you referring to SL4:BayesCriticised ? Or perhaps SL4:ProbabilityTheoryDiscussion ?

I’d agree that that page on Bayes isn’t the best explanation you could hope for; it doesn’t live up to its promises, and I gave up after a moment. It has some funny bits, but the explanation could do with more pictures, and better pictures (for one, I still don’t understand what that Java applet thing was actually displaying).

But, I still think Bayes is worth mentioning - a lot of science is built on Bayesianism. Bayes is a bit like a micro, dehydrated scientific method (or at least, one of the ingredients of the scientific method). To understand the nature of scientific “truths”, you probably need to understand Bayes. And many people don’t.

And, it’s probably a major ingredient for reputation-based systems, trust metrics, weird voting systems …

That’s not to say that there isn’t still a whole lot of irrationality floating around, mind you.

And, Lion, you should probably read that first link anyway :) There’s some pretty interesting stuff on it, and it isn’t really about Bayes. I found it had mostly interesting stuff to say about words. And, well, it’s pretty well written (OK, and it may not have much to do with ThinkingGoo either).

I’ll read it. I like the kind of reasoning at the beginning, and the consideration of “is science magic?” “What’s the definition of magic? Science?”

But: “To understand the nature of scientific “truths”, you probably need to understand Bayes. And many people don’t.” – I disagree strongly.

Here’s why:

  • You can communicate the essence of the scientific process, completely, and in ways that people can understand and agree with, entirely in human terms, without symbols, basically, without losing anything of essential substance.
  • It does not require Bayes in the slightest.
  • Further, this description would be far more complete than an explanation revolving around Bayes Theorem.

Bayes Theorem is a mathematical formula that describes backwards probabilities over artificial environments (mathematical constructs).
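(For readers who haven’t seen a “backwards probability” spelled out, here’s the standard textbook illustration, with made-up numbers; this is not the VisualLanguage treatment mentioned above. You know how likely a positive test is given a disease, and you want the reverse: how likely the disease is, given a positive test.)

 # Hypothetical numbers: 1% of people have the disease; the test catches
 # 99% of the sick, and falsely flags 5% of the healthy.
 p_sick = 0.01
 p_pos_given_sick = 0.99
 p_pos_given_healthy = 0.05

 # Bayes: P(sick | positive) = P(positive | sick) * P(sick) / P(positive)
 p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)
 p_sick_given_pos = p_pos_given_sick * p_sick / p_pos

 print(round(p_sick_given_pos, 2))  # 0.17 -- a positive test is still mostly a false alarm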

Let’s say you made a mathematical model (complete with impressive looking symbols) of majority voting. And then you said: “You cannot know anything of profound substance about Democracy, until you understand this model. And then once you understand this model, you will have the profound insight of Democracy, that the world is slowly coming to terms with, and you will be part of a cognitive elite, that shall form the foundations of the future of Democracy. Labor to understand this model, and labor to teach this model, and we will be light years ahead in the advancement of Democracy.”

And, you know… You’re missing the 9,999 other things about Democracy, and insanely inflating the 1 thing about Democracy that you’re laboring too hard to explain.

(This here message has been cut to about 20% of its former glory.)

But then, how can I look down upon people who don’t understand Bayes Theorem (or understand and apply it without going into equations or calling it by that name)?

(non sequitur: I wrote this at the same time as Emile. I’ve just resolved the edit conflict.)

I realize just now that I’ve omitted a major part of my “Thinking Goo” story; something that I frequently tell people when I tell the story. It’s this:

The thinking goo is the substance of thought & memory & action & emotion. So the thinking goo itself is a generative force.

We have been focusing so far on the negative element of “brokenness,” in terms of WhatKnowledgeIsWorthyOfTrust?.

But there’s another story here: “What Works.” And ThinkingGoo, very clearly, “works.”

Part of what I’m doing here is giving a validation for metaphysics. Because the metaphysical structure, made out of these disjointed (wholly internally rational) elements, becomes the support infrastructure for ThinkingGoo.

In some ways, these metaphysical structures allow you to transport your thinking and actions to realms that are wholly impossible to reach without them, yet that are objectively, rationally, measurably, material and real.

Let’s say a person is unsatisfied with their personal relationships. We’re all there at some point or another.

Unsurprisingly, there are countless metaphysics for understanding personal relationships. You’ve got the I Ching, Astrology, the Enneagram, Vedantic Astrology, Chinese Fortune Telling, Palm Reading, the Tarot, the Kabbalah, and a gadjillion others. Every single one of these will have a way of telling you something insightful (and, frequently, entirely contradictory to the others), and they can tell you something that will “work” as far as your life goes.

Well, BusinessIsBasedInTrust?, and if something works, then it is used. Without the metaphysical structures, you have very little to go on. You don’t have a ThinkingGoo.

(Some people would argue: “I don’t use metaphysics to model my relationships with people.” I would disagree: They are using a metaphysics, it’s just not fanciful. But it is no more or less grounded in reality. “Ah-hah, but mine is based in psychological research.” But ah-hah, psychological research grants only fragmented insights into these things, and (say) Astrology’s lines of reasoning that are not attached to symbols and dates are highly grounded in practical wisdom around the interactions of roles. That is, the one is valued too much, the other, too little.)

So:

  • Our world view is only very weakly attached to logic.
  • But, that’s not really so important, as far as getting things done in the world.

Yes, that Yudkowsky is entirely too enthusiastic about that trivial formula.

Therefore, he must be entirely wrong. :-)

First the thinking goo helps us suspect that there is a path, somehow, from A to B. We find some pieces that connect to A, and some pieces that connect to B, and try to get them to join in the middle. And we gather other pieces of a chain of logic that seem like they might be relevant. Then you try to bolt it in place using binary logic. If that doesn’t work, then you try to weakly stick it in place with Bayes or fuzzy logic.

It’s mathematically possible to solve any jigsaw puzzle by the following procedure (a toy code sketch follows the list):

  • start with a random piece,
  • check every other piece in the entire box in all 4 orientations until you find one that fits
  • repeat with every free edge in the growing cluster.
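Here is that brute-force procedure as a toy Python sketch, with made-up “edge codes” standing in for piece shapes (a tab +n only mates with the matching slot -n, and 0 is a flat border edge). It is only meant to make the contrast with how people actually solve puzzles concrete:

 import random

 def rotate(piece):
     """Rotate (top, right, bottom, left) one quarter-turn clockwise."""
     t, r, b, l = piece
     return (l, t, r, b)

 def orientations(piece):
     out = []
     for _ in range(4):
         out.append(piece)
         piece = rotate(piece)
     return out

 def fits(board, pos, piece):
     """True if every already-placed neighbour's facing edge is the matching
     slot/tab for this piece (and it touches at least one placed piece)."""
     (x, y), (t, r, b, l) = pos, piece
     neighbours = [((x, y - 1), 2, t), ((x + 1, y), 3, r),
                   ((x, y + 1), 0, b), ((x - 1, y), 1, l)]
     touching = False
     for npos, facing_edge, our_edge in neighbours:
         if npos in board:
             if our_edge == 0 or board[npos][facing_edge] != -our_edge:
                 return False
             touching = True
     return touching

 def brute_force(pieces):
     pieces = list(pieces)
     board = {(0, 0): pieces.pop(random.randrange(len(pieces)))}  # random start
     while pieces:
         # every free edge of the growing cluster = every empty spot beside it
         free = {(x + dx, y + dy) for (x, y) in board
                 for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))} - set(board)
         placed = False
         for pos in free:
             for i, piece in enumerate(pieces):         # check every other piece...
                 for candidate in orientations(piece):  # ...in all 4 orientations
                     if fits(board, pos, candidate):
                         board[pos] = candidate
                         pieces.pop(i)
                         placed = True
                         break
                 if placed:
                     break
             if placed:
                 break
         if not placed:
             break  # the made-up pieces didn't form a consistent picture
     return board

 # Four invented pieces that assemble into a 2x2 picture.
 demo = [(0, 1, 2, 0), (0, 0, 3, -1), (-2, 4, 0, 0), (-3, 0, 0, -4)]
 print(brute_force(demo))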

But most people use a different method to put together a jigsaw puzzle – one that adapts to the particular puzzle in a combination of intelligence, memory (but not “perfect” memorization), and intuition.

“Science is facts. Just as houses are made of stones, so is science made of facts. But a pile of stones is not a house. And a collection of facts is not necessarily science.” – Jules Henri Poincaré

Mr. Sock, you miss the mark. Yudkowsky’s Bayesian pages are more comparable to a person who believes that voting is the heart and the key and the hub of Democracy: It is not, but it is a piece. So he is not entirely wrong; Merely very difficult.

David, I agree completely. We practice "opportunistic problem solving." This also ties in with SpeculativeEngineering: Entirely reasonable and sensible, in my book. You do not have to say: “I am solving this specific problem,” in order to be performing useful work. I’ve been wanting to write a page OpportunisticProblemSolving? for a while now; Perhaps you would like to write it?

I think it’s a real good point, and have wanted to build on it.

David, whatever Poincaré says, science is not facts. Otherwise the facts that “the sun is shining hot today” or “my socks need washing” would create science. This has nothing to do with the ordering of facts (“pile of”).

Science is a collection of theories, which are more or less well tested. Some facts are used to test theories. So relevant facts are used for science, so their collection is part of the scientific method, but facts definitely are not science.

I also like Lion’s friction idea, because it physically predicts a consistency of the thinking goo, not liquid, but not solid either, and that seems right to me. Molecules under friction that dissolve more easily, entangled structures that make a kind of gel, resistant to a certain degree, but once moving, turning more and more liquid, until things are quiet enough again to rebuild a certain solidity. Think about big ideas, Copernicus, Darwin, what a rush in the goo they caused. Not that the friction thing is necessarily right. But the model seems good to me.


I just read Why People are Irrational about Politics, and it’s pretty interesting. I’m not sure it belongs here (maybe in PassagesOfPerspective? SelectivelyOpenMinded?). Basically, it explores different reasons why people disagree a lot about politics, and keeps “rational irrationality” as the best explanation (rather than self-interest or differing values): it’s in one’s “economic” rational interest not to be perfectly “logically” rational about one’s opinions. It takes a lot of effort (high cost) to know which of several political programs is actually the best, and the benefits of being well-informed are low (even if you vote the “right” way, it’s still just one vote among millions).

Quote:

The problem of political irrationality is the greatest social problem humanity faces. It is a greater problem than crime, drug addiction, or even world poverty, because it is a problem that prevents us from solving other problems. Before we can solve the problem of poverty, we must first have correct beliefs about poverty, about what causes it, what reduces it, and what the side effects of alternative policies are. If our beliefs about those things are being guided by the social group we want to fit into, the self-image we want to maintain, the desire to avoid admitting to having been wrong in the past, and so on, then it would be pure accident if enough of us were to actually form correct beliefs to solve the problem. Analogy: suppose your doctor, after diagnosing your illness, picks a medical procedure to perform on you from a hat. You would be lucky if the procedure chosen didn’t worsen your condition.

(Maybe this could be on its own page, or on the talk page? I put it here mostly because, in the pseudo-cluster of related pages, it’s the last active one.)

Thoughts?

Wow. What a perfect and awesome link.

Please work it into the see-also’s?

Or the article itself?

I made it into its own page: RationalIrrationality.

LionKimbro, do you still have that VisualLanguage explanation of Bayes Theorem? Could you stick it online somewhere (VisualWiki?) and drop a link here?

I agree that there are lots of other things going on in the process of science – and in the process of thinking in general – that Yudkowsky completely neglects. I’m glad we are beginning to talk about a few of them. (Is there a wiki on “intelligence” that would be better for talking about all of them? Perhaps the artificial intelligence wiki ?)

The vast majority of the time, people are thinking about things that they are unable to mathematically prove (or mathematically disprove). Even mathematicians typically suspect something is true, long before they have a sketch of the outlines of a proof; and even after they have the outline, it requires a bit of creativity to fill in the gaps.

Alas, most of the time people are unable to describe this “creativity”. What is this Thinking Goo stuff?

  • Some people feel it is indescribable – a “gut feel”, a “woman’s intuition”, a “transcendental revelation”.
  • Other people feel that creativity is (probably) perfectly explainable. Unfortunately, it happens in a part of the mind that is not connected up in a way that we are consciously aware of its working. So it is difficult to figure out what it is doing and how – much like how most people have a perfectly functioning spleen, but are not conscious of what the spleen does or how it does it. Perhaps that part of the mind is doing something like “fuzzy logic” or “Bayesian logic”. But whatever it does, it is sub-conscious.

Sometimes if you train a bunch of people together, they all have pretty much the same pieces of logic in their heads, they all form the same sorts of structures with them, and they are all oblivious to the same kinds of gaps. This can be advantageous – if they are, say, officers in the military, it may be good for them all to come to the same conclusion as to the precise time to attack, even if there are other times that would make just as good a strategy (as long as they all attacked at the same time). They can also communicate new insights to each other rapidly – one can give just enough information to specify the relevant pieces that they all know, plus a few new pieces, knowing that they have enough background information to put it all together. But it may cause bad “groupthink” if they all approve of a plan that has a fatal flaw in precisely one of the gaps they were trained to ignore.

What else can we say about this Thinking Goo stuff?

Yes, I have it, I’ll upload it some time, hopefully soon…

There are size limits on CW uploads, though; I may have to host it on Tao.

Note that: It’s still incomplete… But you should be able to get it.

One thing that I find myself hammering over and over, is that ReasonIsNotLogic?.

If you find a logical hole in someone’s argument, it doesn’t mean that their argument is unreasonable. And if you construct a logical system in your reasoning, it does not necessarily follow that your argument is reasonable (to others’ thinking.)

I talk with a lot of people on the Internet, and it’s a very common fallacy that reason = logic, and that logic = reason. And if someone’s wrong, it’s because they’re illogical, and that if someone is right, it’s because they’re logical.

So this is something to say about this Thinking Goo stuff, probably on a different page.

I think criticism of GroupThink is really just personal anger at a misunderstood system wherein participants are thinking the same way. Viewed neutrally, I don’t think that common language / worldview, or GroupThink, are any different; all that matters is: “Are we angry with their thinking, or not?” People see a kind of thing that they don’t like, and put a name to it. People see a kind of thing that they like, and put a different name to it. But it’s the same kind of thing, structurally. They just like the one, and dislike the other. Someone reasons and, by chance, they get it right: “Intelligence.” Someone reasons and, by chance, they get it wrong: “Stupidity.” But reason is reason. The PygmalionEffect follows from labeling.

You know, it would be easy for me to misinterpret what you said as accusing me of “personal anger issues”.

But I am too smart to fall for that :-).

Many people believe that emotions are “simple” and “primitive”, perhaps even “reptilian”. So one would expect emotions would be easy to simulate on “primitive” robots. But emotional machines ( http://emotionalmachines.com/ ) are turning out to be more difficult to build than we expected.

It seems that some kinds of emotions (especially “empathy”) are far more complicated than step-by-step logical deduction.

Then there are things like “trust” and “motivation” that are essential to actually getting anything done, but (apparently) do not come from logical deduction.

I don’t think I was singling anybody out; And if I did, it wasn’t intended. And if it was intended, it was so long ago, that I can’t remember it, and I ask that you please forgive me..! :)

I also don’t believe that anger is wrong or bad (on the face of things.)

Regardless:

I think people have looked at GroupThink, and become angry with groups, either specifically, or just en masse (“Groups can’t do anything good” type sentiments). GroupThink is a good thing to be mad at, is it not?

But I think that as we develop, we understand how things got to be that way, and then we cease being angry, and go, “Oh, GroupThink is just a particular view on how groups work.” Groups have SharedContext? and shared language and so on, and thus it’s just the way things are that they work together on a shared set of assumptions.

When we understand this, we stop saying, “Oh, that group has GroupThink, but that group has CollectiveIntelligence.” The groups likely have the same structure, and what’s really different is that the person is SelectivelyOpenMinded in agreement with the one group, and not so in agreement with the other group.

There have been efforts to articulate what GroupThink is and is not, but I personally think that these efforts are basically empty; it basically comes down to, “I don’t like how your group does things, and I reject that you don’t take my ideas more seriously.” The analysis on the Wikipedia page is mainly US presidents and their cabinets, and the study of cases where groups failed. I personally think that groups are fallible, and will succeed in some cases (CollectiveIntelligence!) and fail in others (GroupThink…), and that we cannot really predict which groups will succeed or fail, by the weak criteria given.

There are failure patterns and success patterns, and it’s useful to find collaboration models that fit. But when people go, “Oh, that’s just GroupThink,” usually it’s complaining, rather than serious study.

I agree that emotions and emotional modeling are complex. I suspect it’s more complex than the word “emotion,” though; I think when you take that path, you quickly find yourself surrounded by something far greater and more complex than emotion, something that emotion is just one facet of. IdeasLikeStarsAndSymphonies.


update:

I think the ThinkingGoo gets its shape principally from the repetition of SeedIdeas that have worked in the past. These become our “hunches.”

We operate evolutionarily: “What has worked in the past?” What works, we repeat, over and over and over, getting as much mileage out of it as we can, wherever it can fit. These things make up and inform our lines of reasoning (LineOfThinking), and give shape and impetus and confidence and enthusiasm to the ThinkingGoo.

Lion, in your mental model, can each proposition be assigned a numerical subjective probability value?

You say, It’s not strictly logic that we’re talking about here. Lines of causality are important here as well. If someone reasons, “If a storm hits, then some number of telephone poles will likely fall down,” then that’s a causal sequence that may appear in a line of reasoning. – I’ll refer to this stuff as strictly-justified-causal. So, there is the formal procedure of logical reasoning, there is the formal procedure of strictly-justified-causal reasoning, and then there is other stuff, called “thinking goo”, correct?

Regardless of the data representation that we “really use”, it seems to me that we can make a network representation where the nodes are propositions, and parts of the argument network are logically valid, and other parts are strictly-justified-causal valid, and the remainder is “goo”. If there were aliens who only reasoned with logic and strictly-justified-causal, then we could communicate the non-goo parts of our lines of reasoning to the aliens.

Now, can the effects of the non-logical (and non-strictly-justified-causal) cognitive processes upon the logical/strictly-justified-causal parts be represented as “guesses”, that is, nodes that we introduce, without proper logical or strictly-justified-causal justification, as if they were axioms? (Graphically, these would be like glue which holds together the disconnected arguments – but they would not be goo surrounding everything, they would just be little dabs of glue, right where the gaps occur.) Or, if not axioms (which are known to be true 100%), then like axioms, but with probability values of less than 100%? That is, if we were talking to these aliens who understand only logic and strictly-justified-causal, could we communicate effectively by encapsulating all of the weird effects of our thinking goo as guess nodes?
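For concreteness, here is a tiny invented sketch of the representation I mean (hypothetical names and numbers), with the goo showing up only as “guess” entries carrying a subjective confidence below 100%:

 # A toy argument network: each node records how it is justified.
 # "logic" and "causal" nodes follow from their parents; "guess" nodes
 # are goo-supplied stand-ins placed right where the gaps occur.
 network = {
     "storm hits":          {"kind": "guess",  "conf": 0.7, "from": []},
     "poles fall":          {"kind": "causal", "conf": 0.9, "from": ["storm hits"]},
     "phones go out":       {"kind": "logic",  "conf": 1.0, "from": ["poles fall"]},
     "neighbours drop by":  {"kind": "guess",  "conf": 0.5, "from": ["phones go out"]},
 }

 def belief(node, net=network):
     """Confidence in a node: its own link confidence times the product
     of its parents' beliefs (naive chaining, nothing Bayesian here)."""
     b = net[node]["conf"]
     for parent in net[node]["from"]:
         b *= belief(parent, net)
     return b

 print(round(belief("neighbours drop by"), 2))  # 0.32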

Or, is the idea of doing everything with subjective probabilities and guess nodes what you mean by “Bayesian totalism”, which you reject?

I want to read, understand, and respond to this, and this is just a placeholder to say, “Understanding-and-response pending.”

Also, I added: “language one thinks in” to the list of “goo” elements.

“Can each proposition be assigned a numerical subjective probability value?” Of course, that is clear. It doesn’t mean that it has any bearing whatsoever on reality – but it can, of course, be done.

Yes, I do reject Bayesian totalism, and would love to have this/that conversation.

Essentially, at some point, you have to ask yourself, “What should we do, make, or be?” At that point, holding on to Bayesianism doesn’t help you one bit. In the lower aims, it’s easy to reason, “Well, how are we going to get this vehicle to that location?”, and perhaps Bayesian reasoning can help you with that. But when you ask, “Well, what is life for?”, then Bayesian reasoning doesn’t really help you at all. And it’s the answer to that question (and questions of its ilk) that drives the questions about where the vehicle should go. (In which case, if Bayesian methods help one to get a better answer, great.)
