KnowledgeIsBasedInTrust

The idea is that “knowledge” is based in trust, not in reality.

That is, if you feel that you know something, what that really means is that you trust 1 that the thing is true, rather than that you actually have some sort of supernatural clarity about the universe.

The statement is twofold:

FunctionalKnowledge

We can talk, then, about a sort of “functional knowledge.”

FunctionalKnowledgeImage

People can believe false things; it does not matter: there is still something that they claim to “know,” and that sense of knowing is based in the trust they hold for whatever it is they claim to know.

Since people work with this something, we can call it a “functional knowledge.”

AbsoluteKnowledge

We can contrast that with “absolute knowledge.”

AbsoluteKnowledgeImage

Absolute knowledge is something that is known because it was communicated from a truth, and because the trust in that communication is justifiable.

Do You Know Anything?

We absolutely, positively cannot know that the universe is real, or that the universe was not created randomly just 1 second ago.

This is not a vague theoretical possibility, some idea only invoked by argumentative idiots; this is a living reality.

And the direct consequence of this is that you don’t know jack **** about reality.

This should humble anybody; generally, it does not.

Do spend some time in your life recognizing that the whole thing is just a dream, if only to balance all the time you act as if it is not. It is the rational thing to do, after all, if you are at all interested in living sanely, in accordance with logic.

The reason we are not humble, however, is because knowledge is a form of trust, and the things living in worlds do well to trust in their mediums.

The Trust of Mathematics

When you work through a 100-page proof, showing how to get from a set of axioms to a single conclusion, you do so in the trust that the ArgumentPyramid holds its integrity.

If we could perform verification over the whole pyramid at once, we might not have to assert any trust; we could just see the completeness and clarity of all steps at once, and thus say: “The whole thing is clear as I see it.”

However, you would have to take care that the communication of the results from all the verifying points happened with no distance between them. Even if the different parts of the brain all saw their parts as true at the same time, they would still have to report back to a central ring in order to see that all the other parts voted “clear” as well, and one of those messengers could be corrupted on the way to the meeting point.

It is an interesting question to ask: “When I see that 1+1 and 2 are just different names for the same thing, is that trust?” I won’t try to answer that. I’ll just acknowledge the question.

What is interesting, and what I will address, is that we delegate trust to the checking process. We check one part of the ArgumentPyramid, and then we say to ourselves, “This checks out, it is clear, I accept the validity of this,” and then we trust that feeling.

Error can creep in, and the longer the proof, or theorem, or whatever, the greater the likelihood of error.

We have a way around this; we call it “peer review.” We get other people to check the work, and the more of them who check it and say “looks good,” the greater trust we have in the conclusions.
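To see why more checkers mean more trust, here is a toy model (my illustration, not a claim from this page): suppose each reviewer checks independently and catches a given error with probability p. Then the chance that the error survives n reviews is

  P(\text{error survives}) = (1 - p)^n

With p = 0.5 and n = 10 reviewers, that is about one in a thousand. The weak point is the independence assumption: reviewers who share training and incentives are not independent, which is part of why the trust network needs diverse checkers and not just many of them.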

We have recently begun employing machines in the checking process. We have good ways of checking that the machines themselves are trustworthy, and then we trust the machines far more than we do humans, and so machines are set to the task of establishing knowledge. (That is, they are set to the task of establishing trust.)
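As a concrete illustration of machine checking (my example; the page itself does not name a tool), a proof assistant such as Lean lets the machine verify every step mechanically, so that trust shifts from the 100-page argument to a small, heavily audited kernel:

  -- Lean 4: the kernel checks these mechanically.
  -- Trusting the theorem now means trusting the kernel, not the author.
  example : 1 + 1 = 2 := rfl

  theorem my_add_comm (a b : Nat) : a + b = b + a :=
    Nat.add_comm a b

Note that this does not escape the trust question; it relocates it, exactly as said above: the machine is set to the task of establishing trust.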

Do retain skepticism, and remember that knowledge does not mean reality; it merely means trust.

Science is a Trust Network

The body of science is actually an enormous trust network.

BrunoLatour does a great job focusing on the processes by which it works: how scientific controversy is worked through, the human processes that go into determining what gets researched and what gets invented, and what all of this says about scientific culture.

But we focus here on science as a more abstract body: it is like a multi-tentacled beast performing tons of experiments, together with a cybernetic infrastructure for peer-reviewing experiments and lines of reasoning, so as to maintain the work of science.

It is critical to recognize that science is a trust network, and that science is only done while the trust network protocols are maintained. If we lose the trust network, if the protocols are not followed, then it is not science.

Just as you can’t check a long proof if you don’t have some way of remembering what was valid or invalid in the other parts of the proof, you can’t do science if you don’t have some way of trusting that other people’s work is valid or invalid.

It’s interesting that the system does not have to be perfect. People can lie and fib, and the system will still work, eventually, because with time, and with enough people following protocol, you can catch people at the lies and the fibs.

It’s easy to demonstrate that knowledge is based in trust; just ask a professor of physics about some question, in depth, that is outside their area of research. Wait for the professor to give an equation. Then immediately doubt it. Express the doubt, and listen for the response. Physics is so insanely large & complicated, there’s no way any human can go very deep on all of it. If your professor is honest, you’ll eventually hit: “I don’t know, but I believe it; I can connect you with books or people who can answer that question.” The reason the professor believes in “PV=nRT,” or whatever, is because the professor is part of a very large trust network, one that stretches throughout history, and that has at its extremities direct experiments with reality and manual checking of logical proofs.

See Also

CategoryReasoning

Discussion

I think the idea of this page is common to radical constructivism, which departs from the idea of a (more or less objective) reality. This way everything just becomes a matter of agreement and of people trusting each other. This looks innocent, even skeptical in a sympathetic way, but it is disastrous. It means that if we agree (or are made to agree by media or advertising or propaganda) that owning a car makes us happy, or that Jews are not part of society and can be eradicated, there remains no firm philosophical ground from which to withstand it. Yes, it is true that “truth” is a difficult, maybe paradoxical concept (also see WikiIsParadoxical), but there is no future in giving it up.

Knowledge is not based in trust. Trust is an aspect of the transfer of knowledge. Everybody should understand the difference. If it were different, Robinson on his island couldn’t have had any knowledge about the island and his life there, because there was no one he trusted from whom he could have gotten his knowledge.

HelmutLeitner

What, Robinson can’t trust that if he kicks a pineapple tree, his toe will get stubbed, simply because he has no one to trust?

(I’ll address the others, I just want to iron out this one, first.)

thanks for writing this page. it expresses how i already thought about science, but consistently in the language of trust networks, which i think is a step forward.

it also sort of implies something i’ve been interested in for a while: that maybe we can use some of the same algorithms to build knowledge systems inside one artificial mind as we use to build knowledge systems in society.

i agree with Helmut’s concern. And I agree that “Trust is an aspect of the transfer of knowledge.” But I wouldn’t go so far as to say “Knowledge is not based in trust.” Most of what we consider true is transferred knowledge, and is therefore based in trust.

Also, Lion’s idea of one part of a mind having to “trust” the results of an intermediate computation done by another part is interesting and I bet it’s correct.

But I guess what’s missing is that (for those of us whose epistemology includes an objective reality) knowledge is not ONLY based in trust. You might decide a statement is false even if you trust its source, and you might decide “from first principles” that something is true.

As Lion points out, it is possible to represent this within the trust-based epistemology given above:

  • direct knowledge from sensory experiences is included by saying “you trust your senses”; as Lion puts it, the trust network “has at its extremities direct experiments with reality”
  • if society says that cars make us happy, one can still say, “I don’t believe that because I trust my own sense of what makes me happy more than I trust society’s conclusions”
  • if everyone agrees that Jews are not part of society, one can still say, “I don’t believe that because I trust my conscience more than I trust society’s consensus”, or “I don’t believe that because I trust certain ethical ‘axioms’ more than I trust society’s consensus, and those axioms contradict this statement”.

Another way of putting this: the algorithm we are talking about is a large network of pipes. Through these pipes flow propositions. There are sources and sinks at the extremities of the network. The sources are sensory data, axiomatic statements that we intuitively trust, and more generally the results of internal mental processes that we can’t directly observe but whose outputs we implicitly trust. The sinks are computations that query the network, asking various questions such as “does my toe hurt?”, “do cars make me happy?”, “if I drop this apple, will it go up or down?”, and “are Jews part of society?”.

The network itself consists of connections between various information-processing entities; some of these are internal processes within a mind or within a person (verifying each step of a mathematical proof), some of them are other people or other “cybernetic entities” (“the man on the TV ad said that buying a car made him happy”; “my professor said that F = ma”; “Wikipedia said that the United States is in North America”). At the end of each segment of pipe is a spigot; this represents the fact that each information-processing entity is free to accept or reject propositions coming from that pipe. For instance, if the man on the TV ad says that buying a car made him happy, I am free to accept that (because I trust him), or reject it (because I think he was just being paid to say that).

This is simplifying a bit, because in reality you also trust some propositions and not others from the same source, based on what else you have heard. For example, if one of my friends tells me he broke up with his girlfriend, I’d believe it; but if he told me that the speed of light is 1000 mph, I would not believe him, because that proposition conflicts with other things I’ve heard from sources that I trust more about physics than I trust that friend.
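Here is a minimal sketch of that pipe-and-spigot model in Python (my illustration; the class names, topics, and trust values are all hypothetical, not anything specified above). Each spigot accepts a proposition unless it conflicts, on the same topic, with one already accepted from a more-trusted source:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Proposition:
      claim: str   # the statement flowing through the pipe
      topic: str   # used to detect conflicts, e.g. "physics"
      source: str  # which entity sent it down the pipe

  class Mind:
      """A sink in the trust network: it keeps, per topic, the claim
      from the most-trusted source it has heard so far."""

      def __init__(self, trust):
          self.trust = trust   # hypothetical per-source trust levels
          self.accepted = {}   # topic -> accepted Proposition

      def spigot(self, p):
          """Accept p unless a more-trusted source already said otherwise."""
          held = self.accepted.get(p.topic)
          if (held is not None and held.claim != p.claim
                  and self.trust.get(held.source, 0) > self.trust.get(p.source, 0)):
              return False     # conflict: keep the more-trusted claim
          self.accepted[p.topic] = p
          return True

  mind = Mind(trust={"physics textbook": 0.9, "friend": 0.6})
  mind.spigot(Proposition("c is roughly 670 million mph", "physics", "physics textbook"))
  print(mind.spigot(Proposition("he broke up with his girlfriend", "friend's life", "friend")))  # True
  print(mind.spigot(Proposition("the speed of light is 1000 mph", "physics", "friend")))         # False

The point of the sketch is only that “trust” lives in the spigot, not in the proposition: the same friend is believed on one topic and rejected on another, exactly as in the paragraph above.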

Yes yes yes!

This AI stuff is a big part of my line of thinking! I have some other destinations as well.

To address some of what Helmut is saying, and also leading into AI stuff:

  • The “knowledge” spoken of, in this sense, is not necessarily reality. The participants could live in a VR world, for example, as brains in a vat. The objects of their sensation could be entirely imaginary.
  • The “knowledge” could be false. We are talking about the constructions people (or AIs) have built in their minds.
  • Most of this knowledge is defeasible, and also fragmentary.

So in this particular message, I am talking about knowledge from a more “coherentist” perspective, and thinking more along the lines of “is there a justification process?”

When I wrote WhatKnowledgeDeservesTrust, I talked about it in terms of “absolutely justifiable knowledge,” which is knowledge that can pass beyond both the dream hypothesis (the world is a dream, the world is a simulation, the world was created 5 seconds ago) and the evil genius hypothesis (memory is not trustable; clear intuition (1+1=2) can be bogus, by way of mental infiltration).

What these hypotheses do is expose the faiths in our knowledge, such as the faith that the laws of physics are the same all the time. It is important to know that we don’t really know.

God I wish I had my SVG wiki and a tablet. I have a ton of diagrams here, and the ContentRouting is the bottleneck, not the construction of diagrams.



People naturally want to know who they are talking to. But perhaps this is just a solution to a deeper problem.

I suspect that even if we had an infallible method of identifying who any message came from, we would still have other problems. Sometimes words come out of someone’s mouth that should be ignored (“It’s not him talking, it’s the booze”). Sometimes people make mistakes (“The horseless carriage will never catch on.” – Wilbur Wright.) (“I have plenty of money in my checking account” – lots of check-bouncing people). Sometimes people we don’t trust happen to say things that turn out to be true (or at least useful).

I suspect that if we found ways to mitigate those other problems, we wouldn’t care who exactly said a particular message.

Perhaps there are better solutions to the deeper problem.

DavidCary

  “I suspect that if we found ways to mitigate those other problems, we wouldn’t care who exactly said a particular message.”

?!

I think that the way we “mitigate those other problems” is by attaching reputations to people. When we do that on the internet, that’s InternetBonding.

Attaching reputations to people is certainly one way to mitigate the problem of people believing a statement when it is false, or rejecting a statement even when it is true. (And the problem of people wasting time on statements that aren’t helpful: IgnorePeople)

As AlexSchroeder said, the issue is trust.

But there are other ways. For example, I tend to trust articles in certain magazines and on WikiPedia (except on April 1). The magazine as a whole has a good reputation. But there is no one person in particular whom I can name and say “I trust this article because I trust that person, who endorses it.”

I suppose one could argue that there really is one person (perhaps the “chief editor” of the magazine) who is responsible for the accuracy of that magazine, and that’s the person I trust, even though I associate his reputation with the magazine name, rather than his given name.

But I enjoy reading several authors who write both fictional novels and factual magazine articles.

Why is it that I believe them when they are published in a (non-fiction) magazine, but not when I read their books?

Seriously. I know I do trust certain sources, but I’m finding it hard to verbalize why. If we had a list here of “ways to find out if an article is believable or not”, then we could build WikiAffordances to support people in using those ways. (One of those ways would certainly be “if someone I trust believes/endorses the article”, or more generally, “if there is a WebOfTrust from me to someone else who believes the article”).

(EditHint: this is drifting away from the topic of InternetBonding: proving that a group of messages were written by, or at least endorsed by, a single person … move to a better page? One that deals with “trust in general”, with links to both this page and TrustedLinkLanguage)

DavidCary


I would love to dedicate my time to articulating the different positions of epistemology in diagrams.

Many people see arguments (the Coherentists vs. the non-Coherentists) where actually, they are just talking about different parts of epistemological reasoning. (The coherentist is asking: “Is the theory in this person’s knowledge coherent? Does it make possible functional manipulation of the environment?” The non-coherentist is asking: “Is it real? Is the knowledge justified?” They are radically different questions, and you use the different questions to solve different problems.)

Unfortunately, I don’t think diagramming this all out is in the cards for me.

Footnotes:

1. Note that “trust” does not require that the trust be in a person. You can “trust” that it will hurt to kick a wall, without having to trust any particular person saying that it’s so. Alternatively, we could say: “People trust their mind.”


