Knowledge versus Belief
Lunch discussion led by Faculty Research Fellow Jennifer Nagel (Philosophy)
11 October 2018
Human adults “read” social scenes automatically: without conscious effort, we see other people not just as physical objects moving through the world, but as agents who see, want, and know many things about this world. This social intelligence is an amazing capacity. Seeing what someone knows or wants is not like seeing the color of their hair. We rely on remarkably subtle outward cues in calculating the inner lives of others: tiny shifts in the direction of eye gaze, fleeting facial expressions, slight differences in gestures of pointing or reaching. Meanwhile, we have very limited introspective access into how exactly we are performing this task of mental state attribution (as is the case for many other intuitive capacities, such as face recognition).
Here’s a question you might never have asked yourself: when you watch someone who (for example) sees a ball rolling into the corner pocket of a billiard table, do you see that person as believing that the ball is now in the pocket, or as knowing that the ball is in the pocket? (Or do you do both of these things at once, attributing knowledge and belief together?) It’s not trivial to figure out which of these states we are tracking, because we generally expect very similar behavior from knowers and believers. Whether a rival player knows or just believes that the ball is in the pocket, he will reach into that pocket if he wants the ball.
Questions about exactly which mental states we are attributing are difficult, but not unanswerable. To answer them, we need to figure out the inner logic of mental state attribution. Empirical research can help, but not without making some philosophical assumptions about the character of knowledge (and belief). We may also find ourselves rethinking our philosophical theories of knowledge and belief on the basis of what we discover about how these attitudes are naturally attributed.
To understand the mature human capacity for mental state attribution, it can help to look at creatures in whom this capacity is not quite at our level, and here we discover something really interesting: knowledge is easier to track than belief. Our closest relatives in the animal kingdom, apes and monkeys, seem to be capable of recognizing whether members of their own kind do or do not know some important fact about their shared environment, especially in competitive situations. In games involving hidden food, for example, they are great at keeping track of who knows about what, and who is ignorant. But nonhuman animals can’t keep track of any beliefs that fall short of knowledge.
Human adults can easily figure out situations where a competitor will have a mistaken belief: for example, if someone sneakily takes the ball out of the pocket when his rival turns her back for a moment, an adult who watched the whole chain of events will expect the rival to reach into the pocket where she mistakenly believes the ball is still located. We expect the competitor to have the false belief that the object is in its original location. This ability to track belief is something that our closest animal relatives don’t share. There was a flutter of excitement in 2016 when researchers from Michael Tomasello’s lab in Germany found great apes showing patterns of anticipatory looking consistent with false belief attribution when they watched videos involving actors who had been deceived. However, in subsequent work, Tomasello has argued that these apes were only tracking earlier patterns of knowledge, and not belief itself. Belief is tricky because it doesn’t just represent how things objectively are: what you believe can differ quite radically from what others believe, and from reality itself. Grasping the concept of belief involves an appreciation of the possibility of multiple conflicting perspectives on our shared reality.
Human babies also go through a stage of being able to track what others know before they can recognize beliefs falling short of knowledge (whether these are false beliefs, or beliefs that are true just by chance, as in lucky guesses). It is a very interesting question why our species ever developed the subtler ability to grasp mere beliefs. One suggestion advanced by Tomasello and others is that human beings need the concept of belief because we have a very special form of cooperation. Animals other than humans cooperate in projects that change features of the world, like building beaver dams. Human beings cooperate not just in doing things in the world, but also in building our understanding of the world through communication. Of course, other animals can communicate through signals, like the tail flash a white-tailed deer uses to indicate danger. But animal signaling is a one-way street: another deer can’t correct a tail-flasher who is getting it wrong, or even raise the question of whether it is making a mistake.
What is special about human signaling is that it is a cooperative back-and-forth process: we don’t just respond mechanically to signals the way white-tailed deer do. We humans can raise doubts and actively ask questions of each other, mostly sharing what we know helpfully, and sometimes spreading misconceptions. If we are going to work well with each other in this cooperative project, we need to recognize that other people can have subjective perspectives different from our own, perhaps falling short of reality. When should we trust others to be telling us something objectively correct? When do their signals tell us more about their subjective perspective than about the actual shape of the world? An ability to sort out knowledge from mere belief is crucial to the project of working together to figure out the nature of reality.